Geometrical exponents of contour loops on synthetic multifractal rough surfaces: multiplicative hierarchical cascade p model

S. Hosseinabadi (Department of Physics, Alzahra University, P.O. Box 1993891167, Tehran, Iran)
M. A. Rajabpour (SISSA and INFN, Sezione di Trieste, via Bonomea 265, 34136 Trieste, Italy)
M. Sadegh Movahed (Department of Physics, Shahid Beheshti University, 19839 Evin, G.C., Tehran, Iran; School of Astronomy, Institute for Studies in Theoretical Physics and Mathematics, P.O. Box 19395-5531, Tehran, Iran; The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, I-34013 Trieste, Italy)
S. M. Vaez Allaei (Department of Physics, University of Tehran, 14395-547, Tehran, Iran; The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, I-34013 Trieste, Italy)
In this paper, we study many geometrical properties of contour loops to characterize the morphology of synthetic multifractal rough surfaces, which are generated by multiplicative hierarchical cascading processes. To this end, two different classes of multifractal rough surfaces are numerically simulated. As the first group, singular measure multifractal rough surfaces are generated by using the p model. The smoothened multifractal rough surface is then simulated by convolving the first group with a so-called Hurst exponent, H*. The generalized multifractal dimension of isoheight lines (contours), D(q), the correlation exponent of contours, x_l, and the cumulative distributions of areas, ξ, and perimeters, η, are calculated for both synthetic multifractal rough surfaces. Our results show that for both mentioned classes, the hyperscaling relations for contour loops are the same as those of monofractal systems. In contrast to singular measure multifractal rough surfaces, H* plays a leading role in smoothened multifractal rough surfaces. All computed geometrical exponents for the first class depend not only on its Hurst exponent but also on the set of p values. But in spite of the multifractal nature of smoothened surfaces (second class), the corresponding geometrical exponents are controlled by H*, the same as what happens for monofractal rough surfaces.
DOI: 10.1103/physreve.85.031113
arXiv: 1107.5287 (https://arxiv.org/pdf/1107.5287v2.pdf)
(Dated: 17 Mar 2012)
I. INTRODUCTION

Random phenomena in nature ubiquitously generate fractal structures that show self-similar or self-affine properties [1-4]. When the fractal structure of a system is uniform and free of irregularities, the structure is monofractal. A monofractal system can be characterized by a single scaling law with one scaling exponent at all scales. For a self-affine surface or interface, this exponent is called the roughness or Hurst exponent, H. A surface with larger H appears locally smoother than a surface with smaller H [2,3]. In topics ranging from biology [5,6], surface science [7-10], turbulence [11-13], diffusion-limited aggregation [14], bacterial colony growth [15] and climate indicators [16] to cosmology [17], many surfaces and interfaces exhibit multifractal structure. A multifractal system can be considered as a combination of many different monofractal subsets [2,3]. Multifractality manifests itself in systems whose scaling properties differ in different regions of the system. Multifractals are described by an infinite set of scaling exponents h(q), where q can be any real number. This infinity of exponents makes the theoretical and numerical study of multifractal surfaces more involved than that of monofractal ones; changing even one of the h(q)'s can lead to a different feature in the system.
One of the important characteristics of multifractality is the singularity spectrum, f(α), which associates the Hausdorff dimension f(α) to the subset of the support of the measure µ where the Hölder exponent is α; in other words, f(α) = dim_H{x | µ(B_x(ǫ)) ∼ ǫ^α}, where B_x(ǫ) is an ǫ-box centered at x. A single scaling exponent can be determined for a monofractal structure by various methods [2,6,18-22]. For a multifractal feature, not only must a whole spectrum of exponents be computed, but different algorithms (power spectral analysis, distribution methods and so on) may give different results for the same multifractal case [23]. Thus, a better and more complete theoretical framework improves our understanding and provides deeper insight into observational multifractal rough surfaces. Recently, isoheight contour lines have been utilized to explore the topography of rough surfaces and have exhibited interesting capabilities [24-31]. A contour plot consists of closed non-intersecting lines in the plane that connect points of equal height. The fractal properties of the contour loops of monofractal rough surfaces can be described by the Hurst exponent alone [30,31]. This result has been confirmed in recent years in systems with quite different structures, both experimentally and numerically: the predicted relations were verified numerically in glassy interfaces and turbulence [32], in two-dimensional fractional Brownian motion [26], in KPZ surfaces [25] and in discrete scale-invariant rough surfaces [27], and experimentally by AFM analysis of WO(3) surfaces [24]. However, although there have been many studies concerning the contour lines of monofractal rough surfaces, there are neither theoretical nor numerical results about the contour lines of multifractal rough surfaces.
Because of the presence of numerous exponents, the theoretical study of multifractal surfaces is difficult. Moreover, in many previous methods the exponents determined by fractal analysis generally provide information about average global properties, whereas geometrical analysis gives information from point to point. J. Kondev et al. pointed out that geometrical characteristics can discriminate between monofractal rough surfaces that have similar power spectra [31,33]. Therefore, geometrical properties may offer a new opportunity to characterize multifractal surfaces. It is worth noting that because contour sets are the intersection of the surface with a horizontal plane at a particular height, and do not reflect the full properties of the fluctuations at all scales, it is not trivial that the geometrical properties of multifractal rough surfaces, based on this isoheight non-intersecting feature, behave in a multifractal manner as well. We therefore use a new approach to investigate these processes. In this paper, we investigate multifractal structures by means of contour loops, studying the multifractal properties of a particular kind of multifractal surface. Two different types of synthetic multifractal rough surfaces, namely singular measure and smoothened features, are generated; both types have a multifractal nature. Despite the complex nature of the model, the hyperscaling relation is satisfied for both categories. In addition, the contouring analysis shows that all geometrical exponents of the various smoothened multifractal rough surfaces are controlled by the corresponding so-called Hurst exponent, H*, just as happens in the monofractal case. For a singular measure multifractal rough surface, however, the geometrical exponents depend on the set of p values used to generate the underlying rough surface with the multiplicative cascade model.
The structure of the paper is as follows: in the next section we review multifractal rough surfaces; the hierarchical model used to generate the surfaces is also given there. The multifractal detrended fluctuation analysis in two dimensions, used to characterize the multifractal properties of rough surfaces, is explained in section III. In section IV, nonlinear scaling exponents of multifractal rough surfaces are introduced. Section V is devoted to numerical results for the scaling exponents of the contour loops of multifractal rough surfaces. In the last section we summarize our findings.

II. MULTIFRACTAL ROUGH SURFACE SYNTHESIS

Recently there has been increasing interest in the notion of multifractality because of its extensive applications in different areas such as complex systems and industrial and natural phenomena. Dozens of methods for the synthesis of multifractal measures or multifractal rough surfaces have been invented. One of the most common methods, which can be followed deterministically or stochastically, is the multiplicative cascading process [4,11,22,34]. Some of these synthesis methods are known as the random β model [12], the α model [35], log-stable models, log-infinitely divisible cascade models [36,37] and the p model [11]. They have been successfully applied in studies of rain in one dimension, clouds in two dimensions and landscapes in three dimensions, as well as in many other fields [36-39]. The p model was proposed to mimic the kinetic energy dissipation field in fully developed turbulence [11]. The so-called p model represents the spatial version of the weighted curdling feature and is known as a conservative cascade.
It is based on Richardson's picture of energy transfer from coarse to fine scales through the random splitting of eddies [40]. In this model there is no divergence of the corresponding moments, in contrast to the so-called hyperbolic behavior of the α model [11,41]. On the other hand, many scaling exponents of this model can be determined analytically; it is therefore a proper method to simulate synthetic multifractal processes in fields ranging from surface science and astronomy to high-energy physics and cosmology, e.g., QCD parton shower cascades and the cosmic microwave background radiation [42-44]. To simulate a synthetic one-dimensional data set in the context of the p model, consider an interval of size L and divide it into two parts of equal length. The left half carries the fraction 0 ≤ p ≤ 1 of a typical measure µ, while the right-hand segment is associated with the remaining fraction (1 − p). Increasing the resolution to 2^−n, the multiplicative process divides the measure in each part in the same way (see the upper panel of Fig. 1). To simulate a mock multifractal rough surface in two dimensions, one follows the same procedure: starting from a square, one breaks it into four subsquares of the same size. The associated measures for the cells at this step are p_1 µ for the upper right cell, p_2 µ for the upper left cell, p_3 µ for the lower right cell and p_4 µ for the lower left cell. Conservation of probability at each cascade step requires p_1 + p_2 + p_3 + p_4 = 1. This partitioning and redistribution process is repeated, and after many generations, say n, we obtain 2^n × 2^n cells of size l/L = 2^−n (see the lower panel of Fig. 1). In the stochastic approach, the fraction of measure for each sub-cell at an arbitrary generation is determined by a random variable A with a definite probability distribution function P(A).
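As a concrete illustration, the cascade just described can be sketched in a few lines of numpy. This is our own minimal sketch, not the authors' code; the four weight values below are illustrative, and each sub-cell draws its weight independently, following the stochastic rule of the text.

```python
import numpy as np

def p_model_2d(p, n, seed=None):
    """Stochastic 2D p-model cascade: at each generation every cell is
    split into four sub-cells, and each sub-cell draws its weight A
    independently from {p1, p2, p3, p4}."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)          # cascade weights, sum = 1
    mu = np.ones((1, 1))
    for _ in range(n):
        mu = np.kron(mu, np.ones((2, 2)))   # refine: every cell -> 2x2 sub-cells
        mu *= rng.choice(p, size=mu.shape)  # redistribute the measure
    return mu

# a 512 x 512 singular measure after n = 9 generations
mu = p_model_2d([0.05, 0.15, 0.30, 0.50], n=9, seed=1)
```

A conservative (microcanonical) variant would instead assign a random permutation of the four weights to each 2 × 2 block, so that the measure of every parent cell is conserved exactly.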
By redistributing the measure according to independent realizations of the random variable A at smaller scales, one can generate a random singular measure over a substrate of size L × L as

µ_n(r; l) = µ ∏_{i=1}^{n(l)} A_i(r), n(l) = log2(L/l) → ∞, (1)

where r is the coordinate of the underlying cell of size l. In this work, we rely on the stochastic version of the cascade p model to generate the synthetic two-dimensional multifractal rough surfaces (see Figs. 2 and 3). The probability distribution function for our approach is given by

P(A) = (1/4)[δ(A − A_1) + δ(A − A_2) + δ(A − A_3) + δ(A − A_4)], (2)

where

A_1 = p_1, A_2 = p_2, A_3 = p_3, A_4 = p_4. (3)

The so-called multifractal scaling exponent, τ(q), and the generalized Hurst exponent, h(q), are the quantities that represent the multifractal behavior of rough surfaces (see section III for more details). For the p-model cascade these exponents can be calculated explicitly. The scaling exponent τ(q) is defined via the partition function as

Z_q(l) = lim_{l→0} Σ_{i=1}^{n(l)} |P(A_i, l)|^q ∼ l^{τ(q)}. (4)

Using the value of P(A), e.g. for the binomial cascade model with P(A) = (1/2)[δ(A − p) + δ(A − (1 − p))], one finds

τ(q) = lim_{l→0} log(Z_q(l))/log(l) = (E − 1)(q − 1) − log2(p^q + (1 − p)^q), (5)

where E is the dimension of the geometric support, which for our rough surfaces is E = 2. For the generalized p model, the analytic expression of the multifractal scaling exponent in two dimensions is given by [45]

τ(q) = −log2(p_1^q + p_2^q + p_3^q + p_4^q). (6)

One can use the above theoretical expressions to obtain the most relevant quantities of the multifractal behavior and to check the reliability and robustness of the numerical methods. Recently, factorial moments, G-moments, correlation integrals, void probabilities, combinants and wavelet correlations have been used to examine many interesting features of multiplicative cascade processes [46], but some ambiguity remains in the properties of such processes that represent multifractal phenomena.
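Eq. (6) can be checked numerically by coarse-graining a simulated measure and fitting the scaling of the box partition function. The sketch below is our own (the weight set is illustrative); the loose tolerance reflects finite-size and single-realization fluctuations.

```python
import numpy as np

def tau_analytic(q, p):
    """Eq. (6): tau(q) = -log2(p1^q + p2^q + p3^q + p4^q)."""
    return -np.log2(np.sum(np.asarray(p, float) ** q))

def tau_numeric(mu, q):
    """Slope of log Z_q(l) versus log l, where Z_q(l) is the sum of the
    q-th powers of the normalized box measures at box side l."""
    logZ, logl = [], []
    box = mu / mu.sum()
    n = int(np.log2(mu.shape[0]))
    for k in range(1, n - 1):                    # box side l = 2**k pixels
        box = box[0::2] + box[1::2]              # merge rows pairwise
        box = box[:, 0::2] + box[:, 1::2]        # merge columns pairwise
        logZ.append(np.log(np.sum(box ** q)))
        logl.append(np.log(2.0 ** k / mu.shape[0]))
    return np.polyfit(logl, logZ, 1)[0]

rng = np.random.default_rng(7)
p = np.array([0.05, 0.15, 0.30, 0.50])
mu = np.ones((1, 1))
for _ in range(10):                              # 1024 x 1024 cascade
    mu = np.kron(mu, np.ones((2, 2))) * rng.choice(p, size=(mu.shape[0] * 2,) * 2)
# analytic value for q = 2 is about 1.454; the fit should land nearby
print(tau_analytic(2.0, p), tau_numeric(mu, 2.0))
```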
On the other hand, the sensitivity and accuracy of the results are method dependent; consequently, it is highly recommended to use various tools simultaneously in order to ensure the reliability of the results for the underlying multifractal rough features. Moreover, to relate experimental data to simulation, one generally requires more than one characterization [31,47]. In the next section, to investigate the multifractal properties of the simulated rough surfaces in two dimensions, we introduce the so-called multifractal detrended fluctuation analysis.

III. MULTIFRACTALITY OF SYNTHETIC ROUGH SURFACES

There are many different methods to determine the multiscaling properties of real as well as synthetic multifractal surfaces, such as spectral analysis [48], fluctuation analysis [49], detrended fluctuation analysis (DFA) [6,50,51], the wavelet transform modulus maxima (WTMM) [9,10,13,52,53] and discrete wavelets [54,55]. For real data sets in the presence of noise, the multifractal DFA (MF-DFA) algorithm gives very reliable results [34,50]. Since it does not require the modulus maxima procedure, this method is simpler than WTMM, although it involves a bit more effort in programming. In this work, we rely on the two-dimensional multifractal detrended fluctuation analysis (MF-DFA) to determine the spectrum of the generalized Hurst exponent, h(q); we then compare the results with the theoretical prediction to check the reliability of our simulation. Suppose that the height fluctuation of a two-dimensional rough surface is represented by H(r) at coordinate r = (i, j) with resolution ∆. The MF-DFA in two dimensions consists of the following steps [34]:

Step 1: Consider a two-dimensional array H(i, j), where i = 1, 2, ..., M and j = 1, 2, ..., N. Divide H(i, j) into M_s × N_s non-overlapping square segments of equal size s × s, where M_s = [M/s] and N_s = [N/s].
Each square segment is denoted by H_{ν,w} such that H_{ν,w}(i, j) = H(l_1 + i, l_2 + j) for 1 ≤ i, j ≤ s, where l_1 = (ν − 1)s and l_2 = (w − 1)s.

Step 2: For each non-overlapping segment, calculate the cumulative sum

Y_{ν,w}(i, j) = Σ_{k1=1}^{i} Σ_{k2=1}^{j} H_{ν,w}(k_1, k_2), (7)

where 1 ≤ i, j ≤ s.

Step 3: Calculate the local trend for each segment by a least-squares fit of the profile; linear, quadratic or higher-order polynomials can be used in the fitting procedure, e.g.

B_{ν,w}(i, j) = ai + bj + c, (8)
B_{ν,w}(i, j) = ai^2 + bj^2 + c. (9)

Then determine the variance for each segment as

D_{ν,w}(i, j) = Y_{ν,w}(i, j) − B_{ν,w}(i, j), (10)
F^2_{ν,w}(s) = (1/s^2) Σ_{i=1}^{s} Σ_{j=1}^{s} D^2_{ν,w}(i, j). (11)

A comparison of the results for different orders of DFA allows one to estimate the type of polynomial trends in the surface data.

Step 4: Average over all segments to obtain the qth-order fluctuation function

F_q(s) = { (1/(M_s × N_s)) Σ_{ν=1}^{M_s} Σ_{w=1}^{N_s} [F^2_{ν,w}(s)]^{q/2} }^{1/q}, (12)

where F_q(s) depends on the scale s for different values of q. It is easy to see that F_q(s) increases with increasing s. Note that F_q(s) depends on the order q; in principle, q can take any real value except zero. For q = 0, Eq. (12) becomes

F_0(s) = exp{ (1/(2 M_s × N_s)) Σ_{ν=1}^{M_s} Σ_{w=1}^{N_s} ln F^2_{ν,w}(s) }. (13)

For q = 2, the standard DFA in two dimensions is retrieved.

Step 5: Finally, investigate the scaling behavior of the fluctuation functions by analyzing log-log plots of F_q(s) versus s for each value of q,

F_q(s) ∼ s^{h(q)}. (14)

The Hurst exponent is given by

H ≡ h(q = 2) − 1. (15)

Using the standard multifractal formalism [50], we have

τ(q) = qh(q) − E. (16)

It has been shown that for very large scales, s > N/4, F_q(s) becomes statistically unreliable because the number of segments N_s entering the averaging procedure in step 4 becomes very small [34]. Thus, scales s > N/4 should be excluded from the fitting procedure for determining h(q).
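The five steps above can be condensed into a short routine. This is our own minimal sketch (first-order plane detrending only, Eq. (8)), not the authors' implementation; as a sanity check, for uncorrelated 2D noise the double cumulative sum gives h(2) ≈ 1, i.e. H = h(2) − 1 ≈ 0.

```python
import numpy as np

def mfdfa2d(H, q, scales):
    """Two-dimensional MF-DFA, steps 1-5, with linear detrending.
    Returns h(q), the fitted slope of log F_q(s) versus log s."""
    M, N = H.shape
    logF, logs = [], []
    for s in scales:
        Ms, Ns = M // s, N // s
        i, j = np.meshgrid(np.arange(s), np.arange(s), indexing="ij")
        X = np.column_stack([i.ravel(), j.ravel(), np.ones(s * s)])
        F2 = []
        for v in range(Ms):
            for w in range(Ns):
                seg = H[v*s:(v+1)*s, w*s:(w+1)*s]
                Y = seg.cumsum(axis=0).cumsum(axis=1)       # step 2
                coef, *_ = np.linalg.lstsq(X, Y.ravel(), rcond=None)
                D = Y.ravel() - X @ coef                    # step 3
                F2.append(np.mean(D ** 2))
        F2 = np.asarray(F2)
        if q == 0:
            Fq = np.exp(0.5 * np.mean(np.log(F2)))          # step 4, q = 0
        else:
            Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)      # step 4
        logF.append(np.log(Fq))
        logs.append(np.log(s))
    return np.polyfit(logs, logF, 1)[0]                     # step 5

h2 = mfdfa2d(np.random.default_rng(0).standard_normal((128, 128)), 2, [8, 16, 32])
```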
One should also be careful about systematic deviations from the scaling behavior in Eq. (12) that can occur at small scales, s < 10. The singularity spectrum, f(α), of a multifractal rough surface is given by the Legendre transformation of τ(q) as

f(α) = qα − τ(q), (17)

where α = ∂τ(q)/∂q. It is well known that for a multifractal surface, various parts of the feature are characterized by different values of α, giving a set of Hölder exponents instead of a single α. The interval of the Hölder spectrum, α ∈ [α_min, α_max], is determined by [56,57]

α_min = lim_{q→+∞} ∂τ(q)/∂q, (18)
α_max = lim_{q→−∞} ∂τ(q)/∂q. (19)

To evaluate the statistical errors of the numerical calculations, we introduce the posterior probability distribution function in terms of a likelihood analysis. To this end, let the measurements and the model parameters be denoted by {X} and {Θ}, respectively. The conditional probability of the model parameters for a given observation (the posterior) is

P(Θ|X) = L(X|Θ)P(Θ) / ∫ L(X|Θ)P(Θ) dΘ. (20)

Here L(X|Θ) and P(Θ) are the likelihood and the prior distribution, respectively; the prior distribution contains all initial constraints on the model parameters. Based on the central limit theorem, the likelihood function can be written as a product of Gaussian functions:

ln L(X|Θ) ∼ −χ^2(Θ)/2, (21)

with

χ^2(h(q)) = Σ_s [F_obs.(s) − F_the.(s; h(q))]^2 / σ^2_obs.(s), (22)

where F_obs.(s) is computed from Eqs. (12) and (13), F_the.(s; h(q)) is the fluctuation function given by Eq. (14), and σ_obs.(s) is the observational error. Using the Fisher matrix, one can evaluate the error bar at the 1σ confidence interval of h(q) [58]:

F(q) ≡ −∂^2 ln L / ∂h(q)^2, (23)
σ(q) ≃ 1/√F(q). (24)

Finally, we report the best value of the scaling exponent at the 1σ confidence interval as h(q) ± σ(q).
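For the analytic τ(q) of Eq. (6), the Legendre transform of Eq. (17) can be evaluated directly by finite differences. A sketch with illustrative weights (our own, not the authors' code); the maximum of f(α) must equal the support dimension E = 2, reached at q = 0.

```python
import numpy as np

def tau(q, p):
    """Eq. (6) evaluated on an array of q values."""
    p = np.asarray(p, dtype=float)
    return -np.log2(np.sum(p[None, :] ** q[:, None], axis=1))

def singularity_spectrum(p, qmin=-10.0, qmax=10.0, nq=2001):
    """f(alpha) = q*alpha - tau(q) with alpha = d tau / d q, Eqs. (17)-(19)."""
    q = np.linspace(qmin, qmax, nq)
    t = tau(q, p)
    alpha = np.gradient(t, q)          # Hölder exponents alpha(q)
    return alpha, q * alpha - t        # (alpha, f(alpha))

alpha, f = singularity_spectrum([0.05, 0.15, 0.30, 0.50])
# f.max() = 2 (support dimension); alpha decreases monotonically with q
```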
Using the method described in the previous section, we simulated multifractal rough surfaces and checked their multifractal nature by using the spectrum of h(q) for the sets of p values reported in Table I. Throughout this paper, the subindex i ∈ [1, 12] of each Hurst exponent H_i corresponds to a given set of p values reported in Table I. In addition, the singularity spectrum of a typical simulated multifractal rough surface is shown in the lower panel of Fig. 4. The q dependence of h(q), as well as the extended range of the singularity spectrum, demonstrates the multifractal nature of the synthetic rough surfaces. The theoretical predictions for τ(q), h(q) and f(α), shown by solid lines in the corresponding plots, are given by Eqs. (6), (16) and (17), respectively; there is good consistency between the theoretical predictions and the computed values. Before going further, it is worth mentioning that in the cascade p model, various sets of p values with the same h(q = 2) have, in principle, different h(q) spectra. To show this point, we fixed the value of τ(q = 2) in Eq. (6); by choosing, e.g., p_1 and p_2, one can compute the remaining p values according to the normalization of the p's. In Fig. 5 we show MF-DFA results for various sets of p values giving the same h(q = 2) exponent. Consequently, for characterizing the geometrical properties of the underlying surfaces, one must take into account the full spectrum of generalized Hurst exponents. It must also be pointed out that the generated surfaces have some discontinuities (see Fig. 2). A proper way to make them smooth is the fractionally integrated singular cascade (FISC) method [10]. In this method, the multifractal measure is transformed into a smoother multifractal rough surface by filtering the singular multifractal measure µ(r) of Eq. (1) in Fourier space as

H(r) = µ(r) ⊗ |r|^{−(1−H*)}, (25)

where ⊗ is the convolution operator and H* ∈ (0, 1) is the order of smoothness (see the right panel of Fig.
2 and the lower panel of Fig. 3). In this case τ_f(q) reads

τ_f(q) = τ(q) + qH*, (26)

where τ(q) is given by Eq. (6). Using the correlation function, C(|r|) ∼ |r|^{−γ}, and its Fourier transform, one can derive the power spectrum scaling exponent β of the singular as well as the smoothened synthetic multifractal surfaces. To this end, we demand the scaling behavior of the power spectrum to be

S(k) ∼ |k|^{−β}, (27)

where k = (k_x, k_y), k_x = 2πi/(∆ × N), k_y = 2πj/(∆ × N), and (i, j) run from 1 to N = L/∆ (the system size in pixels). The power spectrum scaling exponent is then given by [10]

β = 1 + 2H* − log2(p^2 + (1 − p)^2). (28)

For reference, Table II collects the correlation and power spectrum exponents of stochastic processes in one and two dimensions:

Exponent | 1D-fGn | 1D-fBm | 2D-Cascade | 2D-fBm
γ | 2 − 2H | −2H | 1 − 2H | −1 − 2H
β | 2H − 1 | 2H + 1 | 2H | 2H + 2

Figure 6 shows one-dimensional profiles obtained along a typical horizontal cut of Fig. 2 for singular and smoothened multifractal rough surfaces; the lower panels of Fig. 6 show the power spectra of the simulated rough surfaces. The convolution does not change the multifractal nature of the singular measure (see Fig. 7): the synthetic smoothened surface remains multifractal.

IV. SCALING EXPONENTS OF CONTOUR LOOPS

Cutting the rough surface at a given height produces a level set whose closed non-intersecting lines are recognized as contour loops. The contour loop ensemble corresponds to the contour loops of various level sets. In Fig. 2 we plot a set of contour loops at some typical levels for a singular multifractal rough surface and the corresponding convolved surface with H* = 0.700. The loop length s is defined as the total number of unit cells constituting a contour loop, multiplied by the lattice constant ∆. The radius of a typical loop is represented by R: the side of the smallest box that completely encloses the loop. For a monofractal surface, these loops are usually fractal, and their size distribution is characterized by a few scaling functions and scaling exponents.
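The exponent β of Eq. (27) is commonly estimated from a radially averaged FFT power spectrum. The routine below is our own rough sketch of such an estimator (with logarithmic k-bins, an assumption of ours); as a sanity check, uncorrelated noise has a flat spectrum, β ≈ 0.

```python
import numpy as np

def beta_exponent(H, nbins=16):
    """Fit S(k) ~ |k|^{-beta} to the radially averaged 2D power spectrum."""
    n = H.shape[0]
    S = (np.abs(np.fft.fft2(H)) ** 2).ravel()
    k = np.fft.fftfreq(n)
    kk = np.hypot(*np.meshgrid(k, k, indexing="ij")).ravel()
    edges = np.logspace(np.log10(1.5 / n), np.log10(0.5), nbins + 1)
    which = np.digitize(kk, edges)           # bin index 0 excludes the DC mode
    kc, Sk = [], []
    for b in range(1, nbins + 1):
        sel = which == b
        if sel.any():
            kc.append(kk[sel].mean())
            Sk.append(S[sel].mean())
    return -np.polyfit(np.log(kc), np.log(Sk), 1)[0]

beta = beta_exponent(np.random.default_rng(3).standard_normal((256, 256)))
```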
For example, the contour line properties can be described by the loop correlation function G(r), which measures the probability that two points separated by the distance r in the plane lie on the same contour. Rotational invariance of the contour lines forces G(r) to depend only on r = |r|. For contours on a lattice with grid size ∆, in the limit r ≫ ∆, this function has the scaling behavior

G(r) ∼ r^{−2x_l}, (29)

where x_l is the loop correlation exponent. It has been shown numerically [27,30,31] that for all known monofractal rough surfaces this exponent is superuniversal and equal to 1/2. A key consequence of this result is that the contour loops of such surfaces, with perimeter s and radius R, are self-similar. When these lines are scale invariant, one can determine the fractal dimension as the exponent in the perimeter-radius relation. The relation between the contour length s and its radius of gyration R is

s(R) ∼ R^{D_f}, (30)

where D_f is the fractal dimension and R is defined by R^2 = (1/N) Σ_{i=1}^{N} [(x_i − x_c)^2 + (y_i − y_c)^2], with x_c = (1/N) Σ_{i=1}^{N} x_i and y_c = (1/N) Σ_{i=1}^{N} y_i the center-of-mass coordinates. D_f is the fractal dimension of one contour; for monofractal rough surfaces it is given by D_f = (3 − H)/2 [31]. Depending on what feature of the multifractal rough surface is under investigation, one can obtain various types of fractal dimensions. In this paper we consider the fractal dimension of an isoheight line, D_f, and the fractal dimension of the whole level set, d. The generalized fractal dimension can be expressed by means of the partition function of the underlying feature, which here is the contours, as

D(q) = lim_{l→0} (1/(q − 1)) log(Z_q(l))/log(l), (31)

where l is the size of the cells used to cover the domain, whose minimum value equals the grid size ∆, and Z_q(l) is the partition function defined in Eq.
(4), but here it should be constructed using contour loops instead of the height function, and q can be any real number. It is easy to show that D(q = 0) = D_f and that D(q = 1) corresponds to the so-called entropy of the underlying system [45]. For a given self-similar loop ensemble, one can define the probability distribution of contour lengths, P(s), a measure of the total number of loops with length s, which follows the power law

P(s) ∼ s^{−η}, (32)

where η is a scaling exponent. Another interesting quantity with a scaling property is the cumulative distribution of the number of contours with area greater than A, which has the form

P_>(A) ∼ A^{−ξ/2}. (33)

For monofractal rough surfaces, ξ = 2 − H. Using the scaling properties of monofractal surfaces, it was shown that the exponents D_f, η, ξ and x_l satisfy the following hyperscaling relations [30]:

D_f = ξ/(η − 1), (34)
D_f = (2x_l − 2)/(η − 3). (35)

Using the above relations it is easy to obtain the relation between η and the Hurst exponent H. Before closing this section, we summarize all of the exponents introduced here in Table III:

Exponent | Relation | Description
x_l | G(r) ∼ r^{−2x_l} | Loop correlation exponent
D_f | s(R) ∼ R^{D_f} | Fractal dimension of a contour loop
D(q) | Eq. (31) | Multifractal dimension
d | N(l) ∼ l^{−d} | Fractal dimension of all contour sets
η | P(s) ∼ s^{−η} | Length distribution exponent
ξ | P_>(A) ∼ A^{−ξ/2} | Area cumulative exponent

In the next section, we calculate all of these exponents by different numerical methods for singular as well as smoothened multifractal rough surfaces, and we examine the validity of the hyperscaling relations in this context.

V. NUMERICAL RESULTS

In order to examine the geometrical exponents of the contour loops listed in Table III for synthetic multifractal rough surfaces, we generated multifractal rough surfaces with different h(q = 2) values using the typical measures reported in Table I.
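Before turning to the numerical exponents, the monofractal baseline implied by Eqs. (34)-(35) can be tabulated for reference. With x_l = 1/2, D_f = (3 − H)/2 and ξ = 2 − H, the two hyperscaling relations close algebraically for every H; a quick consistency check:

```python
def monofractal_exponents(H):
    """Contour-loop exponents of a monofractal surface with Hurst exponent H."""
    Df = (3.0 - H) / 2.0        # fractal dimension of one loop
    xi = 2.0 - H                # area cumulative exponent, Eq. (33)
    eta = 1.0 + xi / Df         # length distribution exponent, from Eq. (34)
    xl = 0.5                    # superuniversal loop correlation exponent
    return Df, xi, eta, xl

for H in (0.0, 0.3, 0.608, 0.9):
    Df, xi, eta, xl = monofractal_exponents(H)
    # Eq. (35) holds identically for every H:
    assert abs(Df - (2 * xl - 2) / (eta - 3)) < 1e-12
```

For H = 0 this gives the known values D_f = 3/2 and η = 7/3.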
We generated 100 realizations of each surface, with sizes ranging from 2048 × 2048 to 4096 × 4096. To extract the contour loops of the mock multifractal rough surfaces at the mean height, H_0, we used two different methods, a contouring algorithm and the Hoshen-Kopelman algorithm [26]. According to our results, the two methods give almost the same geometrical exponents. In the following subsections we present our numerical results for the exponents introduced in the preceding sections.

A. Loop correlation function exponent

The loop correlation function exponent x_l is the most central exponent of monofractal rough surfaces. It is independent of H and equal to 1/2. This result has also been proven for H = 0 via the exactly solvable statistical mechanics model for contours, equivalent to the critical O(2) loop model on the honeycomb lattice [31,59]. To find the correlation function from a given loop ensemble of the multifractal rough surfaces, we followed the algorithm described in Ref. [31]. We calculated the loop correlation function G(r) for our multifractal rough surfaces (with system size 2048 × 2048; the averaging was done over 10 realizations). The log-log diagrams of G(r) r^{2x_l} versus r for different sets of p values (H_i = h_i(q = 2) − 1 for some i ∈ [1, 12]) are shown in Fig. 8; each set corresponds to a synthetic multifractal rough surface generated according to the algorithm presented in section II. Our results demonstrate that the x_l exponent depends not only on the value of the Hurst exponent but also on the particular set of p values (see the upper panel of Fig. 8). In other words, for the sets of p values reported in Table I, the correlation exponents, x_l, differ completely. On the other hand, at the level of our numerical accuracy, as shown in the lower panel of Fig. 8, the correlation exponent of the smoothened multifractal surfaces takes the same value as reported for monofractal rough surfaces, namely x_l = 1/2.
Fractal dimension

To calculate the fractal dimension of a contour loop, we have computed the perimeters and radii of gyration of different contour loops. Figure 9 shows the log-log plot of ⟨s⟩(R) versus R for synthetic multifractal rough surfaces with a typical value of the Hurst exponent, H_4 = 0.608. There are two distinct regions with different slopes in the diagram. The first region corresponds to a large number of small loops with radius smaller than one (R < 1), with D_f = 1.00 ± 0.01. This is not a relevant phenomenon; it comes from the contouring algorithm, which produces many contour loops around very small clusters (usually made of a single cell). In the second region (R > 1) the slope increases to 1.43 ± 0.02 and continues to follow the scaling behavior up to very large sizes. For the mono-fractal case, the slopes for different Hurst exponents follow the relation D_f = (3 − H)/2 [31]. Our computation for various values of the Hurst exponent is shown in the upper panel of Fig. 10 (see also Table IV). At the 1σ confidence interval, all slopes are the same. On the contrary, for the contour lines of the convolved rough surfaces with arbitrary H*'s, the fractal dimension of a contour line follows the formula of a mono-fractal surface with H = H*, namely D_f = (3 − H*)/2. It is quite interesting that these results are completely independent of the p values (lower panel of Fig. 10). This simply means that the fractal dimension of the contour loops of the singular rough surfaces does not change with respect to h(q = 2). In other words, in contrast to the mono-fractal case, h(q = 2) alone cannot represent the properties of the underlying singular multifractal rough surface. We also calculated the fractal dimension using the partition function introduced in Eqs. (4) and (31). Figure 11 shows D(q) as a function of q. The q dependence of these results confirms that the contour loops of the synthetic singular and smoothened multifractal rough surfaces are multifractal. For q = 0, at the 68% confidence interval, D(q = 0) = 1.46 ± 0.05.
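The slope read-off for D_f described above amounts to a least-squares fit on log-log data. The sketch below illustrates the procedure on mock ⟨s⟩(R) values with an assumed D_f = 1.43 (an illustrative choice, not the paper's data):

```python
import numpy as np

# Sketch of reading D_f off the scaling region <s>(R) ~ R^{D_f} by a
# least-squares fit of log<s> against log R. The data are synthetic
# values with D_f = 1.43 plus small multiplicative noise.
rng = np.random.default_rng(0)
R = np.logspace(0.1, 3, 50)                    # radii in the R > 1 region
D_f_true = 1.43
s_mean = R ** D_f_true * (1.0 + 0.01 * rng.standard_normal(R.size))

slope, intercept = np.polyfit(np.log(R), np.log(s_mean), 1)
# slope recovers D_f_true within a fraction of a percent here
```

In practice one restricts the fit to the scaling window (here R > 1), since the small-loop region R < 1 carries the spurious slope of 1 discussed in the text.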
This is also in agreement with the value determined from the scaling behavior of the contour sizes. In addition, as expected, this diagram demonstrates that the isoheight contour loops of the underlying simulated multifractal rough surfaces indeed behave multifractally. As mentioned, the fractal dimension of all the contours, d, differs from the fractal dimension of a single contour loop, D_f. For mono-fractal rough surfaces, the fractal dimension of the contour set is given by d = 2 − H. For the smoothened multifractal rough surfaces introduced by Eq. (25), the fractal dimension of the contour set is d = 2 − H* [13]. We have calculated the fractal dimension of the contour set using the box-counting method. As before, we used a least-squares fit (Eq. (21)) to determine the slope of the log-log diagram of the number of segments needed to cover the underlying feature, N(l), versus the length scale l, for different Hurst exponents. To obtain the best-fit value of the slope, as well as its error, we divided the data into different ranges and determined the slope by the least-squares method. To do so, following the likelihood function (Eq. (21)), we define χ² as

χ²(d) = Σ_{i=1}^{N} [N(l_i) − N_th(l_i; d)]² / σ(l_i)²,   (36)

where N is the number of partitions, namely N = L/l_N, N_th(l_i; d) ∼ l_i^(−d), and σ²(l_i) is the variance of the data in the corresponding range. Finally, we determined the minimum χ² and the best slope for the data. Figure 12 corresponds to synthetic smoothened multifractal rough surfaces. In addition, we checked whether the result associated with the smoothened rough surfaces depends on the particular set of p values corresponding to the same value of h(q = 2). Our findings confirm that d does not depend on the different sets of p values. However, for the singular measure, d depends on the value of H and even on the p's used in the cascade algorithm. It shows no regular behavior with respect to h(q = 2). Moreover, for various sets of p values giving the same value of h(q = 2), one finds different values for the fractal dimension of the full contour set.
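The box-counting step can be sketched as follows. The test set (points on a straight segment, which should give d close to 1) is a hypothetical example, and for brevity the χ² weighting of Eq. (36) is replaced here by a plain unweighted least-squares fit:

```python
import numpy as np

# Box counting: cover the point set with boxes of size l, count the
# occupied boxes N(l), and fit N(l) ~ l^{-d} on log-log axes. The
# chi-square weighting of Eq. (36) is simplified to an unweighted fit.
def box_counting_dimension(points, scales):
    counts = []
    for l in scales:
        occupied = np.unique(np.floor(points / l).astype(int), axis=0)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

# sanity check on a hypothetical set: points on a line segment (d ~ 1)
t = np.linspace(0.0, 1.0, 5000)
line = np.column_stack([t, 0.5 * t])
d_line = box_counting_dimension(line, np.array([0.01, 0.02, 0.04, 0.08]))
```

For a contour set extracted from a surface, `points` would instead hold the coordinates of all contour pixels, and the scales would span the scaling window used in the fit.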
This is quite surprising because for the singular measure multifractal surfaces we have H* = 0 and, therefore, if the formula d = 2 − H* were correct in this regime, we should have d = 2 for all the different h(q = 2)'s. We are not aware of any theoretical argument that can explain this phenomenon.

C. Cumulative distribution of areas

To calculate the exponent ξ, we have computed P_>(A) A^(ξ/2) as a function of the area of the contour loops. In Fig. 13 and Table IV we show the results for the various values of the Hurst exponent reported in Table I; averaging was done over 100 realizations. The results are quite different from what we expect for mono-fractal rough surfaces, for which ξ = 2 − H. It must be pointed out that for synthetic singular multifractal rough surfaces ξ decreases with increasing H, the same as for mono-fractal rough surfaces. In addition, ξ not only depends on h(q = 2) but is also affected by the other values of h(q). This finding is due to the multifractal nature of the singular measure rough surface. The same computation for the smoothened multifractal rough surfaces is shown in the lower panel of Fig. 13. These results confirm that the exponent is controlled by H*, and that ξ is given by the same equation as for mono-fractal rough surfaces.

D. Probability distribution of contour lengths

A final remark concerns the probability distribution of contour lengths. To this end, we investigated the logarithmic diagram of P(s) s^(η−1) versus s. We have depicted the results for the synthetic singular as well as smoothened multifractal surfaces for various values of h(q = 2). For the smoothened multifractal rough surfaces, the exponents again follow the behavior of mono-fractal surfaces (see Fig. 14). In spite of the huge difference between the geometrical exponents of the contour loops of mono-fractal rough surfaces and those of singular multifractal rough surfaces, the hyperscaling relations ξ/D_f = η − 1 and D_f = (2x_l − 2)/(η − 3) are valid up to our numerical accuracy (see Tables IV and V). The important ingredients in obtaining these hyperscaling relations are the power-law forms of P(s) and P_>(A). The second hyperscaling relation comes from an equality in which both sides are proportional to the mean length of that portion of the contours passing through the origin which lies within a radius R of the origin [30].

VI.
CONCLUSION

In this paper we have studied the contour lines of particular multifractal rough surfaces, namely the so-called multiplicative hierarchical cascade p model. Utilizing a stochastic cascade method [4, 34], singular measure (original) and smoothened (convolved) multifractal rough surfaces with different Hurst exponents were generated. The h(q) spectra of these two-dimensional surfaces were determined by the MF-DFA method [34]. Then, using two different algorithms, we generated the contour loops of these systems. Many different geometrical exponents, such as the fractal dimension of a contour loop, D_f, the fractal dimension of the contour set, d, the cumulative distributions of perimeters and areas, and the correlation exponent, x_l, were calculated for the singular and smoothened multifractal surfaces by means of different methods. We summarize the most important results of this study as follows. Our results confirmed that the exponent of the loop correlation function, x_l, for the multifractal singular measure depends on the p values. On the contrary, for multifractal smoothened surfaces this exponent behaves the same as for mono-fractal rough surfaces (see Fig. 8). The scaling exponent of the size of the contours as a function of the radius, representing the fractal dimension, D_f, is similar for the various singular multifractal rough surfaces, whereas the relation between D_f and H* for convolved multifractal surfaces is the same as for mono-fractal surfaces. Nevertheless, the contour loops have a multifractal nature (see Figs. 10 and 11). The exponent of the cumulative distribution of areas, ξ, for the singular measure has a multifractal nature, but for the smoothened surfaces this quantity is controlled by H* according to ξ = 2 − H* and is completely independent of the p values (see Fig. 13). Consequently, in the case of singular measure surfaces, all of the exponents show significant deviations from the well-known formulas for mono-fractal rough surfaces.
They depend on the generalized Hurst exponents h(q), whereas for convolved multifractal surfaces all geometrical exponents are controlled by H*, as in a mono-fractal system. We emphasize that, interestingly, the hyperscaling relations ξ/D_f = η − 1 and D_f = (2x_l − 2)/(η − 3) are valid at the 1σ confidence interval for both singular and smoothened multifractal rough surfaces (see Tables IV and V). In the latter system, which is labeled by H*, many relevant properties are thus controlled by the few relations established for mono-fractal cases. Although both singular and smoothened multifractal surfaces have a multifractal nature, according to the geometrical analysis they belong to different classes, which is a non-trivial result. Table VI summarizes the most important results of this paper. Finally, to make this study more complete, it would be useful to extend this approach to rough surfaces simulated by other methods and to examine their hyperscaling relations. In addition, there exist methods, such as n-point statistics, to distinguish between various multiplicative cascade prescriptions [60].

FIG. 1: Upper panel: Different steps of generating a multifractal rough surface in one dimension. Lower panel: The same steps for a multifractal rough surface in two dimensions [11].

FIG. 2: Left: Contour plot at some typical levels of a singular multifractal rough surface generated by the binomial cascade multifractal method with p = 0.22 (H = 0.803). The right panel indicates the contour lines of the same surface convolved with H* = 0.700. The system size is 256 × 256.

FIG. 3: Upper panel: A part of the height fluctuations of the singular measure mentioned in Fig. 2. Lower panel: The same surface convolved with H* = 0.700.

TABLE I: The p values used for the construction of surfaces with various Hurst exponents, H_i = h_i(q = 2) − 1. The subindex i ∈ [1, 12] of H_i represents the label of the different sets of p values.
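The hierarchical construction sketched in Fig. 1 can be illustrated as follows. This is not the authors' code: it is a minimal sketch of a 2D multiplicative binomial cascade of the p-model type, in which every cell is refined into 2 × 2 sub-cells and multiplied by a factor drawn from {p, 1 − p}; the precise partition rule used in the paper may differ, and p = 0.22 simply follows the example of Fig. 2:

```python
import numpy as np

# Illustrative 2D multiplicative binomial cascade (p-model sketch):
# at every step the grid is refined by a factor of 2 in each direction
# and each new cell is multiplied by p or 1-p at random.
def cascade_2d(p, n_steps, rng):
    field = np.ones((1, 1))
    for _ in range(n_steps):
        field = np.kron(field, np.ones((2, 2)))       # refine the grid
        field = field * rng.choice([p, 1.0 - p], size=field.shape)
    return field

rng = np.random.default_rng(1)
measure = cascade_2d(0.22, 8, rng)   # a 256 x 256 singular measure
```

Each final cell's value is the product of the factors of its ancestors over all hierarchy levels, which is what produces the strongly intermittent, singular measure that is subsequently convolved with a kernel of exponent H* to obtain the smoothened surfaces.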
Figure 4 shows the generalized Hurst exponent and τ(q) as functions of q for the various measure sets reported in Table I.

FIG. 5: (Color online) The multifractal spectrum of surfaces produced by different sets of p values but with the same h(q = 2), up to our numerical precision.

FIG. 6: (Color online) Upper panel: Profile of singular (left) and smoothened (right) multifractal rough surfaces along a typical horizontal cut in Fig. 2. Lower panel: Spectral density of the mentioned mock rough surfaces. The solid lines in the lower panel correspond to a power-law fitting function, and the symbols are given by numerical calculation. Here we took H* = 0.700.

FIG. 7: (Color online) Generalized Hurst exponent of the singular measure for H_9 = 0.802 (square symbols) and of the same measure convolved with H* = 0.700 (circle symbols). The solid lines are from the theory.

IV. GEOMETRICAL EXPONENTS OF CONTOUR LOOPS

For a given multifractal rough surface with height H(x), a level set H(x) = H_0 for different values of H_0 consists of many closed non-intersecting loops. These

FIG. 8: (Color online) Log-log diagram of r^(2x_l) G(r) versus r for different Hurst exponents. The upper panel corresponds to the singular measure with the sets of p values reported in Table I. The lower panel indicates the loop correlation function for the smoothened multifractal surface for various H*'s. In these figures we shifted the y axis vertically. The system size is 4096 × 4096.

FIG. 9: (Color online) The log-log plot of ⟨s⟩(R) versus R for a synthetic multifractal singular rough surface with H_4 = 0.608.
FIG. 10: (Color online) Upper panel: Log-log plot of ⟨s⟩(R) versus R for singular multifractal rough surfaces for the various sets of p values reported in Table I. Lower panel: The same diagram for smoothened synthetic multifractal rough surfaces. The sample size is 4096 × 4096 and the ensemble average was done over 100 realizations. For clarity, we shifted the values of ⟨s⟩ vertically for the different multifractal rough surfaces.

FIG. 12: (Color online) Fractal dimension of all contours of the smoothened multifractal surfaces as a function of H*. The solid line corresponds to a linear fitting function.

FIG. 4: (Color online) Diagrams of h(q) (upper panel) and τ(q) (middle panel) for different surfaces. We have distinguished the different surfaces by their H_i = h_i(q = 2) − 1 coming from Table I. The subindex i ∈ [1, 12] of each H_i (Hurst exponent) throughout this paper corresponds to a given set of p values reported in Table I. The lower panel corresponds to the singularity spectrum of a typical multifractal rough surface with H_4 = 0.608. In all diagrams, symbols and solid lines correspond to results given by numerical calculation and the theoretical formula, respectively.

Here, e.g., for determining h(q) we have {X} : {F_q(s)} as the observations and {Θ} : {h(q)} as the free parameter to be determined.

TABLE II: The most relevant exponents concerning stochastic processes in one and two dimensions.

TABLE III: The relevant exponents introduced in this paper to characterize synthetic multifractal rough surfaces.
Fig. 8 legend (fitted values), upper panel: 2x_l = 1.30 ± 0.03 (H_4 = 0.608), 1.70 ± 0.03 (H_5 = 0.608), 1.53 ± 0.05 (H_6 = 0.608), 1.59 ± 0.03 (H_10 = 0.697), 1.30 ± 0.03 (H_11 = 0.806), 1.28 ± 0.03 (H_12 = 0.906); lower panel: 2x_l = 1.00 ± 0.03 for H* = 0.400, 0.500 and 0.700.

FIG. 13: (Color online) Upper panel: The cumulative distributions of the areas of the contour loops as a function of the area for the singular multifractal rough surfaces. The corresponding sets of p values are given in Table I. Lower panel: The same distribution for the smoothened multifractal surfaces. For clarity, we shifted the values of the y axis vertically in both diagrams.

Fig. 13 legend (fitted values), upper panel: ξ/2 = 1.15 ± 0.03 (H_2 = 0.404), 1.08 ± 0.03 (H_3 = 0.504), 1.02 ± 0.03 (H_4 = 0.608), 0.95 ± 0.03 (H_8 = 0.706); lower panel: ξ/2 = 0.81 ± 0.02 (H* = 0.400), 0.70 ± 0.02 (H* = 0.600), 0.66 ± 0.02 (H* = 0.700).

FIG. 14: (Color online) Upper panel: The perimeter distribution exponent for different sets of p values of the singular measure. Lower panel: The same quantity for the smoothened multifractal rough surfaces.
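The exponent read-off underlying Fig. 13 can be sketched on synthetic data. The loop areas below are hypothetical Pareto numbers generated with an assumed ξ = 1.4; on log-log axes the empirical cumulative distribution P_>(A) then has slope −ξ/2:

```python
import numpy as np

# Sketch of reading xi off the cumulative area distribution
# P_>(A) ~ A^{-xi/2}: fit the log-log slope of the empirical survival
# function over a scaling window. Areas are synthetic Pareto samples.
rng = np.random.default_rng(2)
xi_true = 1.4
u = rng.uniform(size=100_000)
A = u ** (-2.0 / xi_true)                 # so that P_>(A) = A^{-xi/2}

A_sorted = np.sort(A)
P_gt = 1.0 - np.arange(1, A.size + 1) / A.size   # empirical P_>(A)
mask = (A_sorted > 10.0) & (P_gt > 1e-3)         # scaling window
slope, _ = np.polyfit(np.log(A_sorted[mask]), np.log(P_gt[mask]), 1)
xi_est = -2.0 * slope
```

The same rank-plot construction applies to the length distribution P(s) ∼ s^(−η) of Fig. 14, with the slope giving −(η − 1) for the cumulative version.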
The values of the y axis are shifted vertically.

Fig. 14 legend (fitted values), upper panel: η − 1 = 1.60 ± 0.03 (H_2 = 0.404), 1.45 ± 0.03 (H_4 = 0.608), 1.35 ± 0.03 (H_8 = 0.706), 1.27 ± 0.03 (H_9 = 0.802); lower panel: η − 1 = 1.22 ± 0.03 (H* = 0.500), 1.17 ± 0.02 (H* = 0.600), 1.12 ± 0.02 (H* = 0.700).

TABLE IV: Different geometrical exponents of the contour loops extracted from surfaces with the different sets of p values reported in Table I. These values depend completely on the p values.

H             η             D_f           ξ             2x_l
H_1 = 0.305   2.67 ± 0.03   1.43 ± 0.04   2.44 ± 0.06   1.60 ± 0.10
H_2 = 0.404   2.60 ± 0.03   1.41 ± 0.04   2.30 ± 0.06   1.49 ± 0.10
H_3 = 0.504   2.50 ± 0.03   1.42 ± 0.04   2.16 ± 0.06   1.25 ± 0.05
H_4 = 0.608   2.45 ± 0.02   1.42 ± 0.04   2.04 ± 0.06   1.30 ± 0.03
H_5 = 0.608   2.74 ± 0.02   1.43 ± 0.04   2.50 ± 0.06   1.70 ± 0.03
H_6 = 0.608   2.64 ± 0.02   1.42 ± 0.04   2.31 ± 0.06   1.53 ± 0.03
H_8 = 0.706   2.35 ± 0.02   1.43 ± 0.04   1.90 ± 0.06   1.12 ± 0.03
H_9 = 0.802   2.27 ± 0.02   1.44 ± 0.04   1.80 ± 0.06   1.02 ± 0.03

TABLE V: Verification of the two basic hyperscaling relations for synthetic singular measure multifractal rough surfaces.

H             η − 1         ξ/D_f         3D_f + 2x_l   D_f η + 2
H_1 = 0.305   1.67 ± 0.03   1.71 ± 0.06   5.89 ± 0.16   5.82 ± 0.12
H_2 = 0.404   1.60 ± 0.03   1.63 ± 0.06   5.72 ± 0.16   5.67 ± 0.12
H_3 = 0.504   1.50 ± 0.03   1.52 ± 0.06   5.51 ± 0.13   5.55 ± 0.11
H_4 = 0.608   1.45 ± 0.02   1.44 ± 0.06   5.56 ± 0.12   5.48 ± 0.10
H_5 = 0.608   1.74 ± 0.02   1.75 ± 0.06   5.99 ± 0.12   5.92 ± 0.11
H_6 = 0.608   1.64 ± 0.02   1.63 ± 0.06   5.79 ± 0.12   5.75 ± 0.11
H_8 = 0.706   1.35 ± 0.02   1.33 ± 0.06   5.41 ± 0.12   5.36 ± 0.10
H_9 = 0.802   1.27 ± 0.02   1.25 ± 0.05   5.34 ± 0.12   5.27 ± 0.10

TABLE VI: A summary of the results given in this paper, based on the contouring analysis, for synthetic singular and smoothened multifractal rough surfaces.

Acknowledgments: We are grateful to M. Ghaseminezhad for cross-checking some of the results and also for fruitful discussions. MSM and SMVA are grateful to the associate office of ICTP for its hospitality.
The work of SMVA was supported in part by the Research Council of the University of Tehran.

D. Sornette, Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization and Disorder: Concepts and Tools (Springer-Verlag, Heidelberg, 2000).
H. E. Stanley and P. Meakin, Nature 335, 405-409 (1988).
M. Ausloos and D. H. Berman, Proc. R. Soc. London A 400, 331 (1985).
J. Feder, Fractals (Plenum Press, New York and London, 1988).
Plamen Ch. Ivanov, Luis A. Nunes Amaral, Ary L. Goldberger, Shlomo Havlin, Michael G. Rosenblum, Zbigniew R. Struzik and H. Eugene Stanley, Nature 399, 461 (1999).
C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley and A. L. Goldberger, Phys. Rev. E 49, 1685 (1994).
Armin Bunde, Jurgen Kropp and Hans-Joachim Schellnhuber, The Science of Disasters: Climate Disruptions, Heart Attacks, and Market Crashes, Vol. 2 (Springer, 2002); Armin Bunde and Shlomo Havlin, Fractals in Science (Springer-Verlag, 1994); S. Hosseinabadi, A. A. Masoudi and M. S. Movahed, Physica B 405, 2072 (2010); Ajay Chaudhari, Gulam Rabbani and Shyi-Long Lee, Journal of the Chinese Chemical Society 54(5), 1201 (2007); Ajay Chaudhari, Ching-Cher Sanders Yan and Shyi-Long Lee, Applied Surface Science 238, 513 (2004); M. Vahabi, G. R. Jafari, N. Mansour, R. Karimzadeh and J. Zamiranvari, J. Stat. Mech. (JSTAT) P03002 (2008); Xia Sun, Zhuxi Fu and Ziqin Wu, Physica A 311, 327 (2002); Pradipta Kumar Mandal and Debnarayan Jana, Phys. Rev. E 77, 061604 (2008).
A. Arneodo, N. Decoster, P. Kestener and S. G. Roux, Advances in Imaging and Electron Physics 126, 1 (2003).
S. G. Roux, A. Arneodo and N. Decoster, Eur. Phys. J. B 15, 765 (2000); N. Decoster, S. G. Roux and A. Arneodo, ibid. 15, 739 (2000); A. Arneodo, N. Decoster and S. G. Roux, ibid. 15, 567 (2000).
C. Meneveau and K. R. Sreenivasan, Phys. Rev. Lett. 59, 1424 (1987); C. Meneveau and K. R. Sreenivasan, J. Fluid Mech. 224, 429 (1991); K. R. Sreenivasan and R. A. Antonia, Annu. Rev. Fluid Mech. 29, 435 (1997).
R. Benzi, G. Paladin, G. Parisi and A. Vulpiani, J. Phys. A: Math. Gen. 17, 3521 (1984).
J. F. Muzy, E. Bacry and A. Arneodo, Phys. Rev. Lett. 67, 3515 (1991).
T. Vicsek, Physica A 168, 490 (1990).
E. Ben-Jacob, O. Shochet, A. Tenenbaum, I. Cohen, A. Czirok and T. Vicsek, Fractals 2, 15 (1994).
S. Lovejoy, Science 216, 185 (1982); R. F. Cahalan, in Advances in Remote Sensing and Retrieval Methods, edited by A. Deepak, H. Fleming and J. Theon (Deepak Publishing, Hampton, 1989), p. 371.
M. S. Movahed, F. Ghasemi, S. Rahvar and M. R. Tabar, Phys. Rev. E 84, 021103 (2011).
A. I. Zad, G. Kavei, M. R. R. Tabar and S. M. V. Allaei, J. Phys.: Condens. Matter 15, 1889 (2003).
J. D. Farmer, E. Ott and J. A. Yorke, Physica D 7, 153 (1983); Peter Grassberger and Itamar Procaccia, Phys. Rev. Lett. 50, 346 (1983); R. Badii and A. Politi, Phys. Rev. Lett. 52, 1661 (1984).
M. Chekini, M. R. Mohammadizadeh and S. M. V. Allaei, App. Surf. Sci. 257, 7179 (2011).
M. S. Movahed and E. Hermanis, Physica A 387, 915 (2008).
J. W. Kantelhardt, arXiv:0804.0747.
B. Dubuc, J. F. Quiniou, C. Roques-Carmes, C. Tricot and S. W. Zucker, Phys. Rev. A 39, 1500 (1989); T. Higuchi, Physica D 46, 254 (1990); N. P. Greis and H. S. Greenside, Phys. Rev. A 44, 2324 (1991); W. Li, Int. J. of Bifurcation and Chaos 1, 583 (1991); B. Lea-Cox and J. S. Y. Wang, Fractals 1, 87 (1993).
A. A. Saberi, M. A. Rajabpour and S. Rouhani, Phys. Rev. Lett. 100, 044504 (2008).
A. A. Saberi, M. D. Niry, S. M. Fazeli, M. R. Rahimi Tabar and S. Rouhani, Phys. Rev. E 77, 051607 (2008).
M. A. Rajabpour and S. M. Vaez Allaei, Phys. Rev. E 80, 011115 (2009).
M. G. Nezhadhaghighi and M. A. Rajabpour, Phys. Rev. E 83, 021122 (2011).
A. A. Saberi, H. Dashti-Naserabadi and S. Rouhani, Phys. Rev. E 82, 020101 (2010).
A. A. Saberi, Appl. Phys. Lett. 97, 154102 (2010).
J. Kondev and C. L. Henley, Phys. Rev. Lett. 74(23), 4580 (1995).
J. Kondev, C. L. Henley and D. G. Salinas, Phys. Rev. E 61(1), 104 (2000).
C. Zeng, J. Kondev, D. McNamara and A. A. Middleton, Phys. Rev. Lett. 80, 109 (1998).
Ajay Chaudhari, Ching-Cher Sanders Yan and Shyi-Long Lee, J. Phys. A: Math. Gen. 36, 3757 (2003).
Gao-Feng Gu and Wei-Xing Zhou, Phys. Rev. E 74, 061104 (2006).
D. Schertzer and S. Lovejoy, in Turbulent and Chaotic Phenomena in Fluids, edited by T. Tatsumi (North-Holland, New York, 1984), pp. 505-508.
D. Schertzer and S. Lovejoy, J. Geophys. Res. 92, 9693 (1987).
D. Schertzer, S. Lovejoy, F. Schmitt, Y. Ghigirinskaya and D. Marsan, Fractals 5, 427 (1997).
D. Schertzer and S. Lovejoy, in Fractals: Physical Origin and Consequences, edited by L. Pietronero (Plenum, New York, 1989), p. 49.
J. Wilson, D. Schertzer and S. Lovejoy, in Fractals and Nonlinear Variability in Geophysics, edited by D. Schertzer and S. Lovejoy (Kluwer, Dordrecht, 1991), p. 185.
U. Frisch, Turbulence (Cambridge University Press, Cambridge, 1995).
U. Frisch, P. L. Salem and M. Nelkin, J. Fluid Mech. 87, 719 (1978).
W. Ochs and J. Wosiek, Phys. Lett. B 289, 159 (1992); 305, 144 (1993).
Ph. Brax, J. L. Meunier and R. Peschanski, Z. Phys. C 62, 649 (1994).
Leandros Perivolaropoulos, Phys. Rev. D 48, 1530 (1993).
Thomas C. Halsey, Mogens H. Jensen, Leo P. Kadanoff, Itamar Procaccia and Boris I. Shraiman, Phys. Rev. A 33, 1141 (1986).
Martin Greiner, Peter Lipa and Peter Carruthers, Phys. Rev. E 51, 1948 (1995); A. Bialas and R. Peschanski, Nucl. Phys. B 273, 703 (1986); 308, 857 (1988); C. B. Chiu and R. C. Hwa, Phys. Rev. D 43, 100 (1991); S. Hegyi, Phys. Lett. B 309, 443 (1993); H. C. Eggers, P. Lipa, P. Carruthers and B. Buschbeck, Phys. Rev. D 48, 2040 (1993).
J. M. Gómez-Rodríguez, A. M. Baró and R. C. Salvarezza, J. Vac. Sci. Technol. B 9, 495 (1991).
H. E. Hurst, Trans. Am. Soc. Civ. Eng. 116, 770 (1951).
C. K. Peng, S. Buldyrev, A. Goldberger, S. Havlin, F. Sciortino, M. Simons and H. E. Stanley, Nature 356, 168 (1992).
J. W. Kantelhardt, S. A. Zschiegner, E. Koscielny-Bunde, S. Havlin, A. Bunde and H. E. Stanley, Physica A 316, 87 (2002).
Kun Hu, Plamen Ch. Ivanov, Zhi Chen, Pedro Carpena and H. E. Stanley, Phys. Rev. E 64, 011114 (2001).
Zbigniew R. Struzik and Arno P. J. M. Siebes, Physica A 309, 388 (2002).
J. Arrault, A. Arneodo, A. Davis and A. Marshak, Phys. Rev. Lett. 79, 75 (1997).
J. W. Kantelhardt, H. E. Roman and M. Greiner, Physica A 220, 219 (1995).
H. E. Roman, J. W. Kantelhardt and M. Greiner, Europhys. Lett. 35, 641 (1996).
J. F. Muzy, E. Bacry and A. Arneodo, Int. J. of Bifurcation and Chaos 4, 245 (1994).
A. Arneodo, E. Bacry and J. F. Muzy, Physica A 213, 232 (1994).
B. A. Bassett, Y. Fantaye, R. Hlozek and J. Kotze, arXiv:0906.0993.
B. Nienhuis, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. L. Lebowitz (Academic Press, London, 1987).
Martin Greiner, Hans C. Eggers and Peter Lipa, Phys. Rev. Lett. 80, 5333 (1998).
Introduction

Consider a program presenting one Button and one TextBox on its main page. A user types an arbitrary text and pushes the Button, which leads to another page; the new page shows the user's text. Figure 3 shows source code that demonstrates the behavior of this program. When a user pushes the Button, the methods sendMessage() from the MainActivity class and onCreate() from the DisplayMessageActivity class are invoked, respectively. However trivial it may seem, tracing this behavior in the Java code is not easy: when a user pushes the Button, the Android framework calls the onClick() API related to that Button. In other words, the developer must set the onClick attribute to sendMessage for the Button in res/layout/activity_main.xml. Therefore, finding relations between elements statically is a cumbersome task.

In this paper, we try to solve the problem of generating summaries for methods of event-driven programs by extracting interactions between their elements at run-time. To this end, we use a deep neural network to generate summaries and dynamic call graphs to capture interactions at run-time. The main contributions of this work are:

1. We propose an approach to generate summaries for methods of event-driven programs.
The proposed approach exploits deep neural networks and dynamic call graphs as the key components of the solution to produce meaningful summaries that not only address the semantics of the source code but also have well-formed grammar.

2. Unlike existing work, we introduce a novel technique for generating summaries that concentrates on run-time execution.

The rest of the paper is organized as follows. In Section 2, we describe our proposed approach. In Section 3, we evaluate the proposed approach by answering seven research questions: we assess our deep neural network using the BLEU4 and METEOR metrics, and we set up a user study to evaluate the generated summaries on real-world Android applications. Next, Section 4 presents threats to the validity of our results. In Section 5, we review related work. Finally, we conclude this paper and present potential future work in Section 6.

Proposed Approach

In this section, we present our approach to generating summaries for methods of event-driven programs. We consider the sendMessage() method described in Section 1 as a running example; it is used throughout this paper to illustrate our process of generating summaries. As shown in Figure 4, the proposed approach consists of five steps. In the first step, we extracted a dataset of comment/code pairs from GitHub, a development platform for open-source projects. We then applied a few preprocessing tasks to the data, such as deleting blank lines, removing code snippets without summaries, and refining code words based on the Java naming conventions. In the second step, we built a deep neural network on the comment/code pairs; this model was used to generate the final summaries. In the third step, we constructed a dynamic call graph of the Android applications for which summaries were to be generated. In the fourth step, the PageRank algorithm was applied to the graph mentioned above.
In the final step, using the deep neural network of step two and the outputs of the PageRank algorithm, we generated human-readable summaries for the selected methods of the applications. In the following, we elaborate on each step of our approach.

Step1: Preprocess Data

In this part, we describe the first step of our approach, in which preprocessing is applied to comment/code pairs extracted from GitHub. Hu et al. [18] extracted more than 500 thousand comment/code pairs from GitHub and applied a few heuristic methods to obtain 69,708 pairs from this data. Although the 500 thousand pairs are available online, the preprocessed data are not accessible; as a result, we used their raw data as a starting point. These source codes are written in Java, and Java programs follow specific naming conventions. The main preprocessing steps used in this study are:

1. First, blank lines (\n) and tabular characters (\t) were removed and replaced by the space character.

2. Afterward, we identified and tokenized words written in all capital letters that come before words starting with a capital letter; a regular expression does this task. For instance:

// SQLDatabase --> SQL

Step2: Train a Deep Neural Network

Recently, researchers have turned to applying deep learning methods to various fields of software engineering, such as commit message generation [22, 23], intention mining [24], and code search [25]. Among these fields is code summarization via deep learning [18], which has attained promising results so far. In this part, we describe our proposed deep neural network, which is used as a pre-built model to generate the final summaries. We follow the notation described in the deeplearning.ai video tutorials [26]:

• x: the set of source codes written in the Java programming language.
• x^(i): the i-th source code in the set x.
• x_t^(i): the t-th token in the above sequence.
• T_s: the length of the sequence s.
• y: the set of comments written in natural language.
• y^(i): the i-th comment in the set y.
• y_t^(i): the t-th term in the above sequence.

For every comment/code pair, our deep neural network tries to translate x^(i) = (x_1^(i), x_2^(i), ..., x_{T_x}^(i)) into y^(i) = (y_1^(i), y_2^(i), ..., y_{T_y}^(i)). Figure 6 demonstrates the architecture of our proposed deep neural network. The architecture consists of three components, namely the encoder, the decoder, and the attention mechanism; in the following, we describe each component in detail.

Figure 6: Architecture of the deep neural network

The Encoder

There has been much research on the semantic representation of terms as vectors of real numbers, namely Continuous Bag of Words (CBOW) [27], SKIP-GRAM [28-30], and Global Vectors (GloVe) [31]. The benefit of this approach is that the more semantically similar two terms are, the more similar their vectors are as well. Therefore, we use one embedding layer in each of the encoder and decoder components, whose weights are tuned during the learning phase of the deep neural network. However, to reduce the learning time and to obtain more accurate weights, we used the pre-built model available on the GloVe website [32]. One simple solution to avoid overfitting is dropout [33], which randomly omits neural network units; we used dropout = 0.2 in the neural network layers, similar to the research of Luong et al. [34].

A Recurrent Neural Network (RNN) is suitable for sequences of inputs [35]. An RNN generates the sequence y = (y_1, y_2, ..., y_{T_y}) from the input sequence x = (x_1, x_2, ..., x_{T_x}). During the learning phase, the RNN computes its states using equation (1),

h_t = σ(W_h[h_{t-1}, x_t] + b_h)
y_t = W_y h_t + b_y,    T_x = T_y    (Eq. 1)

where the function σ is the sigmoid of equation (2):

σ(z) = 1 / (1 + e^(-z))    (Eq. 2)

The vanishing gradient is a known problem of simple RNNs [36].
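The recurrence of Eq. (1) and the sigmoid of Eq. (2) can be sketched in a few lines of Python (a toy illustration with small random weights and made-up dimensions, not the paper's TensorFlow model):

```python
import numpy as np

def sigmoid(z):
    # Eq. (2): squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def rnn_step(h_prev, x_t, W_h, b_h, W_y, b_y):
    # Eq. (1): the new hidden state mixes the previous state and the
    # current input; [h_prev, x_t] denotes concatenation of the two vectors.
    h_t = sigmoid(W_h @ np.concatenate([h_prev, x_t]) + b_h)
    y_t = W_y @ h_t + b_y
    return h_t, y_t

rng = np.random.default_rng(0)
hidden, embed, out = 4, 3, 2          # made-up sizes for the illustration
W_h = rng.normal(scale=0.1, size=(hidden, hidden + embed))
b_h = np.zeros(hidden)
W_y = rng.normal(scale=0.1, size=(out, hidden))
b_y = np.zeros(out)

h = np.zeros(hidden)
for x_t in rng.normal(size=(5, embed)):  # a sequence of 5 token embeddings
    h, y = rnn_step(h, x_t, W_h, b_h, W_y, b_y)
```

Because every step reuses the same weights, gradients are multiplied across time steps during training, which is where the vanishing-gradient problem discussed next comes from.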
The vanishing gradient occurs when a gradient becomes very small, which hinders the updating of the weights and can even stop the neural network's training. To solve this issue, various methods such as the Gated Recurrent Unit (GRU) [37] and Long Short-Term Memory (LSTM) [38] have been proposed. We applied the latter in this work, similar to the study of Luong et al. [34]. An LSTM cell is computed using equation (3), where c̃_t is the candidate cell state:

c̃_t = tanh(W_c[h_{t-1}, x_t] + b_c)
Γ_u = σ(W_u[h_{t-1}, x_t] + b_u)
Γ_f = σ(W_f[h_{t-1}, x_t] + b_f)
Γ_o = σ(W_o[h_{t-1}, x_t] + b_o)
c_t = Γ_u * c̃_t + Γ_f * c_{t-1}
h_t = Γ_o * tanh(c_t)    (Eq. 3)

Unidirectional RNNs use only past data; however, knowing about the future helps as well. Bidirectional RNNs (BRNNs) process sequences in both directions, in two different layers [39]. In the learning process, the weights are computed using equation (4):

→h_t = σ(W_→h[→h_{t-1}, x_t] + b_→h)
←h_t = σ(W_←h[←h_{t+1}, x_t] + b_←h)
y_t = W_y[→h_t, ←h_t] + b_y    (Eq. 4)

Graves and Schmidhuber [40] combined bidirectional recurrent neural networks with LSTM. Moreover, one can stack layers of neural networks to build a deep network [41]. We have used a stack of BRNNs on top of the embedding layer in the encoder component.

The Decoder

The decoder generates the summaries. We aim at finding the sequence y = (y_1, y_2, ..., y_{T_y}) for an input sequence x = (x_1, x_2, ..., x_{T_x}) that satisfies equation (5):

arg max_y ∏_{t=1}^{T_y} P(y_t | x, y_1, y_2, ..., y_{t-1})    (Eq. 5)

One approach is to test all possible cases, which is prohibitively costly, with a computational complexity of O(|V|^{T_y}), where |V| denotes the size of the vocabulary set. Another approach is a greedy search algorithm, which in each step selects the term that maximizes P(y_t | x, y_1, y_2, ..., y_{t-1}). However, if one utilizes a greedy search algorithm, a chosen term cannot be changed later.
Furthermore, greedy search algorithms do not guarantee good results, since the co-occurrence probability of some terms is higher than that of others. A better solution is to exploit the beam search algorithm [42], which records the |B| most probable partial sequences at every step, where |B| denotes the width of the beam. Suppose |B| = 3 and |V| = 100. First, one computes P(y_1 | x) for each term, that is, 100 probabilities for the different entries of the vocabulary, and keeps the three highest. Next, for each of the three kept terms, the probability P(y_2 | x, y_1) is computed for all the cases in which y_2 is one of the vocabulary entries. This results in |B| × |V| = 3 × 100 = 300 probabilities, among which the three highest are again selected. Then, for each of these three partial phrases, the probability P(y_3 | x, y_1, y_2) is computed for all the cases in which y_3 is one of the vocabulary entries, giving another 300 probabilities, and the algorithm proceeds in the same way until the final step, in which the summary with the highest probability is selected as the output. This heuristic algorithm does not necessarily find the optimal result; however, its computational complexity per step is O(|B| × |V|), which is immensely faster than computing all cases. It is worth mentioning that if |B| = 1, this heuristic acts like a greedy algorithm. As |B| increases, the quality of the generated summaries improves, but the decoding time rises as well.

The Attention Mechanism

Recently, the sequence-to-sequence model has yielded valuable results in neural machine translation [43]. In the traditional sequence-to-sequence model [44], the decoder uses the last hidden state of the encoder as its input for generating the output sequence (Figure 7); in other words, all the information about the encoder is stored in h_{T_x}.
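The beam-search bookkeeping walked through above can be sketched as follows (a simplified sketch over a made-up probability table; step_probs is a hypothetical stand-in for the decoder's softmax output, and the vocabulary is invented for illustration):

```python
import math

# Hypothetical stand-in for the decoder's softmax: returns P(y_t | prefix)
# for every vocabulary entry (the numbers are made up for illustration).
def step_probs(prefix, vocab):
    if not prefix:
        return {w: 1.0 / len(vocab) for w in vocab}
    last = vocab.index(prefix[-1])
    denom = 2.0 + 0.5 * (len(vocab) - 1)
    return {w: (2.0 if i == (last + 1) % len(vocab) else 0.5) / denom
            for i, w in enumerate(vocab)}

def beam_search(vocab, beam_width, length):
    beams = [(0.0, [])]  # (log-probability, partial sequence)
    for _ in range(length):
        candidates = []
        for logp, prefix in beams:
            for w, p in step_probs(prefix, vocab).items():
                candidates.append((logp + math.log(p), prefix + [w]))
        # keep only the |B| most probable partial sequences
        beams = sorted(candidates, reverse=True)[:beam_width]
    return beams[0][1]  # the highest-probability sequence found

best = beam_search(["the", "method", "creates", "an", "activity"],
                   beam_width=3, length=3)
print(best)  # → ['the', 'method', 'creates']
```

With beam_width = 1 the loop keeps a single hypothesis per step and degenerates into the greedy search discussed above.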
Figure 7: The traditional sequence-to-sequence model

Integrating all the information in h_{T_x} causes some problems. For example, to produce the output, the decoder needs to remember the whole input. Moreover, y_t attends to all the input terms, whereas it is better to pay more attention (give more weight) to some parts of the input. The attention mechanism was introduced to mitigate this problem. It was first used by Graves [45]; afterward, other researchers proposed different variations of this mechanism in their studies [46, 47]. In this work, a modified version of the attention mechanism introduced by Bahdanau et al. [46] is applied.

In the encoder component, we use a stack of BRNNs. Suppose there are l layers, and →h_t^[l] and ←h_t^[l] are the forward and backward states of the RNN for the term t; then h_t^[l] = [→h_t^[l], ←h_t^[l]]. We define a context matrix denoted C, whose i-th column C_i is called a context vector. C_i indicates how much attention the output term y_i pays to the terms of the input sequence x = (x_1, x_2, ..., x_{T_x}); in other words, to what extent each member of x contributes to generating the output y_i. The states of the RNN of the decoder component are denoted S_i, and α_ij represents how much y_i of the decoder (the i-th term of the summary) should attend to h_j^[l] of the encoder (the j-th term of the code). Figure 8 depicts the relationship between the attention mechanism and the decoder component, and Figure 9 describes the structure of an attention unit. According to Figure 9, the context vector C_i is calculated based on equation (6), and equation (7) computes the attention weights.

Figure 8: The relationship between the attention mechanism and the decoder component
C_i = Σ_{j=1}^{T_x} α_ij h_j^[l]    (Eq. 6)

α_ij = exp(e_ij) / Σ_{k=1}^{T_x} exp(e_ik),  where  e_ij = tanh(W_h h_j^[l] + U_s S_{i-1})    (Eq. 7)

Figure 9: The attention unit

The next step is to train a model on the preprocessed data, which will be used to generate summaries in the final step. The implementation details of the deep neural network are as follows. The operational environment was Ubuntu 16.04, and our hardware included 40 processing cores and 64 GB of RAM. We used the TensorFlow library to build the neural network model [48] and the Pandas library to preprocess the data [49]. The embedding layers consist of vectors of dimension 300. If an input word already exists in the pre-built model, its weights from that model are used; otherwise, the corresponding vector is initialized with real numbers drawn uniformly between -1 and 1. The pre-built model contains about 2.2 million words. The maximum lengths of summaries and codes are set to 35 and 100 tokens, respectively; when an input code is shorter than 100 tokens, the remaining elements of the vector are padded with zeros. Moreover, four words are pre-allocated, namely (PADDING, 0), (UNK, 1), (SOS, 2), and (EOS, 3). UNK refers to an unknown word, that is, a word that does not exist in the deep neural network's dictionary, while SOS and EOS represent the start and end of each sentence, respectively.

In the learning process of deep neural networks, the goal is to minimize the loss function; we have used the cross-entropy loss function in this study. The set of generated summaries is represented by (y^(1), y^(2), ..., y^(N)), in which N is the number of generated summaries and y^(i) denotes the i-th generated summary.
Therefore, the cross-entropy loss function can be calculated using equation (8):

L = Σ_{i=1}^{N} Σ_{t=1}^{T_{y^(i)}} -log P(y_t^(i) | y_1^(i), y_2^(i), ..., y_{t-1}^(i))    (Eq. 8)

The exploding gradient is one of the problems with long sequences. To prevent it, whenever the gradient's norm exceeds a threshold such as τ = 5, its value is rescaled using the approach introduced by Pascanu et al. [50], as in equation (9):

g ← (τ / ||g||) g    (Eq. 9)

Adam is used for parameter optimization [51]. As suggested in the study of Kingma and Ba [51], we used β_1 = 0.9, β_2 = 0.999, and ε = 10^(-8) as the default input values for the Adam algorithm. Overfitting is another problem that may occur while applying machine learning techniques: it happens when a technique matches a specific set of data too closely, rendering it unfit for reliably predicting other datasets. To avoid overfitting, we randomly split the deep model's inputs into three sets, namely train (80%), validation (10%), and test (10%). We generated checkpoints at every epoch during the training phase; the model that performed best on the validation set (in terms of BLEU score) was then selected for evaluation on the final test set.

Step3: Create a Dynamic Call Graph

Event-driven programs depend on the occurrence of events at run-time. Consider the running example illustrated in Section 1: the method sendMessage() is invoked every time a user clicks the Button, yet it is not a trivial task to find out why pushing the button is followed by running the sendMessage() method. In this step, we tackle this problem by leveraging the power of dynamic call graphs. Yuan et al. [52] proposed an approach to generate a dynamic call graph for Android applications and created a tool named Rundroid to build these graphs.
This tool not only considers invocations between methods at static time but also recognizes messages transferred between an application and the Android framework at run-time. However, Rundroid lacks automation, and users have to run and test programs manually to generate call graphs. Therefore, to automate this task by generating random tests with desired time intervals, we used a tool developed by Google known as Monkey [53]. Figure 10 depicts a part of the dynamic call graph generated for the running example.

Step4: Apply the PageRank Algorithm

The PageRank algorithm was developed by Page and Brin [54] to sort webpages based on their popularity in Google's search engine. McBurney et al. [20] used the same concept for measuring the importance of different methods. Similarly, we applied the PageRank algorithm to the dynamic call graph generated in the previous step. Since the algorithm plays a crucial role in the proposed approach, we dedicate the following part to discussing it.

Consider the dynamic call graph of Figure 10. This graph consists of 13 nodes, denoted V = {n_1, n_2, n_3, ..., n_13}, and 12 edges. The damping factor, denoted d, indicates how likely a specific node is to be visited over time; it is conventional to set d = 0.85 [55]. The PageRank algorithm assigns a rank to each node; these ranks determine the importance of the nodes in the corresponding graph. Ranks are calculated using equation (10) [54], where r_i is the rank of n_i:

r_i = (1 - d) + d × Σ_{n_j ∈ B_i} r_j / l_j    (Eq. 10)

In equation (10), l_i is the number of outgoing edges from n_i, and B_i is the set of nodes that have outgoing edges to n_i. For instance, Table 1 shows the results of the PageRank algorithm applied to the running example.
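Before reading the table, the iteration of Eq. (10) can be checked directly. The sketch below rebuilds the graph implied by the B_i column (which gives n_2 five outgoing edges, consistent with the 12 edges mentioned above) and normalizes the resulting ranks to sum to one, which appears to be how the published values are reported; it is a sanity check, not the authors' tooling:

```python
# Dynamic call graph implied by the B_i column of Table 1:
# n1 -> n2; n2 -> n3, n5..n8; n3 -> n4; n4 -> n9..n13 (12 edges in total).
edges = ([(1, 2)] + [(2, j) for j in (3, 5, 6, 7, 8)]
         + [(3, 4)] + [(4, j) for j in (9, 10, 11, 12, 13)])
nodes = range(1, 14)
d = 0.85  # damping factor

out_degree = {i: sum(1 for s, _ in edges if s == i) for i in nodes}  # l_i
incoming = {i: [s for s, t in edges if t == i] for i in nodes}       # B_i

ranks = {i: 1.0 for i in nodes}
for _ in range(50):  # iterate Eq. (10); the graph is a DAG, so this settles quickly
    ranks = {i: (1 - d) + d * sum(ranks[j] / out_degree[j] for j in incoming[i])
             for i in nodes}

# the published ranks appear to be normalized so that they sum to one
total = sum(ranks.values())
normalized = {i: r / total for i, r in ranks.items()}
print(round(normalized[1], 4), round(normalized[4], 4))  # → 0.0545 0.1155
```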
Table 1: Results of the PageRank algorithm applied to the running example

n_i    l_i    B_i      r_i
n_1    1      {}       0.0545
n_2    5      {n_1}    0.1009
n_3    1      {n_2}    0.0717
n_4    5      {n_3}    0.1155
n_5    0      {n_2}    0.0717
n_6    0      {n_2}    0.0717
n_7    0      {n_2}    0.0717
n_8    0      {n_2}    0.0717
n_9    0      {n_4}    0.0742
n_10   0      {n_4}    0.0742
n_11   0      {n_4}    0.0742
n_12   0      {n_4}    0.0742
n_13   0      {n_4}    0.0742

Step5: Generate Summaries

In this step, the final summaries are generated from the pre-trained model and the node ranks of the dynamic call graph produced in the second and fourth steps, respectively. To this end, the methods of the selected application are extracted; suppose sendMessage() is one of these methods. First, we applied the preprocessing tasks to the sendMessage() method; Figure 5a illustrates the output of this step. Then, using the pre-trained model from the second step, a summary was produced for the preprocessed running example (Figure 5b). Among the nodes of the dynamic call graph that have outgoing edges to the selected node (method), we selected the node with the highest rank; we call this node a block. If the block has a corresponding method in the source code of the program, we use that source code as an input for the pre-trained model. Otherwise, the block is related to the Android framework, and in this case we create a dummy method by adding a signature to the block. For instance, in Figure 10 there is only one such node, called onClick(). Since onClick() is related to the Android framework, we created a dummy method for the onClick() element and passed it to the pre-trained model (Figure 5c). After appending the output of the latter step, the summary for the given method was generated (Figure 5d).

Evaluations

In this section, we present the results of both the qualitative and the quantitative evaluation of our proposed approach. First, the deep neural network was assessed using the BLEU4 and METEOR metrics. Then, using an empirical study, we examined the usefulness of our approach in helping developers comprehend event-driven programs.
This qualitative assessment was performed on 14 Android applications.

Evaluations of the Deep Neural Network

Here, we evaluate the deep neural network. We first present the research questions (RQs), the evaluation metrics, and the evaluation process; afterward, we discuss and analyze the results of our evaluations.

Research Questions

To evaluate our deep neural network, we answer the following four questions:

RQ1: How successful has the proposed model been in learning the comment/code sets?
RQ2: What is the precision of the summaries generated by the proposed model?
RQ3: What proportion of the reference summaries were retrieved as the final generated summaries?
RQ4: How well does the proposed deep neural network perform with respect to the baseline deep neural networks?

Evaluation Metrics

Here, we describe the evaluation metrics used in this study, namely BiLingual Evaluation Understudy (BLEU) and Metric for Evaluation of Translation with Explicit ORdering (METEOR). BLEU is used for the automated evaluation of machine translation algorithms [56]. Since code summarization is a kind of translation from code snippets to human-readable summaries, BLEU can be used for evaluating abstractive code summarization. To understand how BLEU is computed, consider the following references for assessing the generated summaries (a reference summary is the correct, real summary from our dataset for a code snippet):

// Reference1 : The statement creates the intent .
// Reference2 : This statement creates the intent .

Furthermore, the generated summary is as follows:

// Generated : the the the the the .

For this example, the traditional precision metric is calculated as Precision = 5/5 = 1. Although the translation is definitely not good, the precision is at its maximum. BLEU refines precision by crediting each term at most as many times as it appears in the reference translations.
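This refined counting can be sketched for unigrams (a hypothetical helper, matching case-insensitively over the word tokens of the running example):

```python
from collections import Counter

def clipped_unigram_precision(generated, references):
    # Credit each generated term at most as many times as it appears
    # in any single reference (BLEU's refined counting).
    gen = Counter(t.lower() for t in generated)
    credit = 0
    for term, count in gen.items():
        best = max(Counter(t.lower() for t in ref)[term] for ref in references)
        credit += min(count, best)
    return credit / sum(gen.values())

refs = [["The", "statement", "creates", "the", "intent"],
        ["This", "statement", "creates", "the", "intent"]]
print(clipped_unigram_precision(["the"] * 5, refs))  # → 0.4
```

Here "the" occurs at most twice in any single reference, so only two of the five generated occurrences are credited.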
P_1 considers every term separately, so we have P_1 = 2/5 in the previous example. P_2 has a similar concept, except that it computes the precision of bigrams. For instance, for the following summary, P_2 = 1/(2+1):

// Generated : creates the creates the .

We compute the precision P_n for different values of n in the n-grams. The final score is calculated using equation (11), in which BP is a penalty for short summaries, defined in equation (12); there, r and c are the lengths of the reference and generated summaries, respectively.

BLEU = BP × exp( (1/N) Σ_{i=1}^{N} log P_i )    (Eq. 11)

BP = 1 if c > r;  BP = exp(1 - r/c) if c ≤ r    (Eq. 12)

METEOR was proposed to mitigate BLEU's shortcomings [57]. Unlike BLEU, which pays more attention to precision, METEOR focuses mainly on recall. METEOR is based on a term-to-term mapping of the generated summary to its corresponding reference summary. Suppose the term "activity" appears once in the generated summary but twice in the reference summary; it will then be assigned to the reference term twice. To find the most suitable mapping, METEOR uses a heuristic. A mapping occurs in the following three cases: when two terms are exactly the same, when two terms have a common stem, or when they are semantically similar. METEOR is calculated using equation (13), in which R is the refined recall, P is the refined precision, and P_N is a penalty (issued for having only unigram matches):

METEOR = (10RP / (R + 9P)) × (1 - P_N)    (Eq. 13)

In equation (14), C is the number of common chunks between the generated and reference summaries, and M_u is the number of term-to-term mappings:

P_N = 0.5 × (C / M_u)^3    (Eq. 14)

To better understand chunks, consider the following example, which has two common chunks:

// Reference1 : The method creates an activity
// Reference2 : The method then creates an activity
// Generated1 : The method
// Generated2 : creates an activity

In cases where the reference and generated summaries are identical, there is only one chunk.
On the other hand, if there are only unigram matches, the number of chunks equals the number of term-to-term mappings M_u, so the penalty can reduce F_{α=1/9} = 10RP/(R + 9P) by up to half.

Evaluations Setup

Our deep neural network uses the GitHub data as an input resource. Some of the pairs had comments that were too short; therefore, we removed pairs whose comments were shorter than four words. Furthermore, some comments are too long to be used for training deep neural networks. Yin et al. [58] claimed that most summaries are shorter than three sentences, and Moreno et al. [10] stipulated that summaries with fewer than 20 terms are suitable for comment generation. Consequently, comments with more than 35 words were removed from the pairs, and similarly, source codes with more than 100 tokens were removed. Some comments either were not written in English or were generated automatically rather than written by humans; these were discarded as well. Finally, by applying a few minor heuristics, we selected 71,257 comment/code pairs. Table 2 describes statistical information about these pairs.

Evaluations Results

To answer RQ1, we used the perplexity metric [59]. Perplexity estimates how well a deep neural network can perform on a training dataset and is calculated as Perplexity = exp(L), in which L is the cross-entropy loss function. Table 3 shows the best perplexity values over the last 10 epochs, and Figure 11 presents the cross-entropy loss function for different epochs.

To answer RQ2, we used the BLEU4 metric. The maximum number of terms for the generated summaries is 35, which is considered short; therefore, following the suggestion of Papineni et al. [56], we used at most four-grams in calculating the value of this metric.

To answer RQ3, we used the METEOR metric, which for 200 epochs on our test dataset equaled 12.71. Table 3 presents the values of the discussed metrics for different parameters.

To answer RQ4, we applied the baseline introduced by Iyer et al. [17].
Table 4 presents the performance of the proposed deep neural network in comparison with this baseline, which generates summaries using RNNs and an attention mechanism. Its authors used an embedding layer that they initialized randomly and applied a beam search algorithm for generating summaries. They collected the data for their study from Stack Overflow, a well-known question-and-answer (Q&A) website [60]; Stack Overflow is the flagship site of the Stack Exchange Network and a platform for questions and answers on a wide range of software development topics [60].

Table 4 (fragment): The proposed approach · SQL · 8.5 · 31.5

Quantitative Analysis of Results

According to Table 3, the perplexity has decreased to 1.84 per word, which indicates that the proposed model has performed efficiently on the training dataset. Furthermore, according to Figure 11, the model has not progressed significantly after epoch 170; therefore, we believe that increasing the number of epochs beyond 200 would not improve the performance of the model very much. Table 4 compares the proposed neural network with the baseline. According to the results, our model has improved BLEU4 for both the C# and SQL programming languages by more than 50%. However, the baseline performed better regarding the METEOR metric. This is probably because our model was originally designed for the Java programming language and did not include any preprocessing for other languages such as SQL and C#. This is confirmed by Table 3, which presents the results for Java: METEOR for Java increased to 12.71.

Evaluations of Generated Summaries

Here, we evaluate the usefulness of our model in aiding the comprehension of Android applications through an empirical study.

Research Questions

To evaluate the generated summaries, we investigate the following questions:

RQ1: Considering the reference summaries, how accurate are the generated summaries?
RQ2 How well does the proposed model perform compared to other approaches?
RQ3 What is the quality of the generated summaries?

Evaluations Setup
We used 14 Android applications of different sizes to evaluate the generated summaries. We selected these applications because they are open source and have been used in other research as well [61,62]. Table 5 presents these applications. First, we randomly selected two methods from each application and wrote a summary for each one manually, carefully considering the context of those methods and the interactions among elements. Afterward, a second summary was generated for each method using the proposed model. To be able to answer RQ3, we had to know the opinions of practitioners about the generated summaries. Therefore, we designed an online questionnaire to qualitatively assess the results, and presented the above summaries to the participants to help them evaluate the generated summaries more efficiently. Our participants consisted of three groups: five Ph.D., 15 master, and six bachelor students majoring in computer science, with an average of 6.4 years of general programming experience, 2.9 years of Java programming experience, and 1.3 years of Android programming experience. It took each participant on average 78 minutes to finish the questionnaire. We analyzed the generated summaries from two perspectives: their informativeness and naturalness [17].
Informativeness: what proportion of the important parts of the code the generated summary covers.
Naturalness: how smooth and human-readable the generated summary is. Naturalness also takes into account the syntax of each sentence.
Participants scored the generated summaries for each method on a 1-5 star scale. The description of each score is as follows:
• Informativeness:
- One star: the generated summary does not describe the method's functionality to any extent.
- Two stars: the generated summary documents some insignificant parts of the code and ignores the more important parts.
- Three stars: some important parts of the code are neglected.
- Four stars: most of the important parts are covered in the summary.
- Five stars: all significant and essential parts of the code are well documented.
• Naturalness:
- One star: the generated summary is not readable for humans.
- Two stars: the generated summary is understandable, but barely.
- Three stars: the generated summary is understandable, but has noticeable syntax errors.
- Four stars: the generated summary is understandable, but has small syntactical errors.
- Five stars: the generated summary is understandable with no syntactical errors.

Evaluations Results
To answer RQ1, we calculated the BLEU4 and METEOR metrics for each method. The first author of this paper has developed the CrowdSummarizer tool [2]. Since its source code was available to us, we compared our new results to this approach to answer RQ2, again selecting BLEU4 and METEOR as our evaluation metrics. The model of CrowdSummarizer [2] was applied to the methods extracted in the previous question. Additionally, the sequence-to-sequence model [43] with the attention mechanism was implemented as another approach. Table 6 presents the comparison results of these three approaches. To answer RQ3, Table 7 presents the results for each participant in terms of informativeness and naturalness. The mean scores for informativeness and naturalness over all participants are 3.69 and 4.48, respectively.

Quantitative Analysis of Results
According to Table 6, the average BLEU4 and METEOR of the proposed approach are 32.20 and 16.91, respectively. Our proposed model has increased BLEU4 by more than one percent and METEOR by more than 30 percent. Figure 13 depicts the BLEU4 and METEOR distributions of the proposed and existing approaches.
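To make the BLEU4 comparisons above concrete, here is a minimal single-reference, sentence-level sketch of the metric. The corpus-level BLEU4 of Papineni et al. [56] used in our evaluation additionally aggregates n-gram counts over the whole test set; the example sentences below are hypothetical.

```python
import math
from collections import Counter

def bleu4(reference, hypothesis):
    """Single-reference, sentence-level BLEU with up to 4-grams: geometric
    mean of the clipped n-gram precisions times a brevity penalty."""
    log_precisions = []
    for n in range(1, 5):
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n])
                             for i in range(len(hypothesis) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # clipped (modified) n-gram precision
        overlap = sum(min(count, ref_ngrams[gram])
                      for gram, count in hyp_ngrams.items())
        total = sum(hyp_ngrams.values())
        if overlap == 0 or total == 0:
            return 0.0
        log_precisions.append(math.log(overlap / total))
    # the brevity penalty punishes hypotheses shorter than the reference
    bp = 1.0 if len(hypothesis) >= len(reference) else \
        math.exp(1 - len(reference) / len(hypothesis))
    return bp * math.exp(sum(log_precisions) / 4)

ref = "returns the index of the first matching element".split()
assert bleu4(ref, ref) == 1.0  # identical sentences score exactly 1
```

Dropping a single word from the hypothesis lowers both the higher-order precisions and the brevity penalty, which is why BLEU4 is sensitive to short, partially matching summaries.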
To investigate whether there is a significant difference between the results of our proposed approach and the other existing approaches, we first calculated Cohen's d effect size [63]. The effect size is a measure of the degree of difference between two categories. Cohen stipulated that values of d ≈ 0.2, d ≈ 0.5, and d ≈ 0.8 suggest small, medium, and large differences between the two categories, respectively. Sawilowsky [64] extended Cohen's work and proposed a slightly different scale, in which d ≈ 0.01, d ≈ 1.2, and d ≈ 2 suggest very small, very large, and huge differences, respectively. We also applied a two-tailed Student's t-test with a confidence level of 95% (α = 0.05). We define the null and alternative hypotheses as follows:
H0: There is no significant difference between the two categories.
H1: There is a significant difference between the two categories.
The p-values comparing our proposed approach with CrowdSummarizer regarding BLEU4 and METEOR are about 10^-10 < α and 0.03 < α, respectively. Therefore, we reject the null hypothesis H0, meaning there is a significant difference between the results of these two approaches. According to Table 7, the sample means for informativeness and naturalness are X̄_inf = 3.69 and X̄_nat = 4.48, respectively. We applied the t-distribution to estimate the mean and standard deviation of the informativeness and naturalness results. With a confidence level of 95% (α = 0.05), equation (15) gives the standard errors

σ_X̄_inf = S_inf / √N = 0.11,    σ_X̄_nat = S_nat / √N = 0.05.    (Eq. 15)

From these we can compute the confidence interval in equation (16),

µ = X̄ ± t_(N−1=25, α=0.05) × σ_X̄,    (Eq. 16)

which yields

X̄_inf = 3.69 ± 0.23,    X̄_nat = 4.48 ± 0.10.    (Eq. 17)

Equation (17) shows that, with a confidence level of 95%, by increasing the number of participants, in the worst case the mean scores for informativeness and naturalness will be greater than 3.46 and 4.38, respectively.

Qualitative Analysis of Results
According to Figure 12 and our definitions of the informativeness and naturalness metrics, we conclude as follows: 1.
Participants in 29.6 percent of cases reported that the generated summaries cover all essential parts of the code.
2. Participants in 60.6 percent of cases reported that, in the worst case, the generated summaries cover many salient features of the code.
3. Participants in only 17.3 percent of cases reported that the generated summaries are not related to the code or document only trivial code snippets.
4. Participants in 39.4 percent of cases reported that the generated summaries neglected a few necessary parts of the code.
5. Participants in 62.9 percent of cases reported that the generated summaries are human-readable and do not have any syntactical errors.
6. Participants in 89.2 percent of cases reported that, in the worst case, the generated summaries have minor syntactical errors.
7. Participants in 10.8 percent of cases reported that the generated summaries are human-readable but have major syntactical errors.
8. Finally, participants in only 3.1 percent of cases reported that the generated summaries are barely human-readable.

Threats to Validity
In this section, we review threats to the validity of our research findings, categorizing possible threats into four groups: internal, external, construct, and dependability [65].

Internal Validity
Internal validity asks whether the variables used in the proposed approach affect the outcomes and whether they are the only influential factors in the study [65]. The dynamic call graph in the Rundroid tool is constructed based on tests that are run on Android applications. These tests were run manually in the original version of the study [52]. Therefore, how the tests are run and their runtime affect the results. To reduce this threat, the authors generated 5000 random events using the Monkey tool to minimize human intervention in the tests. In the analysis section of our proposed approach, we evaluated the quality of the final summaries extracted from 14 Android applications and 28 methods. It is reasonable that the quality of the extracted methods affects the results.
To reduce this threat, we sampled randomly from the extracted methods. Twenty-six individuals performed our qualitative assessment. Therefore, the outcome of this section depends on the characteristics of the individuals taking the questionnaire, such as their mood, the time it took them to fill in the questionnaire, and other natural factors. To reduce this threat, we tried to have a large number of participants.

External Validity
External validity concerns how generalizable the results of a study are, whether they can be used in other contexts, and whether the cause-and-effect relationships hold under other conditions as well [65]. In this study, we have used the Rundroid tool to build call graphs. Rundroid is developed for generating call graphs in the Android framework; therefore, it is not suitable for other event-driven programs. To reduce this threat, we plan to investigate and use other tools in the near future. As mentioned above, 26 individuals performed our qualitative assessment of the generated summaries for the selected real-world Android applications' methods. Because the number of participants is limited, we cannot extend our results to the rest of the developer community. To reduce this threat, we tried to select a well-distributed sample of developers to assist in the evaluation phase. In the first and second phases of the proposed approach, we have used deep neural networks. The deep neural network architecture can be employed in other contexts, namely other natural language processing fields. Moreover, in the evaluation phase, we have only used the Android framework's examples as event-driven programs. Thus, it is not guaranteed that our approach will perform the same on other event-driven programs such as web-based programs. In the future, we plan to address this threat by evaluating our model on other event-driven platforms.
Construct Validity
Construct validity concerns the theoretical concepts and discussions of the experiment and the use of appropriate evaluation metrics [65]. The theoretical concepts used in this work have already been evaluated and proven by the academic community. The proposed approach is a combination of different methods in a new context. We have evaluated the generated summaries not only with valid and reliable quantitative metrics but also through qualitative human judgment. The results indicate that the employed approach has been successful in generating summaries.

Dependability
Dependability validity answers questions such as whether the findings are consistent and whether the experiment and its results are reproducible [65].
Compatibility: We evaluated the final generated summaries quantitatively and qualitatively. As shown in the previous sections, their outcomes are compatible.
Reproducibility: We have used deep neural networks (which are inherently probabilistic) to generate the summaries of event-driven programs' methods. To reduce this threat, we have set the number of epochs to 200, because the cross-entropy loss function is almost stable after the 170th epoch and did not decrease further in our experiments. Also, the preprocessed input data is available online for other researchers at https://github.com/ase-sharif/deep-code-document-pairs. It is worth mentioning that we have tried our best to minimize human intervention in all steps to make the results more independent and reliable.

Related Work
In this section, we review three previous types of approaches to code summarization, namely information retrieval, machine learning, and crowdsourcing. According to Table 8, most of these approaches have exploited machine learning techniques. Among these techniques, topic modeling is a popular one. However, in recent years, neural networks have been used as a new path to source code summarization.
More than 90 percent of the existing approaches summarize methods, and about half summarize classes. As for evaluation, BLEU4 and METEOR have recently been favored over precision and recall measures. Java is the most popular language targeted by source code summarization techniques. We present an overview of these approaches in the following.

Code Summarization via Information Retrieval
Information retrieval is the process of extracting intended information from a document [66]. Sridhara et al. [8] proposed an algorithm for the automatic description of Java methods. They applied preprocessing to Java methods using the Software Word Usage Model (SWUM), a technique for representing the methods of a program in the form of noun, verb, and adverb groups. McBurney et al. [20] introduced an approach for the automatic, context-based generation of documentation for Java methods. Context not only specifies the tasks of a method but also helps explain why that method exists in the first place. Their approach is based on PageRank [54], SWUM [8], and a natural language generation system [67]. In summary, this approach uses PageRank to find the most important methods for the given context, SWUM to determine what these most important methods do, and finally the natural language generation system to produce a human-readable summary. Rodeghero et al. [19] proposed a method for choosing the essential words of a code segment. They analyzed developers' eye movements and focused attention while writing summaries for a method, and then used their findings to weight the words accordingly. Ten developers in a 1-hour session separately read, comprehended, and finally summarized 67 Java methods. The authors identified and weighted the words receiving the most attention by analyzing the results of this experiment. Antoniol et al. [9] proposed an approach for improving traceability links between a code segment and its documentation.
They utilized the unigram language model and the Vector Space Model (VSM): the unigram language model to link code and documents, and the VSM to represent each document as a vector of words. If a word appears in a document, a non-zero value is stored in its corresponding place in the vector; VSM does not restrict how these values are computed. In this work, they represented documents and code as VSM vectors, and then used cosine similarity to find traceability links between code and documentation. Moreno et al. [10] presented an approach for summarizing Java classes. They primarily focused on each class's content and tasks but did not heed the connections between classes. They first found a class's stereotype and those of its methods, then classified the stereotypes into 13 groups. Afterward, using natural language rules, they generated a summary for each class based on a specific format. In the end, the authors built a tool for automatic class summary generation [68]. They extracted 166 code snippets along with their descriptions from the Google Android Guide [69], of which 52 were sampled randomly. They asked 16 developers to write summaries of these code snippets and collected 156 summaries. Analyzing these summaries, they discovered that none of the developers reused the words of the code segments in their summaries, meaning abstractive summaries are more suitable than extractive ones.

Code Summarization via Machine Learning
McBurney et al. [11] presented a code summarization method using hierarchical topic modeling. Topic modeling is a statistical model used for extracting groups of words (topics) from a collection of documents [11]. They applied the Hierarchical Document Topic Model (HDTM) algorithm to their data [70]. The most abstract description of the program's tasks is given at the highest level of the hierarchy; as one goes down the hierarchy, the descriptions become more precise and clear.
The authors first formed the call graph, with methods as nodes and caller-callee relations among methods as edges, and then performed HDTM on the graph. Haiduc et al. [12] treated code as text and exploited existing text summarization methods to summarize code snippets as well. They used the VSM and Latent Semantic Indexing (LSI) [71] in their work. The authors first considered each method as a document, and then calculated the cosine similarity between the word vectors and the document selected for summarization. Next, they sorted the words based on their similarity scores and selected the k most similar words for the given document. LSI then uses this k-word list to generate a high-level summary of the whole program. Eddy et al. [15] proposed a code summarization algorithm using hierarchical topic modeling. In fact, this study is a replication of the work of Haiduc et al. [12], with the distinction that they utilized HPAM instead of VSM and LSI. Programming tools help developers hide or reveal parts of their code, a feature known as code folding. Fowkes et al. [14] introduced an approach for code summarization through code folding; in effect, they summarized code using code. First, they formed an Abstract Syntax Tree (AST) of a code segment. Then they built the foldable tree by labeling block types and line numbers. Finally, they identified the nodes that should be hidden using an extended version of the Latent Dirichlet Allocation (LDA) algorithm [72] and sub-tree optimization. Recently, researchers have been using neural networks as a method for generating summaries. Allamanis et al. [16] introduced a novel attention mechanism using Convolutional Neural Networks (CNN) [73]; their goal was to generate the name of a method from its code. Hu et al. [18] produced descriptions for Java methods using a sequence-to-sequence model. They applied a neural network trained on open-source projects from Github.
To improve performance, they exploited the structured form of code and introduced a novel method to parse the abstract syntax tree, which they then used as input for the neural network.

Code Summarization via Crowdsourcing
Badihi et al. [2] proposed a code summarization approach for the Java language using the power of crowdsourcing. They built a web-based system for developers and encouraged them, using gamification techniques, to write summaries for various methods. They then collected and analyzed these summaries to identify the most significant parts of methods from the developers' point of view; in other words, they computed corresponding weights for different code snippets. For summarizing Java methods, they extracted essential words using TF-IDF and, considering the exact place of each word in the code snippets, multiplied them by their corresponding coefficients (calculated in the previous step). In the end, they generated a summary for each method based on the extracted important words. Nazar et al. [4] presented a code snippet summarization approach using crowdsourced knowledge and supervised learning. First, they extracted code snippets from the most frequently asked questions (FAQ) section of an Integrated Development Environment (IDE). Then four developers labeled each line of these code segments as "yes" (use the line in the summary) or "no" (do not). Afterward, if a line received more than one "yes" label, it would be used in the summary. Next, they extracted code features using crowdsourcing; for instance, a method call and initializing or defining a variable are features of the code. In all, they extracted 21 features. They then utilized Support Vector Machine (SVM) and Naïve Bayes algorithms to classify the results, and finally generated summaries using these two supervised learning algorithms. Guerrouj et al.
[5] used the context available in posts of the Stack Overflow Q&A website to generate code summaries. Using an island parser, they extracted identifiers from discussions about an element. They then utilized term proximity to find the context of the identifiers and generated summaries based on the language model. Rahman et al. [6] proposed an approach that generates summaries to recommend to developers by analyzing discussions and comments of users on Stack Overflow posts. First, they crawled questions and answers along with their comments from the website, separated the code snippets from the texts, and analyzed part of this data as the training dataset. Then they formed a graph based on the comments, linking each comment to its previous and next comments; if a person was mentioned in a comment, there would be an edge between the two corresponding nodes in the graph. They performed the PageRank algorithm on the constructed graph. They also applied sentiment analysis on the comments and excluded comments with positive sentiment. In the end, they recommended the top three comments based on their scores as possible summaries of code snippets. Wong et al. [7] introduced a method for the automatic documentation of code using Github. They first crawled the source code of 1005 open-source projects from Github. Then, using clone detection techniques, they extracted code snippets similar to the input code. The authors used a heuristic to exclude some of the extracted clones; two groups of code clones were omitted: first, code snippets that did not contain any method call, and second, clones that contained repetitive commands such as variable initialization. Next, they extracted the documentation of the remaining code snippets, sorted the documentation based on its text similarity with the code snippets, and finally summarized them. As reviewed above, there are many approaches to code summarization. However, they have their limitations.
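Several of the surveyed approaches rank graph nodes with PageRank (e.g., Rahman et al. [6] over a comment graph, McBurney et al. [20] over a call graph). A minimal power-iteration sketch is given below; the three-comment graph is a hypothetical example, not data from any of the cited studies.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank over an adjacency-list graph {node: [successors]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new_rank = {v: (1 - damping) / n for v in nodes}
        for v, successors in graph.items():
            if successors:
                # each node splits its rank evenly among its successors
                share = rank[v] / len(successors)
                for w in successors:
                    new_rank[w] += damping * share
            else:
                # dangling node: distribute its rank uniformly over all nodes
                for w in nodes:
                    new_rank[w] += damping * rank[v] / n
        rank = new_rank
    return rank

# Hypothetical comment graph: an edge c1 -> c2 means comment c1 links to c2.
ranks = pagerank({"c1": ["c2"], "c2": ["c3"], "c3": ["c1", "c2"]})
top = max(ranks, key=ranks.get)  # the comment to recommend first
```

Because the scores always sum to one, the top-ranked nodes can be read off directly, which is how an approach like [6] can recommend the three highest-scoring comments as candidate summaries.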
One major defect of existing solutions is that, to the best of our knowledge, they do not consider the dynamic interactions among the elements of a software program. As these interactions are triggered at run-time, they cannot be inferred statically. Therefore, to exploit this information to generate better summaries for code snippets, one needs to investigate the code at run-time. In this work, we utilize these valuable interactions to generate more useful summaries. Another frequent shortcoming of the existing approaches is related to their evaluations. Most of the current models are evaluated using the precision and recall metrics. As shown in Section 3, these metrics lack validity for evaluating machine translation tasks. That is why we have used BLEU and METEOR to better evaluate the performance of our proposed model. Moreover, most of these approaches are template-based, that is, they generate summaries based on predefined rules. Therefore, these summaries neglect the essential semantics of a task/code, which renders them not very useful for end users in real-world cases. In this work, we have used deep learning methods to overcome this issue and generate more meaningful summaries.

Conclusions and Future Work
Code summarization is a useful technique for helping developers comprehend and maintain software programs more efficiently. There are different approaches to summarizing code segments, namely utilizing information retrieval, machine learning, and crowdsourcing knowledge. However, existing approaches do not take into account the interactions between different parts of the code while the program is running. By exploiting deep neural networks and dynamic generation of the call graph, we tried to overcome the deficiencies of previous work and generate more suitable summaries. The results of the proposed approach were evaluated both qualitatively and quantitatively.
We used the BLEU4 and METEOR metrics for quantitative assessment, and an online questionnaire assessing the informativeness and naturalness of the generated summaries from the developers' perspective as the qualitative assessment. Our results indicate that the proposed approach outperforms state-of-the-art techniques. As for future work, one conventional extension of sequence-to-sequence models is to employ a convolutional layer in the encoding component [74]. Adding this layer helps the deep neural network attain additional information about the words surrounding each word, and it has improved results in machine translation studies. We intend to exploit this layer in the future and analyze its effect on the proposed model. Moreover, the Android framework is only one example of event-driven applications. In the future, we are going to examine other frameworks to evaluate the proposed approach and extend our findings.

public class MainActivity extends AppCompatActivity {
    public static final String EXTRA_MESSAGE = "MESSAGE";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    public void sendMessage(View view) {
        Intent intent = new Intent(this, DisplayMessageActivity.class);
        EditText editText = (EditText) findViewById(R.id.editText);
        String message = editText.getText().toString();
        intent.putExtra(EXTRA_MESSAGE, message);
        startActivity(intent);
    }
}

void onCreate(savedInstanceState) {
    setContentView(R.layout.activity_display_message);
    Intent intent = getIntent();
    String message = intent.getStringExtra(MainActivity.EXTRA_MESSAGE);
    TextView textView = findViewById(R.id.textView);
    textView.setText(message);
}

Figure 3: Source code for the running example [21]: the sendMessage() method is called whenever a user clicks the button
Figure 4: An overview of the proposed approach for Android applications
Figure 5: The outputs of applying different steps of the proposed approach on the running example. Figure 5a demonstrates the output of our running example after the first step.
Figure 10: Call
Figure 11: Cross-entropy loss values based on different epochs
Figure 12: Distribution of informativeness and naturalness
Figure 13: Distribution of BLEU4 and METEOR

// Regular Expression: [A-Z]+(?=[A-Z][a-z])
3. Furthermore, words with capital first letters or all lowercase letters are extracted as well. The corresponding regular expression is as follows:
// Regular Expression: [A-Z]?[a-z]+
4. Finally, we extracted words whose letters were all capital. We also kept special tokens in the final preprocessed data.

// public void send message ( view view )
// { ... start activity ( intent ) ; }
public void sendMessage(View view) { ... startActivity(intent); }
(a) Output of the first step on the running example

// Sends a message to the specified service.
public void sendMessage(View view) { ... startActivity(intent); }
(b) Output of the second step on the running example

(c) Pass the highest node block as an input to the pre-trained model

// Called whenever a view has been clicked.
public void onClick(View view) { }

// Sends a message to the specified service.
// Called whenever a view has been clicked.
public void sendMessage(View view) { ... startActivity(intent); }
(d) Generated summaries for the given method

Table 1: PageRank Results for the Running Example

Table 2: Statistical Information of Extracted Pairs from Github

                 Mean   Q1  Q2  Q3  # Unique tokens
Comment length   11.25   7   9  14  44934
Code length      33.49  15  26  47  22525

Table 3 illustrates BLEU4 results based on different parameters. BLEU4 for 200 epochs on our test dataset equaled 30.91.

Table 3: Results of the Proposed Deep Neural Network with Different Parameters

Batch size  Beam width  # of layers  # of epochs  Pre-trained model  BLEU4  METEOR  Perplexity
512         50          4            200                             30.77  11.25   3.53
512         50          4            200                             31.01   6.65   5.16
512         50          3            200                             30.69  11.84   1.95
512         50          2            200                             30.91  12.71   1.84

Table 4: Comparing Proposed Approach to the Baseline

Method                 Language  METEOR  BLEU4
CODE-NN [17]           C#        12.3    20.5
The proposed approach  C#         8.4    30.9
CODE-NN [17]           SQL       10.9    18.4
The proposed approach  SQL        8.5    31.5

Table 5: Set of Android Applications Used in the Empirical Study

Application Name      # of lines  # of Methods  # of Classes
Tister                423         14            8
Hashpass              429         8             2
Munchlife             631         17            9
justsit               849         43            13
Blinkenlightsbattery  851         61            14
Autoanswer            999         50            13
Anycut                1095        60            18
Dofcalculator         1321        14            9
Divideandconquer      1824        156           28
Passwordmakerpro      2824        282           67
Trippytipper          2953        148           36
Tokenlist             3680        225           43
Httpmon               4939        392           86
Remembeer              5915        257           54

Table 6: Comparison of the Proposed Approach with the Existing Approaches

Approach                   METEOR  BLEU4
CrowdSummarizer [2]        12.44   28.16
Sequence-to-sequence [43]  12.80   31.80
The proposed approach      16.91   32.20

Table 7: Participants' Ratings for Informativeness and Naturalness

Participant  Mean of informativeness  Mean of naturalness
1            4.15                     4.30
2            3.75                     4.65
3            4.35                     4.20
4            4.25                     4.50
5            3.65                     4.50
6            4.70                     4.95
7            3.65                     4.40
8            3.45                     4.75
9            3.60                     4.40
10           4.10                     4.60
11           4.50                     4.45
12           2.90                     4.75
13           4.40                     4.25
14           3.40                     4.70
15           3.35                     4.55
16           4.40                     4.65
17           3.75                     4.75
18           3.40                     4.50
19           3.40                     4.65
20           3.00                     4.75
21           4.00                     4.15
22           3.15                     4.55
23           2.60                     3.95
24           2.75                     4.30
25           3.40                     3.95
26           3.90                     4.45

Figure 12 depicts the distribution of the informativeness and naturalness variables.

Table 8: An Overview of Previous Related Work

Category               Ref   Used algorithms                                  Abstraction             Evaluation         Language   Year
Information Retrieval  [9]   One-gram language model + VSM                    Method, Class           Precision, Recall  Java, C++  2002
                       [8]   SWUM                                             Method                  Qualitative        Java       2010
                       [10]  Based on natural language rules                  Class                   Qualitative        Java       2013
                       [19]  Tracking eye movements                           Method                  Qualitative        Java       2014
                       [20]  SWUM + PageRank                                  Method                  Qualitative        Java       2016
Machine Learning       [12]  VSM + LSI                                        Method, Class           Qualitative        Java       2010
                       [13]  Lexical information + LSI                        Method, Class, Package  Pyramid            Java       2010
                       [15]  HPAM                                             Method, Class           Qualitative        Java       2013
                       [11]  HDTM                                             Package                 Qualitative        Java       2014
                       [17]  RNN + Stack Overflow                             Method                  BLEU4, METEOR      SQL, C#    2016
                       [16]  CNN                                              Method                  Precision, Recall  Java       2016
                       [14]  AST + LDA + Optimization                         Method, Class           Precision, Recall  Java       2017
                       [18]  Seq2Seq + Reversed Input + Github                Method                  BLEU4              Java       2018
Crowdsourcing          [7]   Clone Detection + Heuristic + Github             Method                  Qualitative        Java       2015
                       [6]   Sentiment analysis + PageRank + Stack Overflow   Method                  Precision, Recall  Java, C#   2015
                       [5]   Island parser + Stack Overflow                   Method, Class           Precision          Java       2015
                       [4]   SVM + Naïve Bayes + Web-based                    Method                  Precision, Recall  Java       2016
                       [2]   VSM + Web-based + LDA                            Method, Class, Package  Precision, Recall  Java, C++  2017

The idea of this figure is adapted from a video created by Halthor. The video is available online at https://youtu.be/W2rWgXJBZhU.

Acknowledgment
The authors would like to thank the 26 subjects who participated in assessing the quality of our proposed approach.

References
[1] Xin Xia, Lingfeng Bao, David Lo, Zhenchang Xing, Ahmed E. Hassan, and Shanping Li. Measuring program comprehension: A large-scale field study with professionals. IEEE Transactions on Software Engineering, pages 1-26, 2017.
[2] Sahar Badihi and Abbas Heydarnoori. CrowdSummarizer: Automated generation of code summaries for Java programs through crowdsourcing. IEEE Software, 34(2):71-80, 2017.
[3] Android play location. https://github.com/googlesamples/android-play-location/blob/master/LocationAddress/java/app/src/main/java/com/google/android/gms/location/sample/locationaddress/MainActivity.java. Accessed: 2018-10-14.
[4] Najam Nazar, He Jiang, Guojun Gao, Tao Zhang, Xiaochen Li, and Zhilei Ren. Source code fragment summarization with small-scale crowdsourcing based features. Frontiers of Computer Science, 10(3):504-517, 2016.
[5] Latifa Guerrouj, David Bourque, and Peter C. Rigby. Leveraging informal documentation to summarize classes and methods in context. In Proceedings of the 37th IEEE/ACM International Conference on Software Engineering, pages 639-642. IEEE, 2015.
[6] Mohammad Masudur Rahman, Chanchal K. Roy, and Iman Keivanloo. Recommending insightful comments for source code using crowdsourced knowledge. In Proceedings of the 15th IEEE International Working Conference on Source Code Analysis and Manipulation, pages 81-90. IEEE, 2015.
[7] Edmund Wong, Taiyue Liu, and Lin Tan. CloCom: Mining existing source code for automatic comment generation. In Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering, pages 380-389. IEEE, 2015.
[8] Giriprasad Sridhara, Emily Hill, Divya Muppaneni, Lori L. Pollock, and K. Vijay-Shanker. Towards automatically generating summary comments for Java methods. In Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering, pages 43-52, 2010.
[9] Giuliano Antoniol, Gerardo Canfora, Gerardo Casazza, Andrea De Lucia, and Ettore Merlo. Recovering traceability links between code and documentation. IEEE Transactions on Software Engineering, 28(10):970-983, 2002.
[10] Laura Moreno, Jairo Aponte, Giriprasad Sridhara, Andrian Marcus, Lori L. Pollock, and K. Vijay-Shanker. Automatic generation of natural language summaries for Java classes. In Proceedings of the 21st IEEE International Conference on Program Comprehension, pages 23-32. IEEE, 2013.
[11] Paul W. McBurney, Cheng Liu, Collin McMillan, and Tim Weninger. Improving topic model source code summarization. In Proceedings of the 22nd International Conference on Program Comprehension, pages 291-294, 2014.
[12] Sonia Haiduc, Jairo Aponte, Laura Moreno, and Andrian Marcus. On the use of automated text summarization techniques for summarizing source code. In Proceedings of the 17th Working Conference on Reverse Engineering, pages 35-44. IEEE, 2010.
[13] Sonia Haiduc, Jairo Aponte, and Andrian Marcus. Supporting program comprehension with source code summarization. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 2, pages 223-226. ACM, 2010.
Autofolding for source code summarization. M Jaroslav, Pankajan Fowkes, Razvan Chanthirasegaran, Miltiadis Ranca, Mirella Allamanis, Charles A Lapata, Sutton, IEEE Transactions on Software Engineering. 4312Jaroslav M. Fowkes, Pankajan Chanthirasegaran, Razvan Ranca, Miltiadis Allamanis, Mirella Lapata, and Charles A. Sutton. Autofolding for source code summarization. IEEE Transactions on Software Engineering, 43(12):1095-1109, 2017. Evaluating source code summarization techniques: Replication and expansion. Brian P Eddy, Jeffrey A Robinson, Nicholas A Kraft, Jeffrey C Carver, Proceedings of the 21st IEEE International Conference on Program Comprehension. the 21st IEEE International Conference on Program ComprehensionIEEEBrian P. Eddy, Jeffrey A. Robinson, Nicholas A. Kraft, and Jeffrey C. Carver. Evaluating source code summari- zation techniques: Replication and expansion. In Proceedings of the 21st IEEE International Conference on Program Comprehension, pages 13-22. IEEE, 2013. A convolutional attention network for extreme summarization of source code. Miltiadis Allamanis, Hao Peng, Charles A Sutton, Proceedings of the 33nd International Conference on Machine Learning. the 33nd International Conference on Machine Learning48Miltiadis Allamanis, Hao Peng, and Charles A. Sutton. A convolutional attention network for extreme summari- zation of source code. In Proceedings of the 33nd International Conference on Machine Learning, volume 48, pages 2091-2100. JMLR.org, 2016. Summarizing source code using a neural attention model. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Luke Zettlemoyer, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. The Association for Computer Linguistics. the 54th Annual Meeting of the Association for Computational Linguistics. The Association for Computer LinguisticsSrinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. Summarizing source code using a neural attention model. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. The Association for Computer Linguistics, 2016. Deep code comment generation. Xing Hu, Ge Li, Xin Xia, David Lo, Zhi Jin, Proceedings of the 26th IEEE International Conference on Program Comprehension. the 26th IEEE International Conference on Program ComprehensionXing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. Deep code comment generation. In Proceedings of the 26th IEEE International Conference on Program Comprehension, 2018. Improving automated source code summarization via an eye-tracking study of programmers. Paige Rodeghero, Collin Mcmillan, Paul W Mcburney, Nigel Bosch, Sidney D&apos; Mello, Proceedings of the 36th International Conference on Software Engineering. the 36th International Conference on Software EngineeringPaige Rodeghero, Collin McMillan, Paul W. McBurney, Nigel Bosch, and Sidney D'Mello. Improving automated source code summarization via an eye-tracking study of programmers. In Proceedings of the 36th International Conference on Software Engineering, pages 390-401, 2014. Automatic source code summarization of context for Java methods. W Paul, Collin Mcburney, Mcmillan, IEEE Transactions on Software Engineering. 422Paul W. McBurney and Collin McMillan. Automatic source code summarization of context for Java methods. IEEE Transactions on Software Engineering, 42(2):103-119, 2016. [21] Start another activity. https://developer.android.com/training/basics/firstapp/ starting-activity. Accessed: 2017-08-20. Automatically generating commit messages from diffs using neural machine translation. Siyuan Jiang, Ameer Armaly, Collin Mcmillan, Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering. the 32nd IEEE/ACM International Conference on Automated Software EngineeringSiyuan Jiang, Ameer Armaly, and Collin McMillan. Automatically generating commit messages from diffs using neural machine translation. 
In Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, pages 135-146, 2017. Neural-machinetranslation-based commit message generation: how far are we?. Zhongxin Liu, Xin Xia, Ahmed E Hassan, David Lo, Zhenchang Xing, Xinyu Wang, Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. the 33rd ACM/IEEE International Conference on Automated Software EngineeringZhongxin Liu, Xin Xia, Ahmed E. Hassan, David Lo, Zhenchang Xing, and Xinyu Wang. Neural-machine- translation-based commit message generation: how far are we? In Proceedings of the 33rd ACM/IEEE Interna- tional Conference on Automated Software Engineering, pages 373-384, 2018. Automating intention mining. Qiao Huang, Xin Xia, David Lo, Gail C Murphy, IEEE Transactions on Software Engineering. Qiao Huang, Xin Xia, David Lo, and Gail C. Murphy. Automating intention mining. IEEE Transactions on Software Engineering, pages 1-1, 2018. Deep code search. Xiaodong Gu, Hongyu Zhang, Sunghun Kim, Proceedings of the 40th International Conference on Software Engineering. the 40th International Conference on Software EngineeringICSEXiaodong Gu, Hongyu Zhang, and Sunghun Kim. Deep code search. In Proceedings of the 40th International Conference on Software Engineering, pages 933-944. ICSE, 2018. Recurrent neural networks and sequence models. Recurrent neural networks and sequence models. https://youtu.be/efWlOCE_6HY. Accessed:2018-10-15. Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781arXiv preprintTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint, arXiv:1301.3781, 2013. Linguistic regularities in continuous space word representations. Tomas Mikolov, Yih Wen-Tau, Geoffrey Zweig, Proceedings of the 11th Conference on Human Language Technologies. 
the 11th Conference on Human Language TechnologiesThe Association for Computational LinguisticsTomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representa- tions. In Proceedings of the 11th Conference on Human Language Technologies, pages 746-751. The Association for Computational Linguistics, 2013. word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method. Yoav Goldberg, Omer Levy, arXiv:1402.3722arXiv preprintYoav Goldberg and Omer Levy. word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method. arXiv preprint, arXiv:1402.3722, 2014. Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S Corrado, Jeffrey Dean, Proceedings of the 26th Conference on Advances in Neural Information Processing Systems. the 26th Conference on Advances in Neural Information Processing SystemsTomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th Conference on Advances in Neural Information Processing Systems, pages 3111-3119, 2013. Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher D Manning, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Alessandro Moschitti, Bo Pang, and Walter Daelemansthe 2014 Conference on Empirical Methods in Natural Language ProcessingACLJeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Alessandro Moschitti, Bo Pang, and Walter Daelemans, editors, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543. ACL, 2014. GloVe: Global vectors for word representation. GloVe: Global vectors for word representation. https://nlp.stanford.edu/projects/glove/. Accessed: 2018-07-17. 
Dropout: A simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Machine Learning Research. 151Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Machine Learning Research, 15(1):1929-1958, 2014. Effective approaches to attention-based neural machine translation. Thang Luong, Hieu Pham, Christopher D Manning, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingACLThang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421. ACL, 2015. Backpropagation through time: what it does and how to do it. Paul Werbos, Proceedings of the IEEE. 7810Paul Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550- 1560, 1990. The vanishing gradient problem during learning recurrent neural nets and problem solutions. Sepp Hochreiter, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems. 602Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107-116, 1998. Empirical evaluation of gated recurrent neural networks on sequence modeling. Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, Yoshua Bengio, arXiv:1412.3555arXiv preprintJunyoung Chung, Çaglar Gülçehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint, arXiv:1412.3555, 2014. Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber, Neural computation. 
98Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997. Bidirectional recurrent neural networks. Mike Schuster, Kuldip K Paliwal, IEEE Transactions on Signal Processing. 4511Mike Schuster and Kuldip K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681, 1997. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Alex Graves, Jürgen Schmidhuber, Neural Networks. 185-6Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602-610, 2005. Speech recognition with deep recurrent neural networks. Alex Graves, Mohamed Abdel-Rahman, Geoffrey E Hinton, Proceedings of the 38th IEEE International Conference on Acoustics, Speech and Signal Processing. the 38th IEEE International Conference on Acoustics, Speech and Signal ProcessingIEEEAlex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. In Proceedings of the 38th IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645-6649. IEEE, 2013. Alex Graves, arXiv:1211.3711Sequence transduction with recurrent neural networks. arXiv preprintAlex Graves. Sequence transduction with recurrent neural networks. arXiv preprint, arXiv:1211.3711, 2012. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Kyunghyun Cho, Çaglar Bart Van Merrienboer, Dzmitry Gülçehre, Fethi Bahdanau, Holger Bougares, Yoshua Schwenk, Bengio, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. the 2014 Conference on Empirical Methods in Natural Language ProcessingEMNLPKyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 
Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724-1734. EMNLP, 2014. Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, Quoc V Le, Proceedings of the 27th Conference on Advances in Neural Information Processing Systems. the 27th Conference on Advances in Neural Information Processing SystemsIlya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Proceedings of the 27th Conference on Advances in Neural Information Processing Systems, pages 3104-3112, 2014. Generating sequences with recurrent neural networks. Alex Graves, arXiv:1308.085012arXiv preprintAlex Graves. Generating sequences with recurrent neural networks. arXiv preprint, arXiv:1308.0850, 2013. A PREPRINT -DECEMBER 12, 2018 Neural machine translation by jointly learning to align and translate. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, arXiv:1409.0473arXiv preprintDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint, arXiv:1409.0473, 2014. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Tim Salimans, Diederik P Kingma, Proceedings of the 29th Conference on Advances in Neural Information Processing Systems. the 29th Conference on Advances in Neural Information Processing Systems901Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Proceedings of the 29th Conference on Advances in Neural Information Processing Systems, page 901, 2016. Tensorflow: A system for large-scale machine learning. 
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zheng, Proceedings of the 12th Conference on Symposium on Operating Systems Design and Implementation. the 12th Conference on Symposium on Operating Systems Design and ImplementationUSENIX AssociationMartín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th Conference on Symposium on Operating Systems Design and Implementation, pages 265-283. USENIX Association, 2016. Pandas: A foundational python library for data analysis and statistics. Python for High Performance and Scientific Computing. Wes Mckinney, Wes McKinney. Pandas: A foundational python library for data analysis and statistics. Python for High Performance and Scientific Computing, pages 1-9, 2011. On the difficulty of training recurrent neural networks. Razvan Pascanu, Tomas Mikolov, Yoshua Bengio, Proceedings of the 30th International Conference on Machine Learning. the 30th International Conference on Machine Learning28Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, volume 28, pages 1310-1318. JMLR.org, 2013. Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 
arXiv preprint, arXiv:1412.6980, 2014. RunDroid: Recovering execution call graphs for android applications. Yujie Yuan, Lihua Xu, Xusheng Xiao, Andy Podgurski, Huibiao Zhu, Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. the 2017 11th Joint Meeting on Foundations of Software EngineeringACMYujie Yuan, Lihua Xu, Xusheng Xiao, Andy Podgurski, and Huibiao Zhu. RunDroid: Recovering execution call graphs for android applications. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, pages 949-953. ACM, 2017. UI/Application exerciser monkey. UI/Application exerciser monkey. https://developer.android.com/studio/test/monkey. Accessed: 2018-07-21. The PageRank citation ranking: Bringing order to the web. Lawrence Page, Sergey Brin, Rajeev Motwani, Terry Winograd, 1999-66Stanford InfoLabTechnical ReportLawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The PageRank citation ranking: Bringing order to the web. Technical Report 1999-66, Stanford InfoLab, November 1999. The anatomy of a large-scale hypertextual web search engine. Sergey Brin, Lawrence Page, Computer Networks. 301-7Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks, 30(1-7):107-117, 1998. Bleu: A method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. the 40th Annual Meeting of the Association for Computational LinguisticsACLKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. ACL, 2002. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. 
Satanjeev Banerjee, Alon Lavie, Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or SummarizationAssociation for Computational LinguisticsSatanjeev Banerjee and Alon Lavie. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72. Association for Computational Linguistics, 2005. Selection and presentation practices for code example summarization. T T Annie, Martin P Ying, Robillard, Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. the 22nd ACM SIGSOFT International Symposium on Foundations of Software EngineeringACMAnnie T. T. Ying and Martin P. Robillard. Selection and presentation practices for code example summarization. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 460-471. ACM, 2014. An estimate of an upper bound for the entropy of english. F Peter, Stephen Della Brown, Vincent J Pietra, Jennifer C Della Pietra, Robert L Lai, Mercer, Computational Linguistics. 181Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, Jennifer C. Lai, and Robert L. Mercer. An estimate of an upper bound for the entropy of english. Computational Linguistics, 18(1):31-40, 1992. Design lessons from the fastest q&a site in the west. Lena Mamykina, Bella Manoim, Manas Mittal, George Hripcsak, Björn Hartmann, Proceedings of the International Conference on Human Factors in Computing Systems. the International Conference on Human Factors in Computing SystemsACMLena Mamykina, Bella Manoim, Manas Mittal, George Hripcsak, and Björn Hartmann. Design lessons from the fastest q&a site in the west. 
In Proceedings of the International Conference on Human Factors in Computing Systems, pages 2857-2866. ACM, 2011. Reducing combinatorics in GUI testing of android applications. Nariman Mirzaei, Joshua Garcia, Hamid Bagheri, Alireza Sadeghi, Sam Malek, Proceedings of the 38th International Conference on Software Engineering. the 38th International Conference on Software EngineeringACMNariman Mirzaei, Joshua Garcia, Hamid Bagheri, Alireza Sadeghi, and Sam Malek. Reducing combinatorics in GUI testing of android applications. In Proceedings of the 38th International Conference on Software Engineering, pages 559-570. ACM, 2016. Is mutation analysis effective at testing Android Apps?. Lin Deng, Jeff Offutt, David Samudio, Proceedings of the 17th International Conference on Software Quality, Reliability and Security. the 17th International Conference on Software Quality, Reliability and SecurityIEEELin Deng, Jeff Offutt, and David Samudio. Is mutation analysis effective at testing Android Apps? In Proceedings of the 17th International Conference on Software Quality, Reliability and Security, pages 86-93. IEEE, 2017. A power primer. Jacob Cohen, Psychological bulletin. 1121155Jacob Cohen. A power primer. Psychological bulletin, 112(1):155, 1992. New effect size rules of thumb. S Shlomo, Sawilowsky, Modern Applied Statistical Methods. 82Shlomo S Sawilowsky. New effect size rules of thumb. Modern Applied Statistical Methods, 8(2):597 -599, 2009. Validity threats in empirical software engineering research -an initial survey. Robert Feldt, Ana Magazinius, Proceedings of the 22nd International Conference on Software Engineering & Knowledge Engineering. the 22nd International Conference on Software Engineering & Knowledge EngineeringKnowledge Systems Institute Graduate SchoolRobert Feldt and Ana Magazinius. Validity threats in empirical software engineering research -an initial survey. 
In Proceedings of the 22nd International Conference on Software Engineering & Knowledge Engineering, pages 374-379. Knowledge Systems Institute Graduate School, 2010. Introduction to Information Retrieval. Christopher D Manning, Prabhakar Raghavan, Hinrich Schütze, Cambridge University PressNew York, NY, USAChristopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA, 2008. Building natural language generation systems. Ehud Reiter, Robert Dale, Cambridge university pressEhud Reiter and Robert Dale. Building natural language generation systems. Cambridge university press, 2000. Jsummarizer: An automatic generator of natural language summaries for Java classes. Laura Moreno, Andrian Marcus, Lori L Pollock, K Vijay-Shanker, Proceedings of the 21st IEEE International Conference on Program Comprehension. the 21st IEEE International Conference on Program ComprehensionIEEELaura Moreno, Andrian Marcus, Lori L. Pollock, and K. Vijay-Shanker. Jsummarizer: An automatic generator of natural language summaries for Java classes. In Proceedings of the 21st IEEE International Conference on Program Comprehension, pages 230-232. IEEE, 2013. Document-topic hierarchies from document graphs. Tim Weninger, Yonatan Bisk, Jiawei Han, Proceedings of the 21st ACM International Conference on Information and Knowledge Management. the 21st ACM International Conference on Information and Knowledge ManagementACMTim Weninger, Yonatan Bisk, and Jiawei Han. Document-topic hierarchies from document graphs. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pages 635-644. ACM, 2012. An introduction to latent semantic analysis. K Thomas, Peter W Landauer, Darrell Foltz, Laham, Discourse Processes. 25Thomas K Landauer, Peter W. Foltz, and Darrell Laham. An introduction to latent semantic analysis. Discourse Processes, 25(2-3):259-284, 1998. 
Exploring content models for multi-document summarization. Aria Haghighi, Lucy Vanderwende, Proceedings of the Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics. the Human Language Technologies: Conference of the North American Chapter of the Association of Computational LinguisticsThe Association for Computational LinguisticsAria Haghighi and Lucy Vanderwende. Exploring content models for multi-document summarization. In Proceedings of the Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, pages 362-370. The Association for Computational Linguistics, 2009. A unified architecture for natural language processing: deep neural networks with multitask learning. Ronan Collobert, Jason Weston, Proceedings of the 25th International Conference on Machine Learning. the 25th International Conference on Machine LearningACM307Ronan Collobert and Jason Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, volume 307, pages 160-167. ACM, 2008. Recurrent continuous translation models. Nal Kalchbrenner, Phil Blunsom, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingACLNal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700-1709. ACL, 2013.
[ "Gilles Guillot \nDepartment of Applied Mathematics and Computer Science\nR. Pedersens Plads\nTechnical University of Denmark\n2800Kongens LyngbyDenmark\n", "Renaud Vitalis \nCentre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance\n", "Arnaud Le Rouzic \nGénome et Spéciation\nLaboratoireÉvolution\n91198Gif-sur-YvetteFrance\n", "Mathieu Gautier \nCentre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance\n", "Gilles Guillot \nDepartment of Applied Mathematics and Computer Science\nR. Pedersens Plads\nTechnical University of Denmark\n2800Kongens LyngbyDenmark\n", "Renaud Vitalis \nCentre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance\n", "Arnaud Le Rouzic \nGénome et Spéciation\nLaboratoireÉvolution\n91198Gif-sur-YvetteFrance\n", "Mathieu Gautier \nCentre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance\n" ]
[ "Department of Applied Mathematics and Computer Science\nR. Pedersens Plads\nTechnical University of Denmark\n2800Kongens LyngbyDenmark", "Centre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance", "Génome et Spéciation\nLaboratoireÉvolution\n91198Gif-sur-YvetteFrance", "Centre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance", "Department of Applied Mathematics and Computer Science\nR. Pedersens Plads\nTechnical University of Denmark\n2800Kongens LyngbyDenmark", "Centre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance", "Génome et Spéciation\nLaboratoireÉvolution\n91198Gif-sur-YvetteFrance", "Centre de Biologie et de Gestion des Populations\n34988Montferrier-sur-Lez cedexFrance" ]
[]
Detecting correlation between allele frequencies and environmental variables as a signature of selection. A fast computational approach for genome-wide studies.AbstractGenomic regions (or loci) displaying outstanding correlation with some environmental variables are likely to be under selection and this is the rationale of recent methods of identifying selected loci and retrieving functional information about them. To be efficient, such methods need to be able to disentangle the potential effect of environmental variables from the confounding effect of population history. For the routine analysis of genome-wide datasets, one also needs fast inference and model selection algorithms. We propose a method based on an explicit spatial model which is an instance of spatial generalized linear mixed model (SGLMM). For inference, we make use of the INLA-SPDE theoretical and computational framework developed by Rue et al.[1]and Lindgren et al.[2]. The method we propose allows one to quantify the correlation between genotypes and environmental variables. It works for the most common types of genetic markers, obtained either at the individual or at the population level. Analyzing simulated data produced under a geostatistical model then under an explicit model of selection, we show that the method is efficient. We also re-analyze a dataset relative to nineteen pine weevils (Hylobius abietis) populations across Europe. The method proposed appears also as a statistically sound alternative to the Mantel tests for testing the association between genetic and environmental variables.
null
[ "https://arxiv.org/pdf/1206.0889v2.pdf" ]
15,540,462
1206.0889
a50c005452571232fbbfce1fb655b50056e58a73
May 1, 2014 / 12 Aug 2013. Gilles Guillot (Department of Applied Mathematics and Computer Science, R. Pedersens Plads, Technical University of Denmark, 2800 Kongens Lyngby, Denmark), Renaud Vitalis (Centre de Biologie et de Gestion des Populations, 34988 Montferrier-sur-Lez cedex, France), Arnaud Le Rouzic (Laboratoire Évolution, Génome et Spéciation, 91198 Gif-sur-Yvette, France), Mathieu Gautier (Centre de Biologie et de Gestion des Populations, 34988 Montferrier-sur-Lez cedex, France). Preprint submitted to Spatial Statistics. Keywords: SNP and AFLP data; genomics; spatial population structure; Mantel test; model choice; MCMC-free method; INLA; GMRF.

Background

Detecting signature of natural selection

Natural (or Darwinian) selection is the gradual process by which biological traits (phenotypes) become either more or less common in a population as a consequence of the reproductive success of the individuals that bear them. Over time, this process can result in populations that specialize for particular ecological niches and may eventually lead to the emergence of new species. The study of selection is an important aspect of evolutionary biology, as it provides insight about speciation but also about the genetic response, possibly of lesser magnitude, to environmental variation. An important goal of such analyses consists in identifying genes or genomic regions that have been the target of selection [3, 4]. Identifying such genes may provide important information about their function, which may eventually help improving crops [5] and livestock [6]. Recent genotyping techniques make it possible to obtain DNA sequences at a high number of genomic locations in a growing number of both model and non-model species [7]. This opens the door to methods of identifying regions under selection, even for organisms whose genome is poorly documented (non-model organisms), but the large size of such datasets ($10^4$-$10^6$ variables) makes the task a formidable statistical challenge.
Recent methods of detecting selection

So far, identifying genomic regions targeted by selection has relied extensively on the analysis of genetic data alone, based on the idea that, if local selection occurs at a given chromosome region (or locus), differentiation (the genetic difference between populations) will increase at this locus compared with what is theoretically expected at neutral loci [3]. To further identify the environmental characteristics associated with the observed genetic variation, a recent family of methods attempts to identify loci displaying outstanding correlation with some environmental variables. This more direct approach has the potential advantage of providing functional information about those conspicuous loci. The data required for the latter type of analyses typically consist of the genotypes of a set of individuals at various genomic loci and measurements of various environmental variables at the same sampling sites. The method amounts to quantifying the statistical dependence between allele counts and environmental variables. The most natural method to model the dependence of count data on a quantitative or qualitative variable is logistic regression, as implemented in this context by Joost et al. [8]. However, plain logistic regression assumes that allele counts among different populations or individuals are independent conditionally on the environmental variable. Doing so, logistic regression fails to capture the residual genetic dependence of neighboring individuals or populations due to their common ancestry and recent common evolutionary history. Another method to test the dependence between genetic and environmental variables is the Mantel test and its variant, the partial Mantel test. These tests attempt to assess the significance of a correlation coefficient by re-sampling with permutations. They have long been popular methods in ecology and evolution.
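The plain logistic-regression baseline described above (allele counts regressed on an environmental variable, with no spatial term) can be sketched as a stand-alone Newton-Raphson fit. This is an illustrative toy, not the SAM implementation of [8]; the function name and the simulated data are our own assumptions:

```python
import numpy as np

def fit_logistic_counts(y, z, n, n_iter=50):
    """Binomial logistic regression of allele counts z (out of n trials) on an
    environmental variable y, fitted by Newton-Raphson (IRLS).
    Returns the estimated (intercept, slope)."""
    X = np.column_stack([np.ones_like(y), y])   # design matrix [1, y]
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted allele frequencies
        W = n * p * (1.0 - p)                   # Fisher information weights
        grad = X.T @ (z - n * p)                # score vector
        H = X.T @ (X * W[:, None])              # observed information
        beta = beta + np.linalg.solve(H, grad)  # Newton step
    return beta

# Simulated example: allele frequency genuinely driven by the environment.
rng = np.random.default_rng(0)
y = rng.normal(size=200)                        # environmental variable at 200 sites
n = np.full(200, 40)                            # 40 alleles sampled per site
true_a, true_b = 1.2, -0.3
p_true = 1.0 / (1.0 + np.exp(-(true_b + true_a * y)))
z = rng.binomial(n, p_true)                     # observed allele counts

b_hat, a_hat = fit_logistic_counts(y, z, n)
```

As the text notes, such a fit ignores the residual spatial dependence between sites, which is precisely what the SGLMM below adds.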
However, a recent study [9] shows that they are not appropriate if the data are spatially correlated. The method proposed by Coop et al. [10] attempts to model genetic structure by including a random term in the logistic regression, in a fashion similar to a generalized linear mixed model. They propose to do inference with MCMC. A recent study by De Mita et al. [11] shows that under biologically realistic conditions, accounting for structure in the data as in [10] improves the accuracy of inferences. The goal of the present paper is to extend the latter approach by rooting it in a spatially explicit model and implementing an MCMC-free inference approach. The method proposed is described in the next section. Next we illustrate the method's accuracy by analyzing simulated data produced first under a purely geostatistical model, then under a biological model that simulates selection explicitly. We conclude by discussing our results and outlining possible extensions.

Method proposed

Data considered

We consider a set of individuals observed at various geographical locations. Each individual is genotyped at $L$ genetic loci. Besides, we consider that these loci are bi-allelic, i.e. the sequence observed at a particular locus can be only of two types (denoted arbitrarily A/a in the sequel). We consider haploid or diploid organisms, i.e. organisms that carry either one or two copies of each chromosome. A genotype is therefore a vector with $L$ entries in $\{0, 1\}$ or $\{0, 1, 2\}$ respectively. As it is frequent to sample more than one individual at each location, we denote by $n_{il}$ the haploid sample size of population $i$ for locus $l$, that is, the number of individuals at site $i$ genotyped at locus $l$ times the number of chromosome copies carried by the organism under study.

The pine weevil dataset

To illustrate the method proposed here, we will re-analyse a dataset relative to pine weevils initially produced by Conord et al. [12].
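The genotype encoding just described aggregates naturally into the per-population allele counts and haploid sample sizes $n_{il}$ used throughout. A toy sketch, assuming complete (no missing) genotypes; the helper name is illustrative:

```python
import numpy as np

def population_allele_counts(G, pop, ploidy=2):
    """Aggregate individual genotypes into per-population allele counts.

    G      : (individuals x loci) array, each entry = number of copies of
             allele A carried by that individual (0..ploidy)
    pop    : population label for each individual
    Returns z (populations x loci, counts of allele A) and n, where
    n[i, l] = ploidy * number of individuals of population i typed at locus l
    (the haploid sample size). Assumes no missing genotypes.
    """
    pops = np.unique(pop)
    z = np.array([G[pop == p].sum(axis=0) for p in pops])
    n = np.array([ploidy * (pop == p).sum() * np.ones(G.shape[1], int)
                  for p in pops])
    return z, n

# Toy example: 4 diploid individuals, 3 loci, 2 populations.
G = np.array([[0, 1, 2],
              [1, 1, 0],
              [2, 2, 1],
              [0, 2, 2]])
pop = np.array([0, 0, 1, 1])
z, n = population_allele_counts(G, pop)
```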
Anticipating on the results section, and for the sake of fleshing out the presentation of the method in the next section, we briefly outline the main features of this dataset. It consists of 367 pine weevil individuals (Hylobius abietis) sampled in 19 geographical locations across Europe (figure 1). Each individual has been genotyped at 83 genetic markers (see below for details). This dataset has been analysed by Joost et al. [8], who looked for signatures of selection by comparing spatial genetic variation to ten environmental variables. We focus here on a subset consisting of four environmental variables: average diurnal temperature range, number of days with ground frost, average monthly precipitation and average wind speed.

Model

Likelihood

We denote by $(s_i)_{i=1,\dots,I}$ a collection of geographical coordinates, $(y_i)_{i=1,\dots,I}$ some measurements of an environmental variable obtained at these sites, and $(z_{il})_{i=1,\dots,I;\,l=1,\dots,L}$ the number of alleles of type A at locus $l$ observed in a population sampled at site $s_i$. We also denote by $f_{il}$ the local frequency of allele A at geographical location $s_i$ for locus $l$.
We make the assumption that there is no within-population statistical structure and that, for organisms harboring more than one copy of each chromosome, the various alleles carried at a locus by an individual are independent, so that the allele counts are sampled from a binomial distribution:

$$z_{il} \sim \mathrm{Binom}(n_{il}, f_{il}) \qquad (1)$$

where $n_{il}$ denotes the number of alleles sampled (or haploid sample size) at site $s_i$. The above assumes that the data at hand provide exact information about the alleles carried by each individual. This is not the case for certain genetic markers such as amplified fragment length polymorphism markers (AFLP). With this type of markers, one can only know whether an individual carries allele A or not, but the number of copies carried by each individual is not known. For diploid organisms, this leads to a genotype ambiguity: the record of allele A may correspond to genotypes (a, A) or (A, A). We therefore consider an alternative likelihood for the case above, where $z_{il}$ now denotes the number of individuals at sampling site $i$ for which allele A has been observed. Still denoting by $f_{il}$ the frequency of a reference allele A at locus $l$ at geographical site $s_i$, we have now

$$z_{il} \sim \mathrm{Binom}(n_{il}, \tilde{f}_{il}) \quad \text{with} \quad \tilde{f}_{il} = 2 f_{il}(1 - f_{il}) + f_{il}^2 \qquad (2)$$

Latent Gaussian structure

We model the dependency between an environmental variable $y$ and the allele frequency at locus $l$ by assuming that

$$f_{il} = \frac{1}{1 + \exp[-(x_{il} + a_l y_i + b_l)]} \qquad (3)$$

where $x_{il}$ is an unobserved spatial random effect that accounts for spatial auto-correlation due to population history, and $(a_l, b_l)$ are parameters that quantify the locus-specific effect of the environmental variable $y_i$. The environmental variable is observed and is treated as a spatially variable explanatory variable (fixed effect). The variables $x_l = (x_{1l}, \dots, x_{Il})$ are unobserved random effects and are assumed to be independent replicates of the same Gaussian random field.
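Taken together, Eqs. 1-3 specify a generative model that is easy to simulate from. In the illustrative sketch below, the spatial random effect is replaced by i.i.d. Gaussian noise purely for brevity (the actual Matérn-correlated field is specified next), and all sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

I, L = 25, 5                              # sites and loci (arbitrary)
y = rng.normal(size=I)                    # environmental variable at each site
x = rng.normal(size=(I, L))               # stand-in for the spatial effect x_il
a = rng.normal(size=L)                    # locus-specific slopes a_l
b = rng.normal(size=L)                    # locus-specific intercepts b_l

# Eq. 3: logistic link from latent effect + environment to allele frequency
f = 1.0 / (1.0 + np.exp(-(x + y[:, None] * a[None, :] + b[None, :])))

# Eq. 1: co-dominant markers, allele counts are binomial draws
n = np.full((I, L), 20)                   # haploid sample sizes n_il
z_codominant = rng.binomial(n, f)

# Eq. 2: dominant markers (e.g. AFLP), probability of *observing* allele A
f_tilde = 2 * f * (1 - f) + f**2          # equals 1 - (1 - f)^2
z_dominant = rng.binomial(np.full((I, L), 10), f_tilde)
```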
Doing so, we assume the absence of linkage disequilibrium (i.e. the absence of statistical dependence across loci). By assuming a common distribution for all vectors $x_l$, we inject into the model the key information that there is a characteristic spatial scale that is common to all loci and reflects the species- and area-specific population structure of the data under study. As commonly done in spatial statistics [13], we make the assumption that $x$ is 0-mean, isotropic and stationary. Further, we assume that the stationary covariance $C(s, s') = C(h)$ belongs to the Matérn family, i.e.

$$C(h) = \frac{\sigma^2}{2^{\nu-1}\Gamma(\nu)} (\kappa h)^{\nu} K_{\nu}(\kappa h) \qquad (4)$$

where $K_{\nu}$ is the modified Bessel function of the second kind and order $\nu > 0$, $\kappa > 0$ is a scaling parameter and $\sigma^2$ is the marginal variance.

Parameter inference

A key feature of the model above is that it can be handled within the theoretical and computational framework developed by Rue et al. [1] and Lindgren et al. [2]. The former develops a framework for Bayesian inference in a broad class of models enjoying a latent Gaussian structure. The latter bridges a gap between Markov random field and Gaussian random field theory, making it possible to combine the flexibility of Gaussian random fields for modelling and the computational efficiency of Markov random fields for inference. The approach of Lindgren et al. [2] is based on the observation that a Gaussian random field $x(s)$ with a Matérn covariance function is the solution of the stochastic partial differential equation

$$(\kappa^2 - \Delta)^{\nu/2} (\tau x(s)) = \mathcal{W}(s) \qquad (5)$$

where $\Delta$ is the Laplacian, $\kappa$ is the scale parameter, $\nu$ controls the smoothness and $\tau$ controls the variance. One then approximates $x(s)$ by

$$x(s) = \sum_k \psi_k(s) w_k \qquad (6)$$

where the $\psi_k(\cdot)$ are basis functions with compact support, choosing the weights $w_k$ so that the distribution of the function $x(s)$ approximates the distribution of the solution of Eq. 5. Casting the present problem in the framework of INLA-SPDE opens the door to accurate and fast computations using the R package inla.
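Eq. 4 is straightforward to evaluate numerically. A small sketch follows; the function name is illustrative, and its default parameters echo the $\sigma^2 = 2$, $\nu = 1$, $\kappa = 0.1$ used in the simulation study below:

```python
import numpy as np
from scipy.special import gamma, kv   # kv = modified Bessel function K_nu

def matern_cov(h, sigma2=2.0, nu=1.0, kappa=0.1):
    """Matérn covariance of Eq. 4:
    C(h) = sigma^2 / (2^(nu-1) Gamma(nu)) * (kappa h)^nu * K_nu(kappa h),
    with C(0) = sigma^2 (the marginal variance)."""
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)       # limit as h -> 0 is sigma^2
    pos = h > 0
    kh = kappa * h[pos]
    c[pos] = sigma2 / (2.0 ** (nu - 1.0) * gamma(nu)) * kh ** nu * kv(nu, kh)
    return c

h = np.linspace(0.0, 50.0, 6)
# Sanity check: for nu = 1/2 the Matérn family reduces to the exponential
# covariance sigma^2 * exp(-kappa * h)
c_half = matern_cov(h, sigma2=1.0, nu=0.5, kappa=0.2)
```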
The method of Rue et al. [1] is based on Laplace approximations of the various conditional densities involved in the inference of the hyper-parameters and latent variables, and makes use of the Markov structure of the latent variables in the computation. In contrast with MCMC, the INLA method does not compute estimates of the joint posterior distribution of hyper-parameters and latent variables; it only estimates the marginal posterior densities. For parameter inference, the data relative to all loci are combined into a matrix $Z = (z_l)_{l=1,\dots,L}$ to compute the marginal posterior distributions $\pi(\kappa|Z)$ and $\pi(\sigma^2|Z)$. For computational reasons [2], the smoothness parameter $\nu$ is taken equal to one. A log-Gamma prior is assumed for $\kappa$ and a Normal prior is assumed for the fixed effects ($a_l$ and $b_l$). From these marginal posterior distributions, we derive estimates of $\kappa$ and $\sigma$ as posterior means. In the presence of a large number of loci, implementing the above on the full dataset may become impractical due to memory load issues. In this case we recommend inferring the parameters of the spatial covariance on a random subset of loci.

Model selection

For each locus, we are concerned with selecting among two competing models: a model in which the environment has an effect, i.e. where $f_{il} = 1/[1 + \exp(-(x_{il} + a_l y_i + b_l))]$, and a reduced model in which the environmental variable has no effect, namely $a_l = 0$ in the previous equation. For the vector $z_l = (z_{1l}, \dots, z_{Il})$ of data at locus $l$, we denote by $\pi(z_l|m) = \int \pi(z_l|\theta, m)\,\pi(\theta|m)\,d\theta$ the evidence, or integrated likelihood, of the data under model $m$. Assessing the strength of association with environmental variables can be done by computing the Bayes factor $BF_l = \pi(z_l|m_1)/\pi(z_l|m_0)$. We compute Bayes factors $BF_l$ and estimate $a_l$ and $b_l$ locus-by-locus. In this second step of computations, the variance and scale parameters are fixed to the values inferred from the global dataset $Z$, as explained in the previous section. The Bayes factors can be used to flag loci displaying outstanding dependence with the environmental variables and to rank loci by decreasing evidence of genetic selection.

3. Analysis of simulated and real data

3.1. Simulations from a geostatistical model

We analyse here data simulated under the exact model described above. We consider first a dataset of 1000 bi-allelic dominant markers (Eq. 2) for 500 individuals observed at 25 geographical sites uniformly sampled in the unit square (20 individuals per site), which are typical sample sizes encountered in molecular ecology studies. For the fixed effects (Eq. 3), we draw $a_l$, $b_l$ and $(y_i)_{i=1,\dots,I}$ independently from a $N(0, 1)$ distribution. The random effect $x_l$ is a Gaussian random field with a Matérn covariance function with parameters $\sigma^2 = 2$, $\nu = 1$ and $\kappa = 0.1$. In the inference with INLA, we use everywhere the default prior distributions. The results of the inference, reported in figure 2, show an excellent accuracy in the inference of the underlying covariance function and also a good accuracy in the estimation of the fixed effects (parameters $a_l$ and $b_l$). In other simulation experiments under the same geostatistical model with other combinations of parameters, we sometimes observed that the estimation of the variance parameter could be inaccurate, for example with a range parameter $\kappa = 0.1$. However, this does not seem to affect the accuracy in the estimation of the other parameters. In particular, the slope $a_l$ in the fixed effect, which quantifies the effect of the environmental variable, is consistently accurately estimated.

3.2. Simulations from a landscape genetic model

Individual-based simulations are produced here using the computer program SimAdapt [14]. The genome of each individual consists in 120 genetically-independent bi-allelic co-dominant markers: 100 neutral loci and 20 loci under habitat-specific selection. Alleles at non-neutral loci are specific to one of the two habitats 1 or 2. Homozygotes (A, A) in habitat 1 have a fitness of 1, while homozygotes (a, a) have a fitness of $1 - s$ (and vice versa in habitat 2). The fitness of heterozygotes is $1 - s/2$ in both habitats. Locus-specific fitnesses combine multiplicatively across loci to give the fitness of individuals. Among the selected loci, ten are subject to selection in one habitat, the ten others in the other habitat. Individuals are considered as hermaphrodites, and mate randomly in their patch, the mating probability being proportional to their fitness. Additional details on the model are provided in Rebaudo et al. [14]. The landscape is a 30 × 10 grid of 300 cells, each of which can represent habitat 1 or 2. Each cell has a carrying capacity of 100 individuals, and populations grow logistically with a rate of 0.5. The landscape is designed so that habitats are distributed across a linear East-West gradient of habitat frequencies (see habitat and sampling locations in figures 3 and 4, top-left panel), the frequency of habitat 1 being 1 at the eastern edge and 0 at the western edge. The selection coefficient is set to $s = 0.1$ (each maladapted locus decreases the fitness by 10%). The probability of dispersal is set to $d = 0.1$ or $d = 0.01$ per individual and per generation, a dispersal event consisting in moving an individual by one cell (vertically or horizontally). Simulations start with a single cell at carrying capacity (100 individuals) close to the western edge of the grid, and mimic the invasion of the landscape for 30 generations (enough to reach all cells in the landscape). At generation 30, 25 individuals (fewer if the patch is not populated enough) are sampled in each of 200 cells randomly located in the grid, and genotyped at both neutral and selected loci. Here there is a loose connection between the parametrization of our inference model and that of the simulation model; in particular, there is no explicit covariance function that describes the spatial genetic structure. In the inference with INLA, we use everywhere the default prior distributions. What we check here is the ability of the method to detect loci that are genuinely under selection, and its false positive rate. The results are summarized in figures 3 and 4 and show good performances with respect to these two tasks.

3.3. Analysis of a pine weevil dataset in Europe

We re-analyse this dataset with the SGLMM described above and also with a plain logistic regression. The latter analysis differs from that of Joost et al. [8] in the model selection strategy: Joost et al. [8] used a somewhat ad hoc combination of two tests based respectively on the likelihood ratio and the Wald statistic, while we use here Bayes factors both for the logistic regression and the SGLMM. The results for four arbitrarily chosen environmental variables out of the ten variables of the initial dataset are summarised in figure 5. With a cut-off set at $BF > 3$, under the SGLMM (resp. logistic regression), 3 loci (resp. 11 loci) are significantly associated with the diurnal temperature range. The numbers of significant loci are 0 (5), 0 (10) and 0 (4) for the number of days with ground frost, monthly precipitation and wind speed, respectively. Of the four environmental variables considered here, only one is significantly correlated with the genetic data under the SGLMM. This is consistent with the fact that the data have been collected at highly scattered locations, which makes the SGLMM better suited to "correct" the sample size for spatial auto-correlation. We note however that there is a strong agreement between the loci detected as most significantly associated with the environment in our analysis under the SGLMM and in the previous analysis of Joost et al. [8].

Conclusion

The approach we propose extends existing methods in several ways: we introduce a method that (i) is spatially explicit, (ii) handles spatial coordinates either on $\mathbb{R}^2$ (plane) or $S^2$ (sphere), (iii) works for both co-dominant and dominant markers, (iv) is equally well suited for individual data or allele counts aggregated at the population level, (v) does not require any calibration step on a subset of neutral loci, (vi) can handle quantitative as well as categorical environmental variables, (vii) returns Bayesian measures of model fit, and (viii) does not rely on MCMC computation. One limit common to the approach proposed here and that of Coop et al. [10] is that loci are assumed to be conditionally independent (no residual linkage disequilibrium not accounted for by $x$ and $y$). This assumption will clearly be violated for dense SNP datasets, and this aspect requires more work for a rigorous and efficient control of false discovery. However, we note that a Bonferroni-type correction offers a solution to protect oneself against false positives. Moreover, potential linkage disequilibrium not accounted for has no effect on the ranking of the loci in terms of evidence of selection. The approach proposed can therefore readily be used to identify conspicuous loci that are likely to be the target of selection. The model we described embeds the main features of the model of Coop et al. [10], and the magnitude of the improvement in terms of inference accuracy brought by the use of an explicit spatial model depends on how much this model complies with the data at hand. We expect our model to be best suited for datasets at a scale that is large enough to observe genetic variation and spatial auto-correlation, but small enough for the stationary model to make sense. The latter condition suggests that datasets collected at the continental scale may be the best targets for our approach.

Figure 1: Geographical locations of the nineteen pine weevil populations sampled in Europe.

Figure 2: Results of inference on data from geostatistical simulations. 25 geographical sites, 20 individuals per site, 1000 AFLP markers. Top row: slope $a_l$ and intercept $b_l$ of the fixed effect (Eq. 3). Bottom row: the dashed red lines depict the true Matérn covariance and correlation functions for the hidden Gaussian fields, the continuous grey line depicts the estimated Matérn functions and the black dots the numerical result of the GMRF approximation underlying the INLA-SPDE method.

Figure 3: Results for data simulated from a landscape genetics model (dispersal probability = 0.1 per individual and per generation). Top left: habitat (environmental variable) coded as two colors and sampling sites (triangles); middle left and bottom left: the continuous grey line depicts the estimated Matérn functions and the black dots the numerical result of the GMRF approximation underlying the INLA-SPDE method. Right, from top to bottom: Bayes factors and parameters $a_l$ and $b_l$ for the 120 loci. Dark and light green correspond to positively and negatively selected loci respectively. The loci genuinely under selection are indexed 101-120.

Figure 4: Results for data simulated from a landscape genetics model (dispersal probability = 0.01 per individual and per generation). Top left: habitat (environmental variable) coded as two colors and sampling sites (triangles); middle left and bottom left: the continuous grey line depicts the estimated Matérn functions and the black dots the numerical result of the GMRF approximation underlying the INLA-SPDE method. Right, from top to bottom: Bayes factors and parameters $a_l$ and $b_l$ for the 120 loci. Dark and light green correspond to positively and negatively selected loci respectively. The loci genuinely under selection are indexed 101-120.

Figure 5: Results of inference on the pine weevil data analyzed with a logistic regression (LR) and our spatial generalized linear mixed model (SGLMM). Loci with a Bayes factor in favor of a SGLMM including an effect of the environment variable are flagged with a vertical dotted line.
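The locus-wise Bayes factor of the Model selection section can be illustrated on a drastically simplified, one-parameter toy version of the model: no spatial effect, intercept fixed at zero, and the evidence obtained by brute-force numerical integration rather than INLA. Everything below is a hypothetical sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data at a single locus: environment y, allele counts z out of n samples
I = 30
y = rng.normal(size=I)
n = np.full(I, 20)
a_true = 1.5                                   # strong environmental effect
p_true = 1.0 / (1.0 + np.exp(-a_true * y))
z = rng.binomial(n, p_true)

def log_lik(a):
    """Binomial log-likelihood for slope a (intercept fixed at 0).
    The binomial coefficients cancel in the Bayes factor and are omitted."""
    p = 1.0 / (1.0 + np.exp(-a * y))
    return float(np.sum(z * np.log(p) + (n - z) * np.log1p(-p)))

# Evidence of m1: integrate the likelihood against a N(0, 1) prior on a grid
grid = np.linspace(-5.0, 5.0, 2001)
log_post = np.array([log_lik(a) for a in grid]) - 0.5 * grid**2 \
           - 0.5 * np.log(2 * np.pi)
m = log_post.max()                             # log-sum-exp for stability
log_evidence_m1 = m + np.log(np.sum(np.exp(log_post - m)) * (grid[1] - grid[0]))

# Evidence of m0 (a = 0 fixed): no free parameter, evidence = likelihood at 0
log_evidence_m0 = log_lik(0.0)

log_bf = log_evidence_m1 - log_evidence_m0     # log of BF = pi(z|m1)/pi(z|m0)
```

With data simulated under a genuine environmental effect, the log Bayes factor comes out positive, i.e. the evidence favours the model with an effect.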
[1] H. Rue, S. Martino, N. Chopin, Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations, Journal of the Royal Statistical Society, series B 71 (2) (2009) 1-35.
[2] F. Lindgren, H. Rue, E. Lindström, An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach, Journal of the Royal Statistical Society, series B 73 (4) (2011) 423-498.
[3] R. Nielsen, Molecular signatures of natural selection, Annual Review of Genetics 39 (2005) 197-218.
[4] M. M. Hansen, I. Olivieri, D. M. Waller, E. E. Nielsen, and the GEM Working Group, Monitoring adaptive genetic responses to environmental change, Molecular Ecology 21 (6) (2012) 1311-1329.
[5] M. Yamasaki, M. I. Tenaillon, I. Vroh Bi, S. Schroeder, H. Sanchez-Villeda, J. Doebley, B. Gaut, M. McMullen, A large scale screen for artificial selection in maize identifies candidate agronomic loci for domestication and crop improvement, Plant Cell 17 (11) (2005) 2859-2872.
[6] L. Flori, S. Fritz, F. Jaffrezic, M. Boussaha, I. Gut, S. Heath, J. Foulley, M. Gautier, The genome response to artificial selection: a case study in dairy cattle, PLoS One 4 (8) (2009) e6595.
[7] J. Davey, P. Hohenlohe, P. Etter, J. Boone, J. Catchen, M. Blaxter, Genome-wide genetic marker discovery and genotyping using next-generation sequencing, Nature Review Genetics 12 (2011) 499-510.
[8] S. Joost, A. Bonin, M. Bruford, L. Després, C. Conord, G. Erhardt, P. Taberlet, A spatial analysis method (SAM) to detect candidate loci for selection: towards a landscape genomics approach to adaptation, Molecular Ecology 16 (2007) 3955-3969.
[9] G. Guillot, F. Rousset, Dismantling the Mantel tests, Methods in Ecology and Evolution 4 (4) (2013) 336-344.
[10] G. Coop, D. Witonsky, A. Di Rienzo, J. Pritchard, Using environmental correlations to identify loci under selection, Genetics 185 (2010) 1411-1423.
[11] S. De Mita, A. Thuillet, L. Gay, N. Ahmadi, S. Manel, J. Ronfort, Y. Vigouroux, Detecting selection along environmental gradients: analysis of eight methods and their effectiveness for outbreeding and selfing populations, Molecular Ecology 22 (5) (2013) 1383-1399.
[12] C. Conord, G. Lempérière, P. Taberlet, L. Després, Genetic structure of the forest pest Hylobius abietis on conifer plantations at different spatial scales in Europe, Heredity 97 (2006) 46-55.
[13] J. Chilès, P. Delfiner, Geostatistics: Modeling Spatial Uncertainty, Wiley, Hoboken, NJ, USA, 1999.
[14] F. Rebaudo, A. Le Rouzic, S. Dupas, J. Silvain, M. Harry, O. Dangles, SimAdapt: An individual-based genetic model for simulating landscape management impacts on populations, Methods in Ecology and Evolution 4 (6) (2013) 595-600.
[]
[ "Null Similar Curves with Variable Transformations in Minkowski 3-space", "Null Similar Curves with Variable Transformations in Minkowski 3-space" ]
[ "E Mehmet Önder [email protected] \nFaculty of Science and Arts\nDepartment of Mathematics\nCelal Bayar University\nMuradiye Campus45047Muradiye, ManisaTurkey\n" ]
[ "Faculty of Science and Arts\nDepartment of Mathematics\nCelal Bayar University\nMuradiye Campus45047Muradiye, ManisaTurkey" ]
[]
In this study, we define a family of null curves in the Minkowski 3-space E^3_1, called null similar curves, and obtain some properties of these special curves. We show that two null curves are null similar curves if and only if they form a null Bertrand pair. Moreover, we obtain that the families of null geodesics and null helices form families of null similar curves with variable transformation. MSC: 53B30, 53C40.
null
[ "https://arxiv.org/pdf/1205.2239v1.pdf" ]
119,627,624
1205.2239
65fffaaf2dfee6706531cfc10c77e1681c92166a
Null Similar Curves with Variable Transformations in Minkowski 3-space

Mehmet Önder ([email protected]), Department of Mathematics, Faculty of Science and Arts, Celal Bayar University, Muradiye Campus, 45047 Muradiye, Manisa, Turkey

Keywords: Cartan frame, null curve, similar curve

Abstract. In this study, we define a family of null curves in the Minkowski 3-space E^3_1, called null similar curves, and obtain some properties of these special curves. We show that two null curves are null similar curves if and only if they form a null Bertrand pair. Moreover, we obtain that the families of null geodesics and null helices form families of null similar curves with variable transformation. MSC: 53B30, 53C40.

Introduction

In the study of relativity theory, a unit speed future-pointing timelike curve in a space-time (a connected and time-oriented 4-dimensional Lorentz manifold) is thought of as the locus of a material particle in the space-time. The unit-speed parameter of this curve is called the proper time of the material particle. Analogously, in relativity theory a future-pointing null geodesic is thought of as the locus of a lightlike particle. More generally, from the differential geometric point of view, the study of null curves has its own geometric interest, because the other curves (spacelike and timelike curves) of Lorentz space can be studied by an approach similar to the one used in positive definite Riemannian geometry, whereas null curves have different properties, and the results of null curve theory are not analogues of the Riemannian case. Motivated by the growing importance of null curves in mathematical physics, some special topics of curve theory have been studied for null curves by many mathematicians. Ferrandez, Gimenez and Lucas have defined and studied null helices in Lorentzian space forms [4].
Honda and Inoguchi have given a characterization of null cubics [5]. Later, Balgetir, Bektaş and Inoguchi considered the notion of Bertrand curves for Cartan framed null curves and showed that null Bertrand curves are null geodesics or Cartan framed null curves with constant second curvature [1]. Moreover, Öztekin and Ergüt have given characterizations of null Mannheim curves in E^3_1 [7]. Recently, a new definition of special curves, similar curves with variable transformation, has been given by El-Sabbagh and Ali [3]. In this work we consider the notion of similar curves for Cartan framed null curves in E^3_1. We give some theorems characterizing these special null curves and we show that the families of null geodesics and null helices form families of null similar curves with variable transformation.

Preliminaries

Let E^3_1 be the Minkowski 3-space with the natural Lorentz metric

⟨ , ⟩ = −dx_1^2 + dx_2^2 + dx_3^2,  (1)

where (x_1, x_2, x_3) is a rectangular coordinate system of E^3_1. According to this metric, an arbitrary vector v = (v_1, v_2, v_3) in E^3_1 can have one of three Lorentzian causal characters: it is spacelike if ⟨v, v⟩ > 0 or v = 0, timelike if ⟨v, v⟩ < 0, and null (lightlike) if ⟨v, v⟩ = 0 and v ≠ 0 [6]. For x = (x_1, x_2, x_3) and y = (y_1, y_2, y_3) in E^3_1, the vector product of x and y is defined by

x ∧ y = det[ −e_1, e_2, e_3 ; x_1, x_2, x_3 ; y_1, y_2, y_3 ] = ( x_3 y_2 − x_2 y_3, x_3 y_1 − x_1 y_3, x_1 y_2 − x_2 y_1 ),  (2)

where {e_1, e_2, e_3} is the standard basis and

δ_ij = 1 if i = j, δ_ij = 0 if i ≠ j  (3)

is the Kronecker delta. Let a = a(s) be a null curve in E^3_1 parametrized by the pseudo arc-length s and framed by the Cartan frame {α, γ, β} with α = da/ds. Then the Cartan (Frenet) equations are

α′ = κβ, γ′ = τβ, β′ = −τα − κγ,  (4)

with

⟨α, α⟩ = ⟨γ, γ⟩ = 0, ⟨β, β⟩ = ⟨α, γ⟩ = 1,  (5)

where β is defined by β = α × γ. The functions κ and τ are called the curvature and torsion of a(s), respectively. We call the vector fields α, β and γ the tangent, the principal normal and the binormal vector field, respectively [5]. After these definitions we give and prove the following theorem, which will be used in the next section.
Theorem 2.1. Let a = a(s) be a Cartan framed null curve in E^3_1 with curvature κ ≠ 0 and torsion τ, and let φ = ∫ κ(s) ds. Then the null tangent vector field α satisfies a vector differential equation of third order given by

d^3α/dφ^3 + 2 f(φ) dα/dφ + (df/dφ) α = 0,  (6)

where f(φ) = τ(φ)/κ(φ).

Proof: Writing the derivatives given in (4) with respect to φ, we have

dα/dφ = (dα/ds)(ds/dφ) = (1/κ)(κβ) = β,
dγ/dφ = (dγ/ds)(ds/dφ) = (1/κ)(τβ) = f(φ) β,
dβ/dφ = (dβ/ds)(ds/dφ) = (1/κ)(−τα − κγ) = −f(φ) α − γ,  (7)

respectively, where f(φ) = τ(φ)/κ(φ). The corresponding matrix form of (4) can then be written as

d/dφ (α, γ, β)^T = [ 0, 0, 1 ; 0, 0, f(φ) ; −f(φ), −1, 0 ] (α, γ, β)^T.  (8)

From the first and third equations of the new Frenet derivatives (8) we have

γ = −( f(φ) α + d^2α/dφ^2 ).  (9)

Substituting the above equation into the second equation of (8) gives the desired equation (6).

Definition 2.2. Let a and b be Cartan framed null curves in E^3_1 with principal normal vector fields β_a and β_b, respectively. Then the pair of curves (a, b) is called a (null) Bertrand pair if the vector fields β_a and β_b are linearly dependent [1].

Definition 2.3. A null curve a(s_a) is said to be a null helix if it has constant Cartan curvatures [4].

Null Similar Curves with Variable Transformation in E^3_1

In this section we introduce the definition and characterizations of null similar curves with variable transformation in E^3_1. First, we give the following definition.

Definition 3.1. Let a = a(s_a) and b = b(s_b) be Cartan framed null curves in E^3_1 with Cartan frames {α_a, γ_a, β_a} and {α_b, γ_b, β_b}, curvatures κ_a, κ_b and torsions τ_a, τ_b, respectively. Then a and b are called null similar curves with variable transformation λ_a^b if there exists a variable transformation

s_a = ∫ λ_a^b(s_b) ds_b  (11)

of the arc lengths such that the null tangents are the same for the two curves, i.e.,

α_a = α_b  (12)

for all corresponding values of the parameters under the transformation λ_a^b. All curves satisfying this condition are called a family of null similar curves with variable transformation.

If we integrate (12), we obtain the following theorem.

Theorem 3.1. The position vectors of the members of a family of Cartan framed null similar curves with variable transformation can be written in the form b(s_b) = ∫ α_a(s_a(s_b)) ds_b + c for a constant vector c.

Then we can give the following theorems characterizing null similar curves. In the following, whenever we talk about Cartan framed null curves a(s_a) and b(s_b), we mean that the curves have the Frenet frames and invariants given in Definition 3.1.

Theorem 3.2. Let a(s_a) and b(s_b) be Cartan framed null curves in E^3_1. Then a and b are null similar curves with variable transformation if and only if the principal normal vectors of the curves are the same, i.e., β_a = β_b.

Proof: Let a and b be null similar curves with variable transformation, satisfying (11) and (12). Differentiating (12) with respect to s_b and using (4) gives κ_a λ_a^b β_a = κ_b β_b, and since β_a and β_b are unit spacelike vectors, we obtain

β_a = β_b, λ_a^b = ds_a/ds_b = κ_b/κ_a.  (13)

Conversely, if β_a = β_b, then integrating κ_a (ds_a/ds_b) β_a = κ_b β_b along the curves shows that α_a = α_b under the variable transformation (13); hence a and b are null similar curves with variable transformation.

From Theorem 3.2 and Definition 2.2 we can give the following corollary.

Corollary 3.1. Let a(s_a) and b(s_b) be Cartan framed null curves in E^3_1. Then a and b are null similar curves with variable transformation if and only if a(s_a) and b(s_b) form a null Bertrand pair with the particular variable transformation λ_a^b = κ_b/κ_a.

Theorem 3.3. Let a(s_a) and b(s_b) be Cartan framed null curves in E^3_1. Then a and b are null similar curves with variable transformation if and only if the binormal vectors of the curves are the same, i.e., γ_a = γ_b.

Proof: Let a and b be null similar curves with variable transformation. Then, from Definition 3.1 and Theorem 3.2, there exists a variable transformation of the arc lengths such that the tangent vectors and the principal normal vectors are the same; since γ is the unique null vector field satisfying ⟨γ, γ⟩ = 0, ⟨β, γ⟩ = 0 and ⟨α, γ⟩ = 1, it follows that γ_a = γ_b. Conversely, if γ_a = γ_b, differentiating with respect to s_b and using (4) gives τ_a (ds_a/ds_b) β_a = τ_b β_b, so that

β_a = β_b, λ_a^b = ds_a/ds_b = τ_b/τ_a,  (21)

and the claim follows from Theorem 3.2.

Theorem 3.4. Let a(s_a) and b(s_b) be Cartan framed null curves in E^3_1. Then a and b are null similar curves with variable transformation if and only if the ratios of the curvatures are the same, i.e.,

τ_a/κ_a = τ_b/κ_b,  (23)

under the particular variable transformation keeping equal total curvatures, i.e.,

∫ κ_a ds_a = ∫ κ_b ds_b.  (24)

Proof: Let a and b be null similar curves with variable transformation. Then from (13) and (21) we have λ_a^b = κ_b/κ_a = τ_b/τ_a, which gives (23); moreover κ_a ds_a = κ_b ds_b, which leads to (24) by integration.
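The Cartan frame relations and the third-order tangent equation can be checked on a concrete example. The following sympy sketch (our own illustration; the particular curve is not taken from the paper) verifies the frame conditions (5), the Cartan equations (4) and the equation (6) for the pseudo arc-length parametrized null helix a(t) = (t, cos t, sin t) in E^3_1, which has constant curvatures κ = 1 and τ = 1/2, so that φ = s and (6) reduces to α''' + α' = 0:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Lorentz metric <x, y> = -x1*y1 + x2*y2 + x3*y3 of E^3_1, cf. (1)
def lor(u, v):
    return -u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

# A pseudo arc-length parametrized null helix (our example, not from the paper)
a = sp.Matrix([t, sp.cos(t), sp.sin(t)])
alpha = sp.diff(a, t)                                   # null tangent
beta = sp.Matrix([0, -sp.cos(t), -sp.sin(t)])           # principal normal
gamma = -sp.Rational(1, 2)*sp.Matrix([1, sp.sin(t), -sp.cos(t)])  # null transversal

# Frame conditions (5): <alpha,alpha> = <gamma,gamma> = 0, <beta,beta> = <alpha,gamma> = 1
assert sp.simplify(lor(alpha, alpha)) == 0
assert sp.simplify(lor(gamma, gamma)) == 0
assert sp.simplify(lor(beta, beta)) == 1
assert sp.simplify(lor(alpha, gamma)) == 1

# Cartan equations (4) hold with kappa = 1 and tau = 1/2
kappa, tau = 1, sp.Rational(1, 2)
assert sp.simplify(sp.diff(alpha, t) - kappa*beta) == sp.zeros(3, 1)
assert sp.simplify(sp.diff(gamma, t) - tau*beta) == sp.zeros(3, 1)
assert sp.simplify(sp.diff(beta, t) + tau*alpha + kappa*gamma) == sp.zeros(3, 1)

# Third-order equation (6): phi = s, f = tau/kappa = 1/2 is constant, so
# alpha''' + 2*f*alpha' + f'*alpha = alpha''' + alpha' = 0.
assert sp.simplify(sp.diff(alpha, t, 3) + sp.diff(alpha, t)) == sp.zeros(3, 1)
```

Since every assertion passes, this helix is a Cartan framed null curve with constant curvatures, i.e., a null helix in the sense of Definition 2.3.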
Conversely, let (23) hold under the variable transformation (24). By Theorem 2.1, the tangents α_a and α_b of the curves a(s_a) and b(s_b) satisfy vector differential equations of third order of the form (6) with the same coefficient function f, since φ_a = φ_b by (24) and τ_a/κ_a = τ_b/κ_b by (23). Hence α_a = α_b under this transformation, and a(s_a) and b(s_b) are null similar curves with variable transformation.

Recall that similar curves with variable transformation were defined by El-Sabbagh and Ali as follows: let ψ_α(s_α) and ψ_β(s_β) be two regular curves in E^3 parametrized by arc lengths s_α and s_β, with curvatures κ_α, κ_β, torsions τ_α, τ_β and Frenet frames {T_α, N_α, B_α} and {T_β, N_β, B_β}, respectively. Then ψ_α(s_α) and ψ_β(s_β) are called similar curves with variable transformation λ_α^β if there exists a variable transformation s_α = ∫ λ_α^β(s_β) ds_β of the arc lengths such that the tangent vectors are the same for the two curves, i.e., T_α = T_β for all corresponding values of the parameters under the transformation λ_α^β. All curves satisfying this condition are called a family of similar curves. Moreover, they obtained some properties of the family of similar curves [3].

Let us now consider some special cases. From (13) and (21) we have κ_b/κ_a = τ_b/τ_a, so that the ratio τ/κ is a common invariant of the whole family. In particular, the family of null geodesics, the family of null curves with vanishing torsion and the family of null helices each form a family of null similar curves with variable transformation.

Conclusions

A family of null curves in Minkowski 3-space E^3_1 is defined, and the curves are called null similar curves. Some properties of these special curves are obtained, and it is shown that null Bertrand curves are null similar curves with variable transformation. Moreover, it is obtained that null geodesics, null curves with vanishing torsion and null helices form families of null similar curves.

References
[1] Balgetir, H., Bektaş, M., Inoguchi, J.: Null Bertrand curves in Minkowski 3-space and their characterizations, Note di Matematica, 23 (1) (2004), 7-13.
[2] Duggal, K.L., Bejancu, A.: Lightlike Submanifolds of Semi-Riemannian Manifolds and Applications, Kluwer Academic Publishers, Dordrecht, 1996.
[3] El-Sabbagh, M.F., Ali, A.T.: Similar curves with variable transformations, arXiv:0909.1108v1 [math.DG].
[4] Ferrandez, A., Gimenez, A., Lucas, P.: Null helices in Lorentzian space forms, International Journal of Modern Physics A, 16 (2001), 4845-4863.
[5] Honda, K., Inoguchi, J.: On a characterization of null cubics in Minkowski 3-space, Differential Geometry - Dynamical Systems, 5 (1) (2003), 1-6.
[6] O'Neill, B.: Semi-Riemannian Geometry with Applications to Relativity, Academic Press, New York, 1983.
[7] Öztekin, H.B., Ergüt, M.: Null Mannheim curves in Minkowski 3-space E^3_1, Turkish Journal of Mathematics, 35 (2011), 107-114.
[]
[ "Convergence of the uniaxial PML method for time-domain electromagnetic scattering problems", "Convergence of the uniaxial PML method for time-domain electromagnetic scattering problems" ]
[ "Changkun Wei ", "Jiaqing Yang ", "Bo Zhang " ]
[]
[ "J. Numer. Anal" ]
In this paper, we propose and study the uniaxial perfectly matched layer (PML) method for three-dimensional time-domain electromagnetic scattering problems, which has a great advantage over the spherical one in dealing with problems involving anisotropic scatterers. The truncated uniaxial PML problem is proved to be well-posed and stable, based on the Laplace transform technique and the energy method. Moreover, the L 2 -norm and L ∞ -norm error estimates in time are given between the solutions of the original scattering problem and the truncated PML problem, leading to the exponential convergence of the time-domain uniaxial PML method in terms of the thickness and absorbing parameters of the PML layer. The proof depends on the error analysis between the EtM operators for the original scattering problem and the truncated PML problem, which is different from our previous work (SIAM
10.1051/m2an/2021064
[ "https://arxiv.org/pdf/2102.01843v1.pdf" ]
231,786,736
2102.01843
119dbcd6ca6d21895324f368c58ef79ae5a67bbb
Convergence of the uniaxial PML method for time-domain electromagnetic scattering problems

Changkun Wei, Jiaqing Yang, Bo Zhang (2020)

Keywords: well-posedness, stability, time-domain electromagnetic scattering, uniaxial PML, exponential convergence

Abstract. In this paper, we propose and study the uniaxial perfectly matched layer (PML) method for three-dimensional time-domain electromagnetic scattering problems, which has a great advantage over the spherical one in dealing with problems involving anisotropic scatterers. The truncated uniaxial PML problem is proved to be well-posed and stable, based on the Laplace transform technique and the energy method. Moreover, the L^2-norm and L^∞-norm error estimates in time are given between the solutions of the original scattering problem and the truncated PML problem, leading to the exponential convergence of the time-domain uniaxial PML method in terms of the thickness and absorbing parameters of the PML layer. The proof depends on the error analysis between the EtM operators for the original scattering problem and the truncated PML problem, which is different from our previous work (SIAM J. Numer. Anal., 58 (2020)).

Introduction

This paper is concerned with the time-domain electromagnetic scattering by a perfectly conducting obstacle, which is modeled by the exterior boundary value problem:

∇ × E + µ ∂_t H = 0 in (R^3\Ω) × (0, T),  (1.1a)
∇ × H − ε ∂_t E = J in (R^3\Ω) × (0, T),  (1.1b)
n × E = 0 on Γ × (0, T),  (1.1c)
E(x, 0) = H(x, 0) = 0 in R^3\Ω,  (1.1d)
x̂ × (∂_t E × x̂) + x̂ × ∂_t H = o(|x|^{-1}) as |x| → ∞, t ∈ (0, T).  (1.1e)

Here E and H denote the electric and magnetic fields, respectively, Ω ⊂ R^3 is a bounded Lipschitz domain with boundary Γ, and n is the unit outer normal vector to Γ. Throughout this paper, the electric permittivity ε and the magnetic permeability µ are assumed to be positive constants.
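With J = 0, the first two equations of (1.1) are solved by monochromatic plane waves obeying the dispersion relation ω = |k|/√(εµ). The following sympy sketch (our own sanity check; the wave vector, polarization and symbol names are our choices, not from the paper) verifies this for a wave traveling along the x3-axis and polarized along x1:

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t', real=True)
eps, mu = sp.symbols('varepsilon mu', positive=True)

# Wave number along x3, polarization along x1 (so k.p = 0),
# angular frequency fixed by the dispersion relation w = |k|/sqrt(eps*mu).
k = sp.Integer(1)
w = k/sp.sqrt(eps*mu)
phase = sp.exp(sp.I*(k*x3 - w*t))

E = sp.Matrix([phase, 0, 0])              # E = p exp(i(k.x - w t)), p = e1
H = sp.Matrix([0, k/(w*mu)*phase, 0])     # H = (k x p)/(w mu) times the same phase

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], x2) - sp.diff(F[1], x3),
        sp.diff(F[0], x3) - sp.diff(F[2], x1),
        sp.diff(F[1], x1) - sp.diff(F[0], x2),
    ])

faraday = curl(E) + mu*sp.diff(H, t)      # (1.1a) with J = 0
ampere = curl(H) - eps*sp.diff(E, t)      # (1.1b) with J = 0
assert sp.simplify(faraday) == sp.zeros(3, 1)
assert sp.simplify(ampere) == sp.zeros(3, 1)
```

Such homogeneous solutions do not satisfy the zero initial conditions (1.1d); the scattering problem itself is driven by the source J and filtered by the radiation condition (1.1e).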
Equation (1.1e) is the well-known Silver-Müller radiation condition in the time domain withx := x/|x|. Time-domain scattering problems have been widely studied recently due to their capability of capturing wide-band signals and modeling more general materials and nonlinearity, including their mathematical analysis (see, e.g., [1,11,[26][27][28]31,33,39,40] and the references quoted there). The well-posedness and stability of solutions to the problem (1.1a)-(1.1e) have been proved in [16] by employing an exact transparent boundary condition (TBC) on a large sphere. Recently, a spherical PML method has been proposed in [42] to solve the problem (1.1a)-(1.1e) efficiently, based on the real coordinate stretching technique associated with [Re(s)] −1 in the Laplace transform domain with the Laplace transform variable s ∈ C + := {s = s 1 +is 2 ∈ C : s 1 > 0, s 2 ∈ R}, and its exponential convergence has also been established in terms of the thickness and absorbing parameters of the PML layer. In this paper, we continue our previous study in [42] and propose and study the uniaxial PML method for the problem (1.1a)-(1.1e), based on the real coordinate stretching technique introduced in [42], which uses a cubic domain to define the PML problem and thus is of great advantage over the spherical one in dealing with problems involving anisotropic scatterers. We first establish the existence, uniqueness and stability estimates of the PML problem by the Laplace transform technique and the energy argument and then prove the exponential convergence in both the L 2 -norm and the L ∞ -norm in time of the time-domain uniaxial PML method. Our proof for the L 2 -norm convergence follows naturally from the error estimate between the EtM operators for the original scattering problem and its truncated PML problem established also in the paper, which is different from [42]. 
The L ∞ -norm convergence is obtained directly from the time-domain variational formulation of the original scattering problem and its truncated PML problem with using special test functions. The PML method was first introduced in the pioneering work [3] of Bérenger in 1994 for efficiently solving the time-dependent Maxwell's equations. Its idea is to surround the computational domain with a specially designed medium layer of finite thickness in which the scattered waves decay rapidly regardless of the wave incident angle, thereby greatly reducing the computational complexity of the scattering problem. Since then, various PML methods have been developed and studied in the literature (see, e.g., [4, 10, 23-25, 29, 35] and the references quoted there). Convergence analysis of the PML method has also been widely studied for time-harmonic acoustic, electromagnetic, and elastic wave scattering problems. For example, the exponential convergence has been established in terms of the thickness of the PML layer in [2,4,8,13,15,21,30,32] for the circular or spherical PML method and in [5-7, 14, 17, 19, 20] for the uniaxial (or Cartesian) PML method. Among them, the proof in [2] is based on the error estimate between the electric-to-magnetic (EtM) operators for the original electromagnetic scattering problem and its truncated PML problem, while the key ingredient of the proof in [13] and [14] is the decay property of the PML extensions defined by the series solution and the integral representation solution, respectively. On the other hand, there are also several works on convergence analysis of the time-domain PML method for transient scattering problems. For two-dimensional transient acoustic scattering problems, the exponential convergence was proved in [12] for the circular PML method and in [18] for the uniaxial PML method, based on the complex coordinate stretching technique. 
For the 3D time-domain electromagnetic scattering problem (1.1a)-(1.1e), the spherical PML method was proposed in [42] based on the real coordinate stretching technique associated with [Re(s)]^{-1} in the Laplace transform domain with the Laplace transform variable s ∈ C_+, and its exponential convergence was established by means of the energy argument and the exponential decay estimates of the stretched dyadic Green's function for the Maxwell equations in the free space. In addition, we refer to [1] for the well-posedness and stability estimates of the time-domain PML method for the two-dimensional acoustic-elastic interaction problem, and to [41] for the convergence analysis of the PML method for the fluid-solid interaction problem above an unbounded rough surface. The remaining part of this paper is as follows. In Section 2, we introduce some basic Sobolev spaces needed in this paper. In Section 3, the well-posedness of the time-domain electromagnetic scattering problem is presented, and some important properties are given for the transparent boundary condition (TBC) in the Cartesian coordinate. In Section 4, we propose the uniaxial PML method in the Cartesian coordinate, study the well-posedness of the truncated PML problem and establish its exponential convergence. Some conclusions are given in Section 5.

Functional spaces

We briefly introduce the Sobolev space H(curl, ·) and its related trace spaces which are used in this paper. For a bounded domain D ⊂ R^3 with Lipschitz continuous boundary Σ, the Sobolev space H(curl, D) is defined by

H(curl, D) := {u ∈ L^2(D)^3 : ∇ × u ∈ L^2(D)^3},

which is a Hilbert space equipped with the norm

‖u‖_{H(curl,D)} = ( ‖u‖^2_{L^2(D)^3} + ‖∇ × u‖^2_{L^2(D)^3} )^{1/2}.

Denote by u_Σ = n × (u × n)|_Σ the tangential component of u on Σ, where n denotes the unit outward normal vector on Σ.
By [9] we have the following bounded and surjective trace operators:

γ : H^1(D) → H^{1/2}(Σ), γφ = φ on Σ,
γ_t : H(curl, D) → H^{-1/2}(Div, Σ), γ_t u = u × n on Σ,
γ_T : H(curl, D) → H^{-1/2}(Curl, Σ), γ_T u = n × (u × n) on Σ,

where γ_t and γ_T are known as the tangential trace and tangential components trace operators, and Div and Curl denote the surface divergence and surface scalar curl operators, respectively (for the detailed definitions of H^{-1/2}(Div, Σ) and H^{-1/2}(Curl, Σ), we refer to [9]). By [9] again we know that H^{-1/2}(Div, Σ) and H^{-1/2}(Curl, Σ) form a dual pairing satisfying the integration by parts formula

(u, ∇ × v)_D − (∇ × u, v)_D = ⟨γ_t u, γ_T v⟩_Σ for all u, v ∈ H(curl, D),  (2.2)

where (·, ·)_D and ⟨·, ·⟩_Σ denote the L^2-inner product on D and the dual product between H^{-1/2}(Div, Σ) and H^{-1/2}(Curl, Σ), respectively. For any S ⊂ Σ, the subspace with zero tangential trace on S is denoted by H_S(curl, D) := {u ∈ H(curl, D) : γ_t u = 0 on S}.

The well-posedness of the scattering problem

Let Ω be contained in the interior of the cuboid B_1 := {x = (x_1, x_2, x_3)^T ∈ R^3 : |x_j| < L_j/2, j = 1, 2, 3} with boundary Γ_1 = ∂B_1. Denote by n_1 the unit outward normal to Γ_1. The computational domain B_1\Ω is denoted by Ω_1. In this section, we assume that the current density J is compactly supported in B_1 with

J ∈ H^{10}(0, T; L^2(Ω_1)^3), ∂_t^j J|_{t=0} = 0, j = 0, 1, 2, ..., 9,  (3.3)

and that J is extended so that

J ∈ H^{10}(0, ∞; L^2(Ω_1)^3), ‖J‖_{H^{10}(0,∞;L^2(Ω_1)^3)} ≤ C ‖J‖_{H^{10}(0,T;L^2(Ω_1)^3)}.  (3.4)

Define the following time-domain transparent boundary condition (TBC) on Γ_1:

T[E_{Γ_1}] = H × n_1 on Γ_1 × (0, T),  (3.5)

which is essentially an electric-to-magnetic (EtM) Calderón operator.
Then the original scattering problem (1.1a)-(1.1e) can be equivalently reduced to the initial boundary value problem in the bounded domain Ω_1 × (0, T):

∇ × E + µ∂_t H = 0 in Ω_1 × (0, T),
∇ × H − ε∂_t E = J in Ω_1 × (0, T),
n × E = 0 on Γ × (0, T),
E(x, 0) = H(x, 0) = 0 in Ω_1,
T[E_{Γ_1}] = H × n_1 on Γ_1 × (0, T).  (3.6)

The well-posedness of the original scattering problem (1.1a)-(1.1e) has been established in [16] by using the transparent boundary condition on a sphere. Thus the problem (3.6) is also well-posed since it is equivalent to the problem (1.1a)-(1.1e). However, for convenience of the subsequent use in the following sections, we study the problem (3.6) directly by studying the properties of the EtM operator T. For any s ∈ C_+ := {s = s_1 + is_2 ∈ C : s_1 > 0, s_2 ∈ R} let

Ě(x, s) = L(E)(x, s) = ∫_0^∞ e^{-st} E(x, t) dt, Ȟ(x, s) = L(H)(x, s) = ∫_0^∞ e^{-st} H(x, t) dt

be the Laplace transforms of E and H with respect to the time t, respectively (for extensive studies of the Laplace transform, the reader is referred to [22]). Let B : H^{-1/2}(Curl, Γ_1) → H^{-1/2}(Div, Γ_1) be the EtM operator

B[Ě_{Γ_1}] = Ȟ × n_1 on Γ_1,  (3.7)

where Ě and Ȟ satisfy the exterior Maxwell equations in the Laplace domain

∇ × Ě + µsȞ = 0 in R^3\B_1,
∇ × Ȟ − εsĚ = 0 in R^3\B_1,
x̂ × (Ě × x̂) + x̂ × Ȟ = o(1/|x|) as |x| → ∞.  (3.8)

It is obvious that T = L^{-1} ∘ B ∘ L. For each s ∈ C_+ it is known, by the Lax-Milgram theorem, that the problem (3.8) has a unique solution (Ě, Ȟ) ∈ H(curl, R^3\B_1) × H(curl, R^3\B_1). Thus the operator B is a well-defined, continuous linear operator.

Lemma 3.1. For each s ∈ C_+, B : H^{-1/2}(Curl, Γ_1) → H^{-1/2}(Div, Γ_1) is bounded with the estimate

‖B‖_{L(H^{-1/2}(Curl,Γ_1), H^{-1/2}(Div,Γ_1))} ≲ |s|^{-1} + |s|,  (3.9)

where L(X, Y) denotes the standard space of bounded linear operators from the Hilbert space X to the Hilbert space Y.
Further, we have

Re⟨Bω, ω⟩_{Γ_1} ≥ 0 for any ω ∈ H^{-1/2}(Curl, Γ_1),  (3.10)

where ⟨·, ·⟩_{Γ_1} denotes the dual product between H^{-1/2}(Div, Γ_1) and H^{-1/2}(Curl, Γ_1).

Proof. First, eliminating Ȟ from (3.8), multiplying both sides of the resulting equation with the complex conjugate of V ∈ H(curl, R^3\B_1), and integrating by parts over R^3\B_1 yield

⟨B[Ě_{Γ_1}], γ_T V⟩_{Γ_1} = ∫_{R^3\B_1} ( (µs)^{-1}(∇ × Ě) · (∇ × V̄) + εs Ě · V̄ ) dx ≲ (|s|^{-1} + |s|) ‖Ě‖_{H(curl,R^3\B_1)} ‖V‖_{H(curl,R^3\B_1)},

which implies (3.9). Now, for any ω ∈ H^{-1/2}(Curl, Γ_1), suppose (Ě, Ȟ) is the solution to the problem (3.8) satisfying the boundary condition γ_T Ě = ω on Γ_1. Let B_R := {x ∈ R^3 : |x| < R} contain the domain B_1. Eliminating Ȟ from (3.8) and integrating by parts the resulting equation multiplied with the complex conjugate of Ě over B_R\B_1, we obtain

∫_{B_R\B_1} ( (µs)^{-1} |∇ × Ě|^2 + εs |Ě|^2 ) dx − ⟨Bω, ω⟩_{Γ_1} + ∫_{∂B_R} x̂ × (µs)^{-1}(∇ × Ě) · Ē̌ dγ = 0.  (3.11)

Taking the real part of (3.11) and noting that

|x̂ × (Ě × x̂) − x̂ × (µs)^{-1}∇ × Ě|^2 = |x̂ × (Ě × x̂)|^2 + |x̂ × (µs)^{-1}∇ × Ě|^2 − 2Re( (x̂ × (µs)^{-1}∇ × Ě) · Ē̌ ),

we have

(s_1/(µ|s|^2)) ‖∇ × Ě‖^2_{L^2(B_R\B_1)^3} + εs_1 ‖Ě‖^2_{L^2(B_R\B_1)^3} − Re⟨Bω, ω⟩_{Γ_1} + (1/2)‖x̂ × (Ě × x̂)‖^2_{L^2(∂B_R)^3} + (1/2)‖x̂ × (µs)^{-1}∇ × Ě‖^2_{L^2(∂B_R)^3} = (1/2)‖x̂ × (Ě × x̂) − x̂ × (µs)^{-1}∇ × Ě‖^2_{L^2(∂B_R)^3}.  (3.12)

By the Silver-Müller radiation condition (1.1e) in the s-domain, the right-hand side of (3.12) tends to zero as R → ∞. This implies that Re⟨Bω, ω⟩_{Γ_1} ≥ 0. The proof is thus complete.

By using Lemma 3.1 and [40, Lemmas 4.5-4.6], the time-domain EtM operator T has the following positivity properties, which will be used in the error analysis of the time-domain PML solution.

Lemma 3.2. Given ξ ≥ 0 and ω(·, t) ∈ L^2(0, ξ; H^{-1/2}(Curl, Γ_1)), it holds that

Re ∫_{Γ_1} ∫_0^ξ ( ∫_0^t C[ω](x, τ) dτ ) · ω̄(x, t) dt dγ ≥ 0,

where C = L^{-1} ∘ (sB) ∘ L.

Lemma 3.3. Given ξ ≥ 0 and ω(·, t) ∈ L^2(0, ξ; H^{-1/2}(Curl, Γ_1)) with ω(·, 0) = 0, it holds that

Re ∫_{Γ_1} ∫_0^ξ ( ∫_0^t C[∂_τ ω](x, τ) dτ ) · ∂_t ω̄(x, t) dt dγ ≥ 0.
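The passage between the time domain and the s-domain used throughout this section rests on the Laplace transform rule L(∂_t u)(s) = s ǔ(s) − u(0), which, under the zero initial conditions (1.1d), turns time derivatives into multiplication by s. A one-line sympy check of this rule on a scalar sample function (the function f is our own choice, not from the paper):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)
f = t**2*sp.exp(-t)          # sample causal signal with f(0) = 0

F = sp.laplace_transform(f, t, s, noconds=True)
dF = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

# Differentiation rule: L(f')(s) = s*F(s) - f(0); here f(0) = 0.
assert sp.simplify(dF - (s*F - f.subs(t, 0))) == 0
```

This is the identity that replaces ∂_t E, ∂_t H in (3.6) by sĚ, sȞ in (3.8) and, conversely, lets the time-domain operators T and C be defined through their s-domain counterparts B and sB.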
We now introduce the variational formulation in the Laplace transform domain equivalent to the problem (3.6). To this end, eliminate the magnetic field H and take the Laplace transform of (3.6) to get

∇ × [(µs)^{-1} ∇ × Ě] + εsĚ = −J̌ in Ω_1,
n × Ě = 0 on Γ,
B[Ě_{Γ_1}] = −(µs)^{-1} ∇ × Ě × n_1 on Γ_1.  (3.13)

The variational formulation of (3.13) is then as follows: find a solution Ě ∈ H_Γ(curl, Ω_1) such that

a(Ě, V) = −∫_{Ω_1} J̌ · V̄ dx, ∀ V ∈ H_Γ(curl, Ω_1),  (3.14)

where the sesquilinear form a(·, ·) is defined as

a(Ě, V) = ∫_{Ω_1} ( (sµ)^{-1}(∇ × Ě) · (∇ × V̄) + εs Ě · V̄ ) dx + ⟨B[Ě_{Γ_1}], V_{Γ_1}⟩_{Γ_1}.  (3.15)

By Lemma 3.1 it is easy to see that a(·, ·) is uniformly coercive, that is,

Re[a(Ě, Ě)] ≳ (s_1/|s|^2) ( ‖∇ × Ě‖^2_{L^2(Ω_1)^3} + ‖sĚ‖^2_{L^2(Ω_1)^3} ) ≥ s_1 min{|s|^{-2}, 1} ‖Ě‖^2_{H(curl,Ω_1)}.  (3.16)

Then, by the Lax-Milgram theorem, the problem (3.13) is well-posed for each s ∈ C_+. Thus, by the energy argument in conjunction with the inversion theorem of the Laplace transform (cf. [16]), the well-posedness of the problem (3.6) follows. In particular, T[E_{Γ_1}] ∈ L^2(0, T; H^{-1/2}(Div, Γ_1)).

The uniaxial PML method

In practical applications, the scattering problems may involve anisotropic scatterers. In this case, the uniaxial PML method has a big advantage over the circular or spherical PML method, as it provides greater flexibility and efficiency in solving such problems. Thus, in this section, we propose and study the uniaxial PML method for solving the time-domain electromagnetic scattering problem (1.1a)-(1.1e).

The PML equation in the Cartesian coordinates

In this subsection, we derive the PML equation in the Cartesian coordinates. To this end, we define below the enlarged box B_2 enclosing B_1; Ω_PML = B_2\B_1 then denotes the PML layer and Ω_2 = B_2\Ω the truncated PML domain (see Figure 1 for the uniaxial PML geometry). The enlarged box is
B_2 := {x = (x_1, x_2, x_3)^T ∈ R^3 : |x_j| < L_j/2 + d_j, j = 1, 2, 3} with boundary Γ_2 = ∂B_2. (Figure 1 shows the boxes B_1 ⊂ B_2 with side lengths L_1, L_2, PML thicknesses d_1, d_2 and the boundaries Γ, Γ_1, Γ_2 around the obstacle Ω.)

For x = (x_1, x_2, x_3)^T ∈ R^3, let s_1 > 0 be an arbitrarily fixed parameter and let us define the PML medium property as

α_j(x_j) = 1 + s_1^{-1} σ_j(x_j), j = 1, 2, 3,

where

σ_j(x_j) = 0 for |x_j| ≤ L_j/2; σ_j(x_j) = σ_j ((|x_j| − L_j/2)/d_j)^m for L_j/2 < |x_j| ≤ L_j/2 + d_j; σ_j(x_j) = σ_j for L_j/2 + d_j < |x_j| < ∞,  (4.17)

with positive constants σ_j, j = 1, 2, 3, and an integer m ≥ 1. In what follows, we take the real part of the Laplace transform variable s ∈ C_+ to be s_1, that is, Re(s) = s_1. In the rest of this paper, we always make the following assumptions on the thickness of the PML layer and on the parameters σ_j, which are reasonable in our model:

d_1 = d_2 = d_3 := d ≥ 1, L = max{L_1, L_2, L_3} ≤ C_0 d,  (4.18)
σ_1 = σ_2 = σ_3 := σ_0 > 0  (4.19)

for a fixed generic constant C_0. Under the assumptions (4.18) and (4.19) we have

∫_0^{L_j/2+d_j} σ_j(τ) dτ = σ_0 d/(m + 1), j = 1, 2, 3.  (4.20)

We remark that the constancy assumption on d_j and σ_j in (4.18)-(4.19) is made only to simplify the convergence analysis and is not mandatory. We now introduce the real stretched Cartesian coordinates x̃ = (x̃_1, x̃_2, x̃_3)^T with

x̃_j = ∫_0^{x_j} α_j(τ) dτ, j = 1, 2, 3.  (4.21)

Noting that the solution of the exterior problem (3.8) in R^3\B_1 can be written as an integral representation [34, Theorem 12.2], we can derive the PML extension under the stretched coordinates x̃ by following [42]. For any p ∈ H^{-1/2}(Div, Γ_1) and q ∈ H^{-1/2}(Div, Γ_1), define

Ẽ(p, q)(x) := −Ψ̃_SL(q)(x) − Ψ̃_DL(p)(x), x ∈ R^3\B_1,  (4.22)

where the stretched single- and double-layer potentials are defined as

Ψ̃_SL(q) = ∫_{Γ_1} G̃^T(s, x̃, y) q(y) dγ(y), Ψ̃_DL(p) = ∫_{Γ_1} (curl_y G̃)^T(s, x̃, y) p(y) dγ(y).
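Before turning to the stretched Green's function, the damping profile (4.17), the identity (4.20) and the stretching (4.21) are easy to confirm numerically. A small Python sketch (the parameter values L, d, m, σ_0 are our own sample choices, not from the paper):

```python
import numpy as np

# Assumed sample parameters: L_j = 2, d_j = 1, m = 2, sigma_0 = 40
L, d, m, sigma0 = 2.0, 1.0, 2, 40.0

def sigma(x):
    """One-dimensional PML damping profile of the form (4.17)."""
    x = abs(x)
    if x <= L/2:
        return 0.0
    if x <= L/2 + d:
        return sigma0*((x - L/2)/d)**m
    return sigma0

def trapezoid(g, a, b, n=20000):
    """Composite trapezoidal rule for int_a^b g."""
    xs = np.linspace(a, b, n + 1)
    ys = np.array([g(x) for x in xs])
    return float(np.sum((ys[1:] + ys[:-1])*np.diff(xs))/2)

# Identity (4.20): the profile integrates to sigma0*d/(m+1) over [0, L/2 + d].
total = trapezoid(sigma, 0.0, L/2 + d)
assert abs(total - sigma0*d/(m + 1)) < 1e-3

# Stretching (4.21) with s1 = 1: identity map inside the physical box,
# strictly expanding once the PML layer is entered.
def stretched(x, s1=1.0):
    return trapezoid(lambda tau: 1.0 + sigma(tau)/s1, 0.0, x)

assert abs(stretched(L/4) - L/4) < 1e-9
assert stretched(L/2 + d) > L/2 + d
```

The check illustrates why the PML is "perfectly matched": since σ_j vanishes on B_1, the stretching is the identity there and the PML medium does not perturb the field inside the computational domain.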
Here, the stretched dyadic Green's function is given by

G̃(s, x̃, y) = Φ_s(x̃, y) I + (1/k^2) ∇_y ∇_y Φ_s(x̃, y), x̃ ≠ y, k = i√(εµ) s,  (4.23)

with the stretched fundamental solution and the complex distance

Φ_s(x̃, y) = e^{-√(εµ) ρ_s(x̃,y)} / (4π s^{-1} ρ_s(x̃, y)), ρ_s(x̃, y) = s|x̃ − y|.  (4.24)

Introduce the stretched curl operator acting on a vector u = (u_1, u_2, u_3)^T:

c̃url u = ∇̃ × u := ( ∂u_3/∂x̃_2 − ∂u_2/∂x̃_3, ∂u_1/∂x̃_3 − ∂u_3/∂x̃_1, ∂u_2/∂x̃_1 − ∂u_1/∂x̃_2 )^T = A ∇ × (Bu)

with the diagonal matrices

A = diag{ 1/(α_2 α_3), 1/(α_1 α_3), 1/(α_1 α_2) } and B = diag{α_1, α_2, α_3}.  (4.25)

The PML extension in the s-domain in R^3\B_1 of γ_t(Ě)|_{Γ_1} and γ_t(curl Ě)|_{Γ_1} is then defined as

Ẽ̌(x) = Ẽ(γ_t(Ě), γ_t(curl Ě)), x ∈ R^3\B_1.  (4.26)

Define H̃̌(x) := −(µs)^{-1} c̃url Ẽ̌(x) for x ∈ R^3\B_1. Then it is easy to see that (Ẽ̌, H̃̌) satisfies the Maxwell equations in the s-domain:

∇̃ × Ẽ̌ + µs H̃̌ = 0, ∇̃ × H̃̌ − εs Ẽ̌ = 0 in R^3\B_1.  (4.27)

Define (E^PML, H^PML) := (L^{-1}(Ẽ̌), L^{-1}(H̃̌)). Then (E^PML, H^PML) can be viewed as the extension to the region R^3\B_1 of the solution of the problem (1.1a)-(1.1e) since, by the fact that α_j = 1 on Γ_1 for j = 1, 2, 3, we have E^PML = E, H^PML = H on Γ_1. If we set E^PML = E and H^PML = H in Ω_1 × (0, T), then (E^PML, H^PML) satisfies the PML problem:

∇ × E^PML + µ(BA)^{-1} ∂_t H^PML = 0 in (R^3\Ω) × (0, T),
∇ × H^PML − ε(BA)^{-1} ∂_t E^PML = J in (R^3\Ω) × (0, T),
n × E^PML = 0 on Γ × (0, T),
E^PML(x, 0) = H^PML(x, 0) = 0 in R^3\Ω.  (4.28)

The truncated PML problem in the time domain is to find (E^p, H^p), which is an approximation to (E, H) in Ω_1, such that

∇ × E^p + µ(BA)^{-1} ∂_t H^p = 0 in Ω_2 × (0, T),
∇ × H^p − ε(BA)^{-1} ∂_t E^p = J in Ω_2 × (0, T),
n × E^p = 0 on Γ × (0, T),
n_2 × E^p = 0 on Γ_2 × (0, T),
E^p(x, 0) = H^p(x, 0) = 0 in Ω_2.  (4.29)
(4.29) Well-posedness of the truncated PML problem We now study the well-posedness of the truncated PML problem (4.29), employing the Laplace transform technique and a variational method. Eliminate H p and take the Laplace transform of (4.29) to obtain that      ∇ × (µs) −1 BA∇ ×Ě p + εs(BA) −1Ěp = −J in Ω 2 , n ×Ě p = 0 on Γ, n 2 ×Ě p = 0 on Γ 2 . (4.30) The variational formulation of (4.30) can be derived as follows: find a solutionĚ p ∈ H 0 (curl, Ω 2 ) such that a p Ě p , V = − Ω 1J · V dx, ∀ V ∈ H 0 (curl, Ω 2 ),(4.31) where the sesquilinear form a p (·, ·) is defined as a p Ě p , V = Ω 2 (µs) −1 BA(∇ ×Ě p ) · (∇ × V )dx + εs(BA) −1Ěp · V dx. (4.32) We have the following result on the well-posedness of the variational problem (4.31). Lemma 4.1. For each s ∈ C + with Re(s) = s 1 > 0 the variational problem (4.31) has a unique solutionĚ p ∈ H 0 (curl, Ω 2 ). Further, it holds that ∇ ×Ě p L 2 (Ω 2 ) 3 + sĚ p L 2 (Ω 2 ) 3 s −1 1 (1 + s −1 1 σ 0 ) 2 sJ L 2 (Ω 1 ) 3 . (4.33) Proof. By the definition of the diagonal matrix BA (see (4.25)) and a direct calculation it easily follows that (1 + s −1 1 σ 0 ) −2 ≤ |BA| ≤ (1 + s −1 1 σ 0 ) in Ω PML ,(4.34)(1 + s −1 1 σ 0 ) −1 ≤ |(BA) −1 | ≤ (1 + s −1 1 σ 0 ) 2 , in Ω PML . (4.35) Thus, it is derived that Re[a p (Ě p ,Ě p )] 1 (1 + s −1 1 σ 0 ) 2 s 1 |s| 2 ∇ ×Ě p L 2 (Ω 2 ) 3 + sĚ p L 2 (Ω 2 ) 3 .E p (x, t), H p (x, t)) with E p ∈ L 2 0, T ; H 0 (curl, Ω 2 ) ∩ H 1 0, T ; L 2 (Ω 2 ) 3 , H p ∈ L 2 0, T ; H 0 (curl, Ω 2 ) ∩ H 1 0, T ; L 2 (Ω 2 ) 3 and satisfying the stability estimate max t∈[0,T ] ∂ t E p L 2 (Ω 2 ) 3 + ∇ × E p L 2 (Ω 2 ) 3 + ∂ t H p L 2 (Ω 2 ) 3 + ∇ × H p L 2 (Ω 2 ) 3 (1 + σ 0 T ) 3 J H 1 (0,T ;L 2 (Ω 1 ) 3 ) . (4.37) To study the convergence of the uniaxial PML method, we introduce the EtM operator B : H −1/2 (Curl, Γ 1 ) → H −1/2 (Div, Γ 1 ) associated with the truncated PML problem (4.30) in the s-domain. 
Given λ ∈ H −1/2 (Div, Γ 1 ), define B(λ × n 1 ) := n 1 × (µs) −1 ∇ × u on Γ 1 ,(4.38) where u satisfies the following problem in the PML layer: ∇ × (µs) −1 BA∇ × u + εs(BA) −1 u = 0 in Ω PML , n 1 × u = λ on Γ 1 , n 2 × u = 0 on Γ 2 . (4.39) We need to show that (4.39) has unique solution, so B is well-defined. To this end, we consider the following general problem with the tangential trace ξ on Γ 2 , which is needed for the convergence analysis of the PML method: ∇ × (µs) −1 BA∇ × u + εs(BA) −1 u = 0 in Ω PML , Then the variational formulation of (4.40) is as follows: Given λ ∈ H −1/2 (Div, Γ 1 ) and ξ ∈ H −1/2 (Div, Γ 2 ), find u ∈ H(curl, Ω PML ) such that n 1 × u = λ on Γ 1 , n 2 × u = ξ on Γ 2 and a PML (u, V ) = 0, ∀ V ∈ H 0 (curl, Ω PML ). (4.42) Arguing similarly as in proving (4.36), we obtain that for any V ∈ H 0 (curl, Ω PML ), Lemma 4.4. For any λ ∈ H −1/2 (Div, Γ 1 ) and ξ ∈ H −1/2 (Div, Γ 2 ), let u be the solution to the problem (4.40). Then n 1 × u = λ on Γ 1 , n 2 × u = ξ on Γ 2 .Re a PML (V , V 1 (1 + s −1 1 σ 0 ) 2 s 1 |s| 2 ∇ × V 2 L 2 (Ω PML ) 3 + sV 2 L 2 (Ω PML ) 3 .∇ × u L 2 (Ω PML ) 3 + su L 2 (Ω PML ) 3 s −1 1 (1 + s −1 1 σ 0 ) 4 |s|(1 + |s|)( λ H −1/2 (Div,Γ 1 ) + ξ H −1/2 (Div,Γ 2 ) ). 1 (1 + s −1 1 σ 0 ) 2 s 1 |s| 2 ∇ × ω 2 L 2 (Ω PML ) 3 + sω 2 L 2 (Ω PML ) 3 Re a PML (ω, ω) (1 + s −1 1 σ 0 ) 2 |s| 1 + |s| 2 ∇ × ω 2 L 2 (Ω PML ) 3 + sω 2 L 2 (Ω PML ) 3 1/2 u 0 H(curl,Ω PML ) , yielding ∇ × ω 2 L 2 (Ω PML ) 3 + sω 2 L 2 (Ω PML ) 3 1/2 (1 + s −1 1 σ 0 ) 4 |s| 1 + |s| 2 s 1 u 0 2 H(curl,Ω PML ) . This, together with the definition of ω and the Cauchy-Schwartz inequality, implies that ∇ ×ǔ L 2 (Ω PML ) 3 + sǔ L 2 (Ω PML ) 3 (1 + s −1 1 σ 0 ) 4 |s|(1 + |s|) s 1 u 0 H(curl,Ω PML ) . The desired estimate (4.44) then follows from the trace theorem. 
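The exponential convergence proved below rests on lower bounds for the complex distance ρ_s(x̃, y) when x lies on the outer boundary Γ_2 and y on Γ_1 (Lemma 4.5 below): |ρ_s(x̃, y)/s| ≥ d and Re[ρ_s(x̃, y)] ≥ σ_0 d/(m+1). These bounds can be checked numerically; all parameter values and sample points in the following sketch are our own assumptions, not from the paper.

```python
# Numerical sanity check (illustration only) of the complex-distance bounds
# of Lemma 4.5: for x on Gamma_2 and y on Gamma_1,
#   |rho_s(x~, y)/s| >= d   and   Re rho_s(x~, y) >= sigma0*d/(m+1),
# where x~_j = x_j + s1^{-1} * int_0^{x_j} sigma_j(tau) dtau and
# rho_s(x~, y) = s * |x~ - y| (y is not stretched: alpha_j = 1 on B_1).
import math

L1 = L2 = L3 = 2.0     # box dimensions: |x_j| < L_j/2 inside B_1 (assumed values)
d, sigma0, m = 1.0, 4.0, 1
s1, s2 = 0.5, 3.0      # s = s1 + i*s2 with Re(s) = s1 > 0

def stretch(xj, Lj):
    """Real coordinate stretching of a single component, cf. (4.21)."""
    aj = abs(xj)
    if aj <= Lj / 2:
        integ = 0.0
    elif aj <= Lj / 2 + d:
        integ = sigma0 * d / (m + 1) * ((aj - Lj / 2) / d) ** (m + 1)
    else:
        integ = sigma0 * d / (m + 1) + sigma0 * (aj - Lj / 2 - d)
    return xj + math.copysign(integ, xj) / s1

x = (L1 / 2 + d, 0.3, -0.2)   # a sample point on the outer boundary Gamma_2
y = (L1 / 2, 0.1, 0.4)        # a sample point on the inner boundary Gamma_1
xt = tuple(stretch(c, Lc) for c, Lc in zip(x, (L1, L2, L3)))

s = complex(s1, s2)
rho = s * math.dist(xt, y)    # rho_s(x~, y) = s * |x~ - y|, cf. (4.24)
```

Since the stretching is real, Re ρ_s = s_1 |x̃ − y|, and the stretching only pushes the point on Γ_2 farther away, so |x̃ − y| ≥ |x − y| ≥ d.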
Now, by using B the truncated PML problem (4.30) for the electric fieldĚ p can be equivalently reduced to the boundary value problem in Ω 1 : ∇ × (µs) −1 ∇ ×Ě p + εsĚ p = −J in Ω 1 , n ×Ě p = 0 on Γ, B[Ě p Γ 1 ] = n 1 × (µs) −1 ∇ ×Ě p on Γ 1 . (4.46) Similarly, for the problem (4.46) we can derive its equivalent variational formulation: findĚ p ∈ H Γ 1 (curl, Ω 1 ) such that a Ě p , V ) = − Ω 1J · V dx, ∀ V ∈ H Γ 1 (curl, Ω 1 ), (4.47) where the sesquilinear form a(·, ·) is defined as a Ě p , V := Ω 1 (µs) −1 (∇ ×Ě p ) · (∇ × V )dx + εsĚ p · V dx + B[Ě Γ 1 ], V Γ 1 Γ 1 . (4.48) By using B and the Laplace and inverse Laplace transform, the truncated PML problem (4.29) is equivalent to the initial boundary value problem in Ω 1 :                ∇ × E p + µ∂ t H p = 0 in Ω 1 × (0, T ), ∇ × H p − ε∂ t E p = J in Ω 1 × (0, T ), n × E p = 0 on Γ × (0, T ), E p (x, 0) = H p (x, 0) = 0 in Ω 1 , T [E p Γ 1 ] = H p × n 1 on Γ 1 × (0, T ). Exponential convergence of the uniaxial PML method In this subsection, we prove the exponential convergence of the uniaxial PML method. We begin with the following lemma which is useful in the proof of the exponential decay property of the stretched fundamental solution Φ s (x, y). Lemma 4.5. Let s = s 1 + is 2 with s 1 > 0, s 2 ∈ R. Then, for any x ∈ Γ 2 and y ∈ Γ 1 the complex distance ρ s defined by (4.24) satisfies |ρ s ( x, y)/s| ≥ d, Re[ρ s ( x, y)] ≥ σ 0 d m + 1 . Proof. For x ∈ Γ 2 and y ∈ Γ 1 , x j − y j = (x j − y j ) + s −1 1 x jσj (x j ), wherê σ j (x j ) = 1 x j x j 0 σ j (τ )dτ. Then, by the definition of the complex distance ρ s ( x, y) (see (4.24)) we have |ρ s ( x, y)/s| = | x − y| = ( x 1 − y 1 ) 2 + ( x 2 − y 2 ) 2 + ( x 3 − y 3 ) 2 =   3 j=1 (x j − y j ) 2 + 2s −1 1 x jσj (x j )(x j − y j ) + s −2 1 x 2 jσ 2 j (x j )   1/2 ≥ |x − y| ≥ d, where we have used the fact that x jσj (x j )(x j − y j ) ≥ 0 for x ∈ Γ 2 and y ∈ Γ 1 . 
In addition, Re [ρ s ( x, y)] = Re s 2 ( x 1 − y 1 ) 2 + ( x 2 − y 2 ) 2 + ( x 3 − y 3 ) 2 1/2 = s 1 ( x 1 − y 1 ) 2 + ( x 2 − y 2 ) 2 + ( x 3 − y 3 ) 2 ≥   3 j=1 x 2 jσ 2 j (x j )   1/2 . If x j = ±(L j /2 + d j ) ∈ Γ 2 ,|E(p, q)(x)| (4.50) s −2 1 d 1/2 (1 + s −1 1 σ 0 ) 2 e − √ εµσ 0 d m+1 (1 + |s|) q H −1/2 (Div,Γ 1 ) + (1 + |s| 2 ) p H −1/2 (Div,Γ 1 ) and |curl x E(p, q)(x)| (4.51) d 1/2 (1 + s −1 1 σ 0 ) 3 e − √ εµσ 0 d m+1 (1 + |s| 2 ) q H −1/2 (Div,Γ 1 ) + (1 + |s| 3 ) p H −1/2 (Div,Γ 1 ) . We now establish the L 2 -norm and L ∞ -norm error estimates in time between solutions to the original scattering problem and the truncated PML problem (4.29) in the computational domain Ω 1 . T 5 d 2 (1 + σ 0 T ) 15 e −σ 0 d √ εµ/2 J H 10 (0,T ;L 2 (Ω 1 ) 3 ) .s −1 1 (|s| −1 + |s| 3 ) ( B − B)[Ě p Γ 1 ] H −1/2 (Div,Γ 1 ) . (4.56) We now estimate the norm ( B − B)[Ě p Γ 1 ] H −1/2 (Div,Γ 1 ) . ForĚ p | Γ 1 define its PML extensioň E p in the s-domain to be the solution of the exterior problem      ∇ × [(µs) −1 ∇ × v] + εsv = 0 in R 3 \B 1 , n 1 × v = n 1 ×Ě p on Γ 1 , x × (µsv ×x) −x × ( ∇ × v) = o | x| −1 as | x| → ∞. By [34,Theorem 12.2] it is easy to see thatˇ E p has the integral representatioň E p = E(γ t (Ě p ), γ t ( curlˇ E p )). Defineˇ H p := −(µs) −1 curlˇ E p . Then (ˇ E p ,ˇ H p ) satisfies the stretched Maxwell equations in (4.27) in R 3 \B 1 . It is worth noting thatˇ H p is not the extension ofȞ p | Γ 1 . Noting that ∇ × v = A∇ × Bv, we know that Bˇ E p satisfies the problem      ∇ × (µs) −1 BA∇ × v + εs(BA) −1 v = 0 in R 3 \B 1 ,n 1 × v = n 1 ×Ě p on Γ 1 , x × (µsB −1 v ×x) −x × (A∇ × v) = o | x| −1 as | x| → ∞, where we have used the fact thatˇ E p is the extension ofĚ p | Γ 1 and B = diag{1, 1, 1} on Γ 1 . By the definition of B, and since A = diag{1, 1, 1} on Γ 1 , it is easy to see that B[Ě p Γ 1 ] = n 1 × (µs) −1 ∇ ס E p = n 1 × (µs) −1 ∇ × Bˇ E p . 
By the definition of B in (4.38), we obtain that ( B − B)[Ě p Γ 1 ] = n 1 × (µs) −1 ∇ × ω (4.57) where ω satisfies ∇ × (µs) −1 BA∇ × ω + εs(BA) −1 ω = 0 in Ω PML , n 1 × ω = 0 on Γ 1 , n 2 × ω = γ t (Bˇ E p ) on Γ 2 . By Lemma 4.4 and the estimate for BA and (BA) −1 in (4.34)-(4.35), we have n 1 × (µs) −1 ∇ × ω H −1/2 (Div,Γ 1 ) (1 + s −1 1 σ 0 ) 2 (µs) −1 BA∇ × ω H(curl,Ω PML ) (1 + s −1 1 σ 0 ) 2 (1 + s −1 1 σ 0 ) 2 |s| 2 ∇ × ω 2 L 2 (Ω PML ) 3 + (1 + s −1 1 σ 0 ) 4 sω 2 L 2 (Ω PML ) 3 1/2 s −1 1 (1 + s −1 1 σ 0 ) 8 (1 + |s|) 2 γ t (Bˇ E p ) H −1/2 (Div,Γ 2 ) . (4.58) Since ∇ × v = A∇ × Bv and |A −1 | ≤ (1 + σ 0 ) 2 in Ω PML , we have by the boundedness of the trace operator γ t that γ t (Bˇ E p ) H −1/2 (Div,Γ 2 ) Bˇ E p H(curl,Ω PML ) (1 + s −1 1 σ 0 ) 2 ˇ E p H( curl,Ω PML ) . (4.59) By Lemma 4.6 and the boundedness of γ T and γ t it is derived that ˇ E p 2 H( curl,Ω PML ) ≤ ( ˇ E p 2 L ∞ (Ω PML ) + curlˇ E p 2 L ∞ (Ω PML ) )|Ω PML | s −4 1 d 4 (1 + s −1 1 σ 0 ) 6 e −2 √ εµσ 0 d m+1 (1 + |s| 4 ) γ t ( curlˇ E p ) 2 H −1/2 (Div,Γ 1 ) + (1 + |s| 6 ) γ tĚ p 2 H −1/2 (Div,Γ 1 ) s −4 1 d 4 (1 + s −1 1 σ 0 ) 6 e −2 √ εµσ 0 d m+1 (1 + |s| 4 ) 2 γ Tˇ E p 2 H −1/2 (Curl,Γ 1 ) + (1 + |s| 6 ) γ tĚ p 2 H −1/2 (Div,Γ 1 ) s −4 1 d 4 (1 + s −1 1 σ 0 ) 6 e −2 √ εµσ 0 d m+1 4 l=0 s lĚp 2 H(curl, Ω 1 ) s −6 1 d 4 (1 + s −1 1 σ 0 ) 10 e −2 √ εµσ 0 d m+1 5 l=0 s lJ 2 L 2 (Ω 1 ) 3 ,(4.s −10 1 d 4 (1 + s −1 1 σ 0 ) 30 e −2 √ εµσ 0 d m+1 10 l=0 s lJ 2 L 2 (Ω 1 ) 3 . 
(4.61) This, together with the Parseval identity for the Laplace transform (see [22, (2.46 )]) 1 2π ∞ −∞ǔ (s) ·v(s)ds 2 = ∞ 0 e −2s 1 t u(t) · v(t)dt, for all s 1 > λ, where λ is the abscissa of convergence forǔ andv, gives U 2 L 2 (0,T ;H(curl,Ω 1 )) + V 2 L 2 (0,T ;H(curl,Ω 1 )) = T 0 U 2 H(curl,Ω 1 ) + V 2 H(curl,Ω 1 ) dt ≤ e 2s 1 T ∞ 0 e −2s 1 t U 2 H(curl,Ω 1 ) + V 2 H(curl,Ω 1 ) dt e 2s 1 T ∞ 0 s −10 1 d 4 (1 + s −1 1 σ 0 ) 30 e −2 √ εµσ 0 d m+1 10 l=0 s lJ 2 L 2 (Ω 1 ) 3 ds 2 e 2s 1 T s −10 1 d 4 (1 + s −1 1 σ 0 ) 30 e −2 √ εµσ 0 d m+1 J 2 H 10 (0,T ;L 2 (Ω 1 ) 3 ) ,(4.62) where we have used the assumptions (3.3) and (3.4) to get the last inequality. It is obvious that m should be chosen small enough to ensure rapid convergence (thus we need to take m = 1). Since s −1 1 = T in (4.62), we obtain the required estimate (4.52) by using the Cauchy-Schwartz inequality. We now prove (4.53). Since (E, H) and (E p , H p ) satisfy the equations (3.6) and (4.49), respectively, it is easy to verify that (U , V ) satisfies the problem                ∇ × U + µ∂ t V = 0 in Ω 1 × (0, T ), ∇ × V − ε∂ t U = 0 in Ω 1 × (0, T ), n × U = 0 on Γ × (0, T ), U (x, 0) = V (x, 0) = 0 in Ω 1 , V × n 1 = (T −T )[E p Γ 1 ] + T [U Γ 1 ] on Γ 1 × (0, T ). (4.63) Eliminating V yields that            ∇ × (µ −1 ∇ × U ) + ε∂ 2 t U = 0 in Ω 1 × (0, T ), n × U = 0 on Γ × (0, T ), U (x, 0) = ∂ t U (x, 0) = 0 in Ω 1 , µ −1 (∇ × U ) × n 1 + C [U Γ 1 ] = (T − T )[∂ t E p Γ 1 ] on Γ 1 × (0, T ),(4.64) where C = L −1 • sB • L . The variational problem of (4.64) is to find U ∈ H Γ (curl, Ω 1 ) for all t > 0 such that Ω 1 ε∂ 2 t U · ωdx = − Ω 1 µ −1 (∇ × U )(∇ × ω)dx (4.65) + Γ 1 (T − T )[∂ t E p Γ 1 ] · ω Γ 1 dγ − Γ 1 C [U Γ 1 ] · ω Γ 1 dγ, ∀ω ∈ H Γ (curl, Ω 1 ). For 0 < ξ < T , introduce the auxiliary function Ψ 1 (x, t) = ξ t U (x, τ )dτ, x ∈ Ω 1 , 0 ≤ t ≤ ξ. Then it is easy to verify that Ψ 1 (x, ξ) = 0, ∂ t Ψ 1 (x, t) = −U (x, t). 
(4.66) For any φ(x, t) ∈ L 2 0, ξ; L 2 (Ω 1 ) 3 , using integration by parts and condition (4.66), we have ξ 0 φ(x, t) · Ψ 1 (x, t)dt = ξ 0 t 0 φ(x, τ )dτ · U (x, t)dt. (4.67) Taking the test function ω = Ψ 1 in (4.65) and using (4.66) give Re ξ 0 Ω 1 ε∂ 2 t U · Ψ 1 dxdt =Re Ω 1 ξ 0 ε ∂ t (∂ t U · Ψ 1 ) + ∂ t U · U dtdx = 1 2 √ εU (·, ξ) 2 L 2 (Ω 1 ) 3 . (4.68) By (4.67) we have the estimate Re ξ 0 Ω 1 µ −1 (∇ × U ) · (∇ × Ψ 1 )dxdt = Re Ω 1 ξ 0 µ −1 (∇ × U ) · ξ t (∇ × U (x, τ ))dτ dtdx = Ω 1 µ −1 ξ 0 (∇ × U )(x, t)dt 2 dx − Re ξ 0 Ω 1 µ −1 (∇ × U ) · (∇ × Ψ 1 )dxdt, which implies that Re ξ 0 Ω 1 µ −1 (∇ × U ) · (∇ × Ψ 1 )dxdt = 1 2 Ω 1 µ −1 ξ 0 ∇ × U (x, t)dt 2 dx. (4.69) Integrating (4.65) from t = 0 to t = ξ and taking the real parts yield 1 2 √ εU (·, ξ) 2 L 2 (Ω 1 ) 3 + 1 2 Ω 1 µ −1 ξ 0 ∇ × U (x, t)dt 2 = Re ξ 0 Γ 1 (T − T )[∂ t E p Γ 1 ] · Ψ 1Γ 1 dγdt − Re ξ 0 Γ 1 C [U Γ 1 ] · Ψ 1Γ 1 dγdt. (4.70) First, using (4.67) and Lemma 3.2, we have Re ξ 0 Γ 1 C [U Γ 1 ] · Ψ 1Γ 1 dγdt = Re Γ 1 ξ 0 t 0 C [U Γ 1 ](x, τ )dτ · U Γ 1 (x, t)dt dγ ≥ 0. (4.71) Then, and by (4.67) we deduce the estimate where we have used the trace theorem to get the last inequality. The right-hand of (4.72) contains the term 1 2 √ εU (·, ξ) 2 L 2 (Ω 1 ) 3 + 1 2 Ω 1 µ −1 ξ 0 ∇ × U (x, t)dt 2 dx ≤ Re ξ 0 Γ 1 (T − T )[∂ t E p Γ 1 ] · Ψ 1Γ 1 dγdt = Re ξ 0 Γ 1 t 0 (T − T )[∂ τ E p Γ 1 ]dτ U Γ 1 (x, t)dγdt ξ 0 (T − T )[∂ t E p Γ 1 ](·, t) H −1/2 (Div,Γ 1 ) dt ξ 0 U (·, t) H(curl,Ω 1 ) dt ,ξ 0 U (·, t) H(curl,Ω 1 ) dt = ξ 0 U (·, t) 2 L 2 (Ω 1 ) 3 + ∇ × U (·, t) 2 L 2 (Ω 1 ) 3 1 2 dt which cannot be controlled by the left-hand of (4.72). 
To address this issue, we consider the new problem            ∇ × (µ −1 ∇ × (∂ t U )) + ε∂ 2 t (∂ t U ) = 0 in Ω 1 × (0, T ), n × ∂ t U = 0 on Γ × (0, T ), ∂ t U (x, 0) = ∂ 2 t U (x, 0) = 0 in Ω 1 , µ −1 (∇ × (∂ t U )) × n 1 + C [∂ t U Γ 1 ] = (T − T )[∂ 2 t E p Γ 1 ] on Γ 1 × (0, T ), (4.73) which is obtained by differentiating each equation of (4.64) with respect to t. By a similar argument as in deriving (4.65), we obtain the variational formulation of (4.73): find u such that for all ω ∈ H Γ (curl, Ω 1 ), Ω 1 ε∂ 2 t (∂ t U ) · ωdx = − Ω 1 µ −1 (∇ × (∂ t U ))(∇ × ω)dx + Γ 1 (T − T )[∂ 2 t E p Γ 1 ] · ω Γ 1 dγ − Γ 1 C [∂ t U Γ 1 ] · ω Γ 1 dγ. (4.74) Define the auxiliary function Ψ 2 (x, t) = ξ t ∂ τ U (x, τ )dτ, x ∈ Ω 1 , 0 ≤ t ≤ ξ. Similarly as in the derivation of (4.68)-(4.69), we conclude by integration by parts that Re ξ 0 Ω 1 ε∂ 2 t (∂ t U ) · Ψ 2 dxdt = 1 2 √ ε∂ t U (·, ξ) 2 L 2 (Ω 1 ) 3 , (4.75) Re ξ 0 Ω R µ −1 e (∇ × (∂ t U )) · (∇ × Ψ 2 )dxdt = 1 2 1 √ µ ∇ × U (·, ξ) 2 L 2 (Ω 1 ) 3 . (4.76) Choosing the test function ω = Ψ 2 in (4.74), integrating the resulting equation with respect to t from t = 0 to t = ξ and taking the real parts yield 1 2 √ ε∂ t U (·, ξ) 2 L 2 (Ω 1 ) 3 + 1 2 1 √ µ ∇ × U (·, ξ) 2 L 2 (Ω 1 ) 3 = Re ξ 0 Γ 1 (T − T )[∂ 2 t E p Γ 1 ] · Ψ 2Γ 1 dγdt − Re ξ 0 Γ 1 C [∂ t U Γ 1 ] · Ψ 2Γ 1 dγdt. (4.77) Similarly to (4.71), it follows from (4.67) and Lemma 3.3 that Re ξ 0 Γ 1 C [∂ t U Γ 1 ] · Ψ 2Γ 1 dγdt ≥ 0. 
Thus, and by (4.77) we have 1 2 √ ε∂ t U (·, ξ) 2 L 2 (Ω 1 ) 3 + 1 2 1 √ µ ∇ × U (·, ξ) 2 L 2 (Ω 1 ) 3 ≤ Re ξ 0 Γ 1 (T − T )[∂ 2 t E p Γ 1 ] · Ψ 2Γ 1 dγdt = Re ξ 0 Γ 1 t 0 (T − T )[∂ 2 τ E p Γ 1 ]dτ ∂ t U Γ 1 (x, t)dγdt = Re ξ 0 Γ 1 (T − T )[∂ 2 t E p Γ 1 ] · U Γ 1 (x, ξ)dγdt − Re ξ 0 Γ 1 (T − T )[∂ 2 t E p Γ 1 ]U Γ 1 (x, t)dγdt ≤ ξ 0 (T − T )[∂ 2 t E p Γ 1 ] H(Div,Γ 1 ) · U (·, ξ) H(curl,Ω 1 ) + U (·, t) H(curl,Ω 1 ) dt (4.78) Combining (4.72) and (4.78) gives U (·, ξ) 2 L 2 (Ω 1 ) 3 + ∂ t U (·, ξ) 2 L 2 (Ω 1 ) 3 + ∇ × U (·, ξ) 2 L 2 (Ω 1 ) 3 ξ 0 (T − T )[∂ t E p Γ 1 ](·, t) H −1/2 (Div,Γ 1 ) dt ξ 0 U (·, t) H(curl,Ω 1 ) dt + ξ 0 (T − T )[∂ 2 t E p Γ 1 ] H −1/2 (Div,Γ 1 ) · U (·, ξ) H(curl,Ω 1 ) + U (·, t) H(curl,Ω 1 ) dt. (4.79) Taking the L ∞ -norm of both sides of (4.79) with respect to ξ and using the Young inequality yield U 2 L ∞ (0,T ;L 2 (Ω 1 ) 3 ) + ∂ t U 2 L ∞ (0,T ;L 2 (Ω 1 ) 3 ) + ∇ × U 2 L ∞ (0,T ;L 2 (Ω 1 ) 3 ) T 2 (T − T )[∂ t E p Γ 1 ] 2 L 1 (0,T ;H −1/2 (Div,Γ 1 )) + (T − T )[∂ 2 t E p Γ 1 ] 2 L 1 (0,T ;H −1/2 (Div,Γ 1 )) , which, together with the Cauchy-Schwartz inequality, implies that U L ∞ (0,T ;L 2 (Ω 1 ) 3 ) + ∂ t U L ∞ (0,T ;L 2 (Ω 1 ) 3 ) + ∇ × U L ∞ (0,T ;L 2 (Ω 1 ) 3 ) (4.80) T 3/2 (T − T )[∂ t E p Γ 1 ] L 2 (0,T ;H −1/2 (Div,Γ 1 )) + T 1/2 (T − T )[∂ 2 t E p Γ 1 ] L 2 (0,T ;H −1/2 (Div,Γ 1 )) . We now only need to estimate the right-hand term of (4.80). By (4.57) and the definition ofT (see (4.49)) we know that (T − T )[∂ t E p From this, the definition of U and Maxwell's system (4.63) the required estimate (4.53) then follows. The proof is thus complete. Γ 1 ] = n 1 × µ −1 ∇ × v, where v satisfies the problem            ∇ × (µ −1 BA∇ × v) + ε(BA) −1 ∂ 2 t v = 0 in Ω PML × (0, T ), n 1 × v = 0 on Γ 1 × (0, T ),n 2 × v = γ t (B E p ) on Γ 2 × (0, T ), v(x, 0) = ∂ t v(x, 0) = 0 in Ω PML . Remark 4.8. The L 2 -norm error estimate (4.52) can also be obtained by integrating (4.79) with respect to ξ from 0 to T . 
The idea of using the uniform coercivity of the variational form in our proof of the L^2-norm error estimate (4.52) is also known for the time-harmonic PML method. This builds, in some sense, a connection between our proposed time-domain PML method with the real coordinate stretching technique and the time-harmonic PML method.

Conclusions

In this paper, by using the real coordinate stretching technique we proposed a uniaxial PML method in Cartesian coordinates for 3D time-domain electromagnetic scattering problems, which is advantageous over the spherical one in dealing with scattering problems involving anisotropic scatterers. The well-posedness and stability estimates of the truncated uniaxial PML problem in the time domain were established by employing the Laplace transform technique and the energy argument. The exponential convergence of the uniaxial PML method was also proved in terms of the thickness and absorbing parameters of the PML layer, based on the error estimate, established in this paper via the decay estimate of the dyadic Green's function, between the EtM operators for the original scattering problem and the truncated PML problem. Our method can be extended to other electromagnetic scattering problems, such as scattering by inhomogeneous media or bounded elastic bodies, as well as scattering in a two-layered medium. It is also interesting to study the spherical and Cartesian PML methods for time-domain elastic scattering problems, which is more challenging due to the existence of shear and compressional waves with different wave speeds. We hope to report such results in the near future.

H_S(curl, D) := {u ∈ H(curl, D) : γ_t u = 0 on S}. In particular, if S = Σ then we write H_0(curl, D) := H_Σ(curl, D).

Figure 1: Geometric configuration of the uniaxial PML, which is a cubic domain surrounding B_1. Denote by n_2 the unit outward normal to Γ_2.

Existence and uniqueness of solutions to the problem (4.31) then follow from the Lax-Milgram theorem.
The estimate (4.33) can be obtained by combining (4.31), (4.36) and the Cauchy-Schwartz inequality. The proof is thus complete.

To show the well-posedness of the truncated PML problem (4.29) in the time domain, we need the following lemma, which is the analog, for the Laplace transform, of the Paley-Wiener-Schwartz theorem for the Fourier transform of distributions with compact support [36, Theorem 43.1].

Lemma 4.2 ([36, Theorem 43.1]). Let ω̌(s) denote a holomorphic function in the half complex plane s_1 = Re(s) > σ_0 for some σ_0 ∈ R, valued in the Banach space E. Then the following statements are equivalent: 1) there is a distribution ω ∈ D′_+(E) whose Laplace transform is equal to ω̌(s), where D′_+(E) is the space of distributions on the real line which vanish identically in the open negative half-line; 2) there is a σ_1 with σ_0 ≤ σ_1 < ∞ and an integer m ≥ 0 such that for all complex numbers s with s_1 = Re(s) > σ_1 it holds that ‖ω̌(s)‖_E ≲ (1 + |s|)^m.

The well-posedness and stability of the truncated PML problem (4.29) can be proved by using Lemmas 4.1 and 4.2 and the energy method (cf. [16, Theorem 3.1]).

Theorem 4.3. Let s_1 = 1/T. Then the truncated PML problem (4.29) in the time domain has a unique solution (E_p(x, t), H_p(x, t)).

(4.40) Define the sesquilinear form a_PML : H(curl, Ω_PML) × H(curl, Ω_PML) → C as a_PML(u, …), and by the Lax-Milgram theorem it follows that the variational problem (4.42) has a unique solution. We have the following stability result for the solution to the problem (4.40).

(4.44) Proof. Let u_0 ∈ H(curl, Ω_PML) be such that n_1 × u_0 = λ, n_2 × u_0 = ξ on Γ_2. Then, by (4.42) we have ω := u − u_0 ∈ H_0(curl, Ω_PML) and

a_PML(ω, V) = −a_PML(u_0, V),  ∀ V ∈ H_0(curl, Ω_PML).   (4.45)

This, combined with (4.41)-(4.43) and the Cauchy-Schwartz inequality, gives … T = L^{-1} ∘ B ∘ L is the time-domain EtM operator for the PML problem. Then, by (4.20) we have |x_j σ̂_j(x_j)| = σ_0 d/(m + 1). Thus, Re[ρ_s(x̃, y)] ≥ σ_0 d/(m + 1).
The proof is thus complete.

By Lemma 4.5, and arguing similarly as in the proof of [42, Lemma 5.3], we have similar estimates as in [42, Lemma 5.3] for the stretched dyadic Green's function G̃ in the PML layer. Then we have the following decay property of the PML extension (cf. [42, Theorem 5.4]).

Lemma 4.6. For any p, q ∈ H^{-1/2}(Div, Γ_1), let E(p, q) be the PML extension in the s-domain defined in (4.22). Then, for any x ∈ Ω_PML, the estimates (4.50) and (4.51) hold.

Theorem 4.7. Let (E, H) and (E_p, H_p) be the solutions of the problems (1.1a)-(1.1e) and (4.29) with s_1 = 1/T, respectively. If the assumptions (3.3) and (3.4) are satisfied, then we have the error estimates

‖E − E_p‖_{L^2(0,T;H(curl,Ω_1))} + ‖H − H_p‖_{L^2(0,T;H(curl,Ω_1))} ≲ T^5 d^2 (1 + σ_0 T)^{15} e^{−σ_0 d √(εµ)/2} ‖J‖_{H^{10}(0,T;L^2(Ω_1)^3)},   (4.52)

‖E − E_p‖_{L^∞(0,T;H(curl,Ω_1))} + ‖H − H_p‖_{L^∞(0,T;H(curl,Ω_1))} ≲ T^{11/2} d^2 (1 + σ_0 T)^{15} e^{−σ_0 d √(εµ)/2} ‖J‖_{H^9(0,T;L^2(Ω_1)^3)}.   (4.53)

We first prove (4.52). Let U = E − E_p and V = H − H_p, and let Ě and Ě_p be the solutions to the variational problems (3.14) and (4.47), respectively. Then, by (3.14) and (4.47) we get

a(Ǔ, Ǔ) = a(Ě, Ǔ) − a(Ě_p, Ǔ) = ⟨(B̃ − B)[Ě_p Γ_1], Ǔ_Γ_1⟩_Γ_1.

By the Maxwell equations in Ω_1 obtained by taking the Laplace transform of the problems (1.1a)-(1.1e) and (4.29), it follows that ‖V̌‖_{H(curl,Ω_1)} ≲ (|s| + |s|^{-1}) ‖Ǔ‖_{H(curl,Ω_1)}. This, combined with (4.55), leads to the result ‖Ǔ‖_{H(curl,Ω_1)} + ‖V̌‖_{H(curl,Ω_1)} ≲ … . By (4.80) and the above two estimates it follows on setting s_1 = 1/T and m = 1 that

‖U‖_{L^∞(0,T;L^2(Ω_1)^3)} + ‖∂_t U‖_{L^∞(0,T;L^2(Ω_1)^3)} + ‖∇ × U‖_{L^∞(0,T;L^2(Ω_1)^3)} ≲ T^{11/2} d^2 (1 + σ_0 T)^{15} e^{−√(εµ) σ_0 d/2} ‖J‖_{H^9(0,T;L^2(Ω_1)^3)}.

Acknowledgments

This work was partly supported by the NNSF of China grants 11771349 and 91630309. The first author was also partly supported by the National Research Foundation of Korea (NRF-2020R1I1A1A01073356).
References

[1] G. Bao, Y. Gao and P. Li, Time-domain analysis of an acoustic-elastic interaction problem, Arch. Rational Mech. Anal. 229 (2018), 835-884.
[2] G. Bao and H. Wu, Convergence analysis of the perfectly matched layer problems for time-harmonic Maxwell's equations, SIAM J. Numer. Anal. 43 (2005), 2121-2143.
[3] J.P. Bérenger, A perfectly matched layer for the absorption of electromagnetic waves, J. Comput. Phys. 114 (1994), 185-200.
[4] J.H. Bramble and J.E. Pasciak, Analysis of a finite PML approximation for the three dimensional time-harmonic Maxwell and acoustic scattering problems, Math. Comp. 76 (2007), 597-614.
[5] J.H. Bramble and J.E. Pasciak, Analysis of a finite element PML approximation for the three dimensional time-harmonic Maxwell problem, Math. Comput. 77 (2008), 1-10.
[6] J.H. Bramble and J.E. Pasciak, Analysis of a Cartesian PML approximation to the three dimensional electromagnetic wave scattering problem, Int. J. Numer. Anal. Model. 9 (2012), 543-561.
[7] J.H. Bramble and J.E. Pasciak, Analysis of a Cartesian PML approximation to acoustic scattering problems in R^2 and R^3, J. Comput. Appl. Math. 247 (2013), 209-230.
[8] J.H. Bramble, J.E. Pasciak and D. Trenev, Analysis of a finite PML approximation to the three dimensional elastic wave scattering problem, Math. Comput. 79 (2010), 2079-2101.
[9] A. Buffa, M. Costabel and D. Sheen, On traces for H(curl, Ω) in Lipschitz domains, J. Math. Anal. Appl. 276 (2002), 845-867.
[10] W.C. Chew and W.H. Weedon, A 3D perfectly matched medium from modified Maxwell's equations with stretched coordinates, Microw. Opt. Technol. Lett. 7 (1994), 599-604.
[11] Q. Chen and P. Monk, Discretization of the time domain CFIE for acoustic scattering problems using convolution quadrature, SIAM J. Math. Anal. 46 (2014), 3107-3130.
[12] Z. Chen, Convergence of the time-domain perfectly matched layer method for acoustic scattering problems, Int. J. Numer. Anal. Model. 6 (2009), 124-146.
[13] J. Chen and Z. Chen, An adaptive perfectly matched layer technique for 3-D time-harmonic electromagnetic scattering problems, Math. Comput. 77 (2007), 673-698.
[14] Z. Chen, T. Cui and L. Zhang, An adaptive anisotropic perfectly matched layer method for 3-D time harmonic electromagnetic scattering problems, Numer. Math. 125 (2013), 639-677.
[15] Z. Chen and X. Liu, An adaptive perfectly matched layer technique for time-harmonic scattering problems, SIAM J. Numer. Anal. 43 (2005), 645-671.
[16] Z. Chen and J.C. Nédélec, On Maxwell equations with the transparent boundary condition, J. Comput. Math. 26 (2008), 284-296.
[17] Z. Chen and X. Wu, An adaptive uniaxial perfectly matched layer method for time-harmonic scattering problems, Numer. Math. TMA 1 (2008), 113-137.
[18] Z. Chen and X. Wu, Long-time stability and convergence of the uniaxial perfectly matched layer method for time-domain acoustic scattering problems, SIAM J. Numer. Anal. 50 (2012), 2632-2655.
[19] Z. Chen, X. Xiang and X. Zhang, Convergence of the PML method for elastic wave scattering problems, Math. Comp. 85 (2016), 2687-2714.
[20] Z. Chen and W. Zheng, Convergence of the uniaxial perfectly matched layer method for time-harmonic scattering problems in two-layered media, SIAM J. Numer. Anal. 48 (2010), 2158-2185.
[21] Z. Chen and W. Zheng, PML method for electromagnetic scattering problem in a two-layer medium, SIAM J. Numer. Anal. 55 (2017), 2050-2084.
[22] A.M. Cohen, Numerical Methods for Laplace Transform Inversion, Springer, 2007.
[23] F. Collino and P. Monk, The perfectly matched layer in curvilinear coordinates, SIAM J. Sci. Comput. 19 (1998), 2061-2090.
[24] A.T. DeHoop, P.M. van den Berg and R.F. Remis, Absorbing boundary conditions and perfectly matched layers - An analytic time-domain performance analysis, IEEE Trans. Magn. 38 (2002), 657-660.
[25] J. Diaz and P. Joly, A time domain analysis of PML models in acoustics, Comput. Methods Appl. Mech. Engrg. 195 (2006), 3820-3853.
[26] Y. Gao and P. Li, Analysis of time-domain scattering by periodic structures, J. Differential Equations 261 (2016), 5094-5118.
[27] Y. Gao and P. Li, Electromagnetic scattering for time-domain Maxwell's equations in an unbounded structure, Math. Models Methods Appl. Sci. 27 (2017), 1843-1870.
[28] Y. Gao, P. Li and B. Zhang, Analysis of transient acoustic-elastic interaction in an unbounded structure, SIAM J. Math. Anal. 49 (2017), 3951-3972.
[29] T. Hagstrom, Radiation boundary conditions for the numerical simulation of waves, Acta Numer. 8 (1999), 47-106.
[30] T. Hohage, F. Schmidt and L. Zschiedrich, Solving time-harmonic scattering problems based on the pole condition II: Convergence of the PML method, SIAM J. Math. Anal. 35 (2003), 547-560.
[31] G.C. Hsiao, F.J. Sayas and R.J. Weinacht, Time-dependent fluid-structure interaction, Math. Method. Appl. Sci. 40 (2017), 486-500.
[32] M. Lassas and E. Somersalo, On the existence and convergence of the solution of PML equations, Computing 60 (1998), 229-241.
[33] P. Li, L. Wang and A. Wood, Analysis of transient electromagnetic scattering from a three-dimensional open cavity, SIAM J. Appl. Math. 75 (2015), 1675-1699.
[34] P. Monk, Finite Element Methods for Maxwell's Equations, Oxford Univ. Press, New York, 2003.
[35] F.L. Teixeira and W.C. Chew, Advances in the theory of perfectly matched layers, in: Fast and Efficient Algorithms in Computational Electromagnetics (ed. W.C. Chew et al.), Artech House, Boston, 2001, pp. 283-346.
[36] F. Trèves, Basic Linear Partial Differential Equations, Academic Press, New York, 1975.
[37] L. Wang, B. Wang and X. Zhao, Fast and accurate computation of time-domain acoustic scattering problems with exact nonreflecting boundary conditions, SIAM J. Appl. Math. 72 (2012), 1869-1898.
[38] G.N. Watson, A Treatise on The Theory of Bessel Functions, Cambridge University Press, Cambridge, UK, 1922.
[39] C. Wei and J. Yang, Analysis of a time-dependent fluid-solid interaction problem above a local rough surface, Sci. China Math. 63 (2020), 887-906.
[40] C. Wei, J. Yang and B. Zhang, A time-dependent interaction problem between an electromagnetic field and an elastic body, Acta Math. Appl. Sin. Engl. Ser. 36 (2020), 95-118.
[41] C. Wei, J. Yang and B. Zhang, Convergence of the perfectly matched layer method for transient acoustic-elastic interaction above an unbounded rough surface, arXiv:1907.09703, 2019.
[42] C. Wei, J. Yang and B. Zhang, Convergence analysis of the PML method for time-domain electromagnetic scattering problems, SIAM J. Numer. Anal. 58 (2020), 1918-1940.
An adaptive timestepping methodology for particle advance in coupled CFD-DEM simulations

Hariswaran Sitaraman and Ray Grout
High Performance Algorithms and Complex Fluids Group, Computational Science Center, National Renewable Energy Laboratory, 15013 Denver West Pkwy, Golden, CO 80401

Abstract

An adaptive integration technique for time advancement of particle motion in the context of coupled computational fluid dynamics (CFD) - discrete element method (DEM) simulations is presented in this work. CFD-DEM models provide an accurate description of multiphase physical systems where a granular phase exists in an underlying continuous medium. The time integration of the granular phase in these simulations presents unique computational challenges due to large variations in the time scales associated with particle collisions. The algorithm presented in this work uses a local time stepping approach to resolve collisional time scales for only a subset of particles that are in close proximity to potential collision partners, thereby resulting in a substantial reduction of computational cost. This approach is observed to be 2-3X faster than traditional explicit methods for problems that involve both dense and dilute regions, while maintaining the same level of accuracy.

Introduction

Simulations with coupled computational fluid dynamics (CFD) and discrete element method (DEM) are typically used to model physical systems that involve the motion of particles in an underlying fluid medium. These multiphase systems are observed in several industrial processes such as fluidized beds [1,2], riser reactors [3,4], and combustion and reacting systems [5,6].
The coupled CFD-DEM approach provides a more accurate representation of these physical systems than multiphase approaches such as two-fluid models [7], where continuum transport equations are solved for the dispersed phase. The latter approach requires substantial modeling of closure terms for the interaction between phases [7]. The CFD-DEM approach, on the other hand, provides an alternate closure for these terms based on Newton's laws governing the motion of the particles that represent the dispersed phase. The interaction between the particle and continuous phases then requires models only for the fluid-induced forces such as lift and drag. In academic settings, such simulations are often configured so that the particles represent real particles and the interaction with the continuous phase is captured by momentum exchange due to particle drag. The continuous phase is treated as usual by discretizing the fluid transport equations for mass, momentum and energy.

However, the more accurate physical representation comes with increased computational cost for large-scale industrial systems with very small particle sizes. One of the computationally expensive facets of a coupled CFD-DEM solver is the time integration of particle motion. Explicit schemes are typically preferred for time-stepping in DEM due to their accuracy and minimal storage and compute requirements [8,9]; implicit schemes in the context of DEM, which involve Jacobian computations for large particle counts (on the order of 10^6-10^9), are expensive and infeasible [10]. The use of explicit methods, however, imposes numerical stability and accuracy restrictions on time-step sizes; specifically, particles that undergo collisions need to be advanced with smaller time-steps than non-colliding particles. The time-step is usually set globally to the limit for accuracy and stability imposed by the collisions, and is typically orders of magnitude smaller than that required away from collisions.
This work addresses this precise issue and provides a strategy to avoid the use of a globally conservative small time-step size for the entire set of particles. A novel time-stepping algorithm for CFD-DEM solvers using a partitioning approach that allows for variable time-steps among particles is described, and its computational performance is compared against a baseline explicit method typically used in several CFD-DEM solvers.

Mathematical model and numerical methods

The CFD-DEM solver MFIX-DEM [11], implemented on the adaptive mesh refinement library AMReX [12], is used in this work. MFIX uses an incompressible staggered-grid solver for the continuous phase, while particles are transported in a Lagrangian fashion using fluid and body forces (such as drag, pressure gradient and gravity). They also undergo collisions, for which a soft-sphere model such as the linear-spring-dashpot (LSD) [13] approach is used. In most coupled simulations, the fluid time-step size ∆t_f is larger than the particle time-step ∆t_p. Therefore, coupled simulations are performed using a subcycling approach, as shown by the following tasks at each time-step:

1. The incompressible Navier-Stokes equations are integrated for a time-step of ∆t_f for the continuous phase.
2. Particles are integrated using a ∆t_p time-step size for time ∆t_f:
   (a) body forces are calculated using fluid variables interpolated onto particle positions;
   (b) collisional forces are computed using particle neighbor lists;
   (c) velocity and position are advanced using an explicit scheme.
3. Particle data is deposited onto the fluid grid using deposition kernels and volume averaging.

The focus of this work is on the time-integration aspect of the discrete element method. The particle position and velocity are typically advanced using an explicit second-order velocity Verlet [14] scheme. In this method, velocity is advanced at half time levels n + 1/2, while position at time level n + 1 is advanced using this updated velocity.
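The subcycled coupling in the task list above can be sketched as a single coupled step (a minimal illustration with placeholder callables standing in for the fluid, particle and deposition kernels; `coupled_step` and its arguments are hypothetical names, not the MFIX-DEM API):

```python
def coupled_step(dt_f, dt_p, fluid_step, particle_step, deposit):
    """One coupled CFD-DEM time-step: advance the fluid by dt_f, subcycle
    the particles with dt_p <= dt_f, then deposit particle data back onto
    the grid.  The three callables are placeholder interfaces."""
    fluid_step(dt_f)                    # task 1: integrate Navier-Stokes
    t, n_sub = 0.0, 0
    while dt_f - t > 1e-9 * dt_f:       # task 2: particle subcycling
        dt = min(dt_p, dt_f - t)        # the last subcycle may be shortened
        particle_step(dt)               # 2(a)-(c): drag, collisions, update
        t += dt
        n_sub += 1
    deposit()                           # task 3: volume-averaged deposition
    return n_sub

# With dt_f = 1e-4 s and dt_p = 1e-5 s the particles are subcycled 10 times.
n = coupled_step(1e-4, 1e-5, lambda dt: None, lambda dt: None, lambda: None)
print(n)
```

The return value counts particle subcycles, which is the quantity the local time-stepping scheme later reduces in dilute regions.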
The numerical scheme can be expressed as

v_i^{n+1/2} = v_i^{n-1/2} + ∆t_p F_i^n / m_i ;    x_i^{n+1} = x_i^n + ∆t_p v_i^{n+1/2} .    (1)

Here, v_i, x_i and m_i represent the velocity, position vector and mass of particle i, respectively, and F_i^n represents the total force on particle i computed using the position and velocity from the previous time level n.

Explicit scheme with orthogonal recursive bisection (ORB)

The time-step size for the explicit method is restricted by the collisional time scale. This time scale depends on the collisional spring constant k and the particle mass, and is given by τ_p = sqrt(m_i/k). The traditional velocity Verlet scheme requires the time-step size ∆t_p to be a factor of 5-20X lower than the collision time scale τ_p for stability and accuracy [2,15]. However, this restriction only applies to particles that collide during a given time-step and need not be applied to dilute zones of the computational domain. An adaptive time-stepping approach, where colliding particles are resolved using collisional time scales while non-colliding particles are advanced with larger time-steps, can provide significant savings in computational cost. This is the motivation for our explicit ORB scheme.

ORB [16,17] is one of several possible heuristics for partitioning point clouds and identifying clusters; k-means clustering [18] and minimum spanning trees (MST) [19] are other possibilities, with trade-offs between construction/incremental-update time and effectiveness. ORB has the advantages of being relatively quick and easy to update incrementally, and it has the required heuristic behavior (i.e., it will split the region in half with a cluster on each side) when groups of particles are well separated (clustered). When the partitioned domain is about the size of a cluster, ORB will split the cluster in half, which is almost the worst-case scenario.
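A toy two-dimensional version of the median-split ORB heuristic illustrates the clustered decomposition (a sketch only, not the solver's partitioner; `orb_partition` and the sample clusters are invented for illustration):

```python
def orb_partition(points, levels, axis=0):
    """Orthogonal recursive bisection sketch: split the point set at the
    median along one axis, alternate axes, and recurse `levels` times.
    The median split balances particle counts between the two halves."""
    if levels == 0 or not points:
        return [points]
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    nxt = (axis + 1) % len(points[0])   # cycle through the dimensions
    return (orb_partition(pts[:mid], levels - 1, nxt) +
            orb_partition(pts[mid:], levels - 1, nxt))

# Two well-separated 2-D clusters: a single bisection isolates each one,
# mirroring the clustered decomposition sketched in Fig. 1(b).
cluster_a = [(0.10, 0.10), (0.20, 0.10), (0.15, 0.20)]
cluster_b = [(0.80, 0.90), (0.90, 0.80), (0.85, 0.85)]
leaves = orb_partition(cluster_a + cluster_b, levels=1)
print([len(leaf) for leaf in leaves])   # one cluster per leaf: [3, 3]
```

When the two clusters straddle the median, the same split lands in the middle of a cluster, which is the worst-case behavior noted above.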
The results presented in section 3 are sensitive to the interplay between the number of ORB levels and the particle cluster size/separation, and we continue to look for heuristics that better balance the behavior we want with speed of construction. A generic distribution of particles with clustering, as occurs in a DEM simulation in a cartesian domain, is shown in Fig. 1(a). A decomposition of these particles using ORB is shown in Fig. 1(b), where each subset contains a particle cluster. The explicit ORB scheme advances each of these domains separately, with its local time scale, to the desired global time. Therefore, domains with clustering, where there are colliding particles, perform multiple iterations, while domains with particles that may not collide advance with larger time-steps. There are three potential advantages to this approach:

1. The use of a local time-step for colliding versus non-colliding particles reduces computational cost.
2. The neighbor search is restricted to each of the sub-domains, as opposed to a global O(n^2) search.
3. Advancing subsets of particles can improve temporal locality in memory, as opposed to a global update of all particles one after the other.

Numerical implementation

In order to achieve spatial locality of fluid and particle data, the computational domain is divided into boxes that contain the fluid and co-located particles. The message passing interface (MPI) is used for parallelization, with the boxes distributed among the MPI ranks. The distribution of boxes is obtained in a way that minimizes communication distance (a space-filling curve [20] approach is used) and equalizes load (a "knapsack" [21] algorithm is applied to balance load). Each rank operates over the boxes it owns and updates the fluid and particle fields, with subsequent redistribution of ghost data. The ORB strategy is used here to partition the domain into boxes that balance the number of particles.
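The "knapsack" load balancing mentioned above can be illustrated with a simple greedy heuristic that assigns the costliest boxes first to the currently least-loaded rank (a sketch under that assumption; `knapsack_balance` is a hypothetical name, not the AMReX implementation):

```python
import heapq

def knapsack_balance(box_costs, n_ranks):
    """Greedy load balancing in the spirit of a knapsack rebalance: boxes
    are visited in order of decreasing cost and each one is handed to the
    rank with the smallest accumulated load so far."""
    heap = [(0.0, r) for r in range(n_ranks)]   # (load, rank) min-heap
    heapq.heapify(heap)
    owner = {}
    for box, cost in sorted(box_costs.items(), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(heap)        # least-loaded rank
        owner[box] = rank
        heapq.heappush(heap, (load + cost, rank))
    return owner

# Boxes holding particle clusters (many local steps) cost more than dilute
# ones; the greedy pass separates the two expensive boxes across ranks.
costs = {"dense_a": 40, "dense_b": 38, "dilute_c": 5, "dilute_d": 4}
owners = knapsack_balance(costs, n_ranks=2)
print(owners)
```

With the sample costs above, each rank ends up with one dense and one dilute box, so the per-rank loads differ by only one cost unit.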
The steps in the time advancement for the traditional explicit scheme (with a global minimum particle time-step) and the explicit ORB scheme are shown in Algorithms 1 and 2, respectively. The detailed steps of the fluid update, in which the incompressible Navier-Stokes equations are solved, have been skipped for brevity; readers are referred to the detailed algorithms and numerical methods presented by Syamlal et al. [11]. The traditional explicit method performs sub-iterations for the particle update over boxes with a global minimum time-step, while a local time-step for each box is used in the explicit ORB scheme. It should be noted that a global subcycling is also performed in the explicit ORB scheme, based on a user-defined sub-iteration count (nsubit in Algorithm 2). This is done so as to fine-grain the update of ghost particles and their redistribution among boxes, thereby reducing errors due to lag in communication. The number of sub-iterations is on the order of 5-40 for stable particle time integration; its sensitivity to errors in the solution is studied in section 3 for different particle distribution scenarios. The local time-step for a particle is assigned as the collisional time scale scaled by a Courant number of 0.02 (dt = τ_p/50), when there are potential collision partners within a distance of 3 particle radii. The sub time-step ∆t_sp (see Algorithm 2), obtained from the user-defined sub-iteration count, is used as the local time-step for particles that have no collision partners.

Algorithm 1 Traditional explicit scheme
  Integrate fluid equations for time ∆t_f
  Update particle drag forces
  ∆t_p = global minimum particle time-step
  t_p = 0.0
  while t_p < ∆t_f do
    nboxes = number of processor-owned boxes
    update particle neighbor list
    for box i in nboxes do
      find collision forces for particles within box i
      use velocity-Verlet time-step update for time ∆t_p
    end for
    t_p = t_p + ∆t_p
    redistribute ghost particles
  end while
  Deposit particle data on grid
  Regrid the computational domain using ORB
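The bimodal time-step assignment just described can be sketched as follows (brute-force neighbor search for illustration only; `assign_local_dt` is a hypothetical helper, with parameter values taken from Table 1):

```python
import math

def assign_local_dt(positions, radius, m, k, dt_sp):
    """Bimodal local time-step assignment: a particle with a potential
    collision partner within 3 radii resolves the collisional time scale
    tau_p = sqrt(m/k) with a Courant number of 0.02 (i.e. tau_p/50);
    otherwise it advances with the sub time-step dt_sp."""
    tau_p = math.sqrt(m / k)
    dts = []
    for i, pi in enumerate(positions):
        near = any(i != j and math.dist(pi, pj) < 3.0 * radius
                   for j, pj in enumerate(positions))
        dts.append(tau_p / 50.0 if near else dt_sp)
    return dts

# Table 1 values: d_p = 100 um, rho_p = 1000 kg/m^3, k = 10 N/m.
d_p = 100e-6
m = 1000.0 * math.pi * d_p ** 3 / 6.0
pos = [(0.0, 0.0), (1e-4, 0.0), (5e-3, 0.0)]   # first two nearly touching
dts = assign_local_dt(pos, radius=d_p / 2, m=m, k=10.0, dt_sp=1e-5)
print(dts)  # the close pair gets tau_p/50 ~ 1.4e-7 s, the loner gets 1e-5 s
```

The two orders of magnitude between the collisional and sub time-steps in this toy case is exactly the gap the explicit ORB scheme exploits in dilute regions.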
The boxes are redefined (regridded) using ORB at the end of the fluid and particle update so as to capture new particle clusters in the subsequent time-step. The use of a local time-step for the processor-owned boxes introduces load imbalance in terms of the number of time-steps performed per processor; boxes with particle clusters perform more time-steps than dilute regions. Therefore, the "knapsack" algorithm is used to rebalance loads among processors after ORB partitioning, based on the local time-stepping costs associated with each box.

Results and discussion

The computational performance of the explicit ORB scheme is compared with the traditional explicit method in this section for various test cases in both serial and parallel execution of the algorithm. All the cases studied in this section were run on Intel Haswell 2.3 GHz processors [22] with 24 cores per node. The sensitivity of the solution to the number of sub-iterations (nsubit in Algorithm 2) in the particle update for the explicit ORB scheme is studied in detail, and an optimal number for different scenarios is inferred.

Serial cases

Homogeneous cooling system (HCS)

This system consists of a cartesian domain with periodic boundaries containing particles initialized with a Gaussian distribution of velocities and random position vectors. The total energy of the particles in this system decreases over time due to collisional and gas-solid drag losses. This idealized case has an analytic solution for the average energy decay, and such a system tends to arise in regions of gas-solid flows where there is an isotropic distribution of particles. Fig. 2(a), (b) and (c) show snapshots of particles colored by their kinetic energy at 0, 15 and 30 ms for a typical HCS simulation performed in a cubical domain of size 4 mm with 1222 particles, indicating the decay of energy over time. Here, T is normalized by the initial temperature T_0, and time is normalized by d_p/sqrt(T_0), where d_p is the particle diameter.
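The analytic decay referred to above is Haff's law, whose standard algebraic form can be checked numerically (a minimal sketch; the cooling time tau is treated here as a free parameter rather than derived from the restitution coefficient and material properties):

```python
import math

def haff_temperature(t, T0, tau):
    """Haff's law for a homogeneous cooling system: the granular
    temperature decays algebraically as T(t) = T0 / (1 + t/tau)^2."""
    return T0 / (1.0 + t / tau) ** 2

# The decay is monotone; the temperature halves at t = (sqrt(2) - 1) * tau.
T0, tau = 1.0, 1.0
t_half = (math.sqrt(2.0) - 1.0) * tau
print(haff_temperature(t_half, T0, tau))  # ~ 0.5
```

The simulated curves in Fig. 2(d) follow this algebraic decay, with the slight deviations discussed below.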
The mesh consists of 8000 cells, with 20 cells along each coordinate direction. The material parameters for the gas and solid phases are described in Table 1. Fig. 2(d) shows the comparison between the computed and analytic solutions for the decay of temperature (T = (1/3)<v^2>) as predicted by Haff's law [24,25], indicating reasonable agreement. Slight deviations are observed due to the idealized assumptions (such as a constant collision cross-section) used in the analytic derivation of Haff's law.

Fig. 3 compares the non-dimensional temperature decay predicted by the traditional explicit and the explicit ORB schemes for varying particle and sub-iteration counts. The traditional explicit solution is taken as the reference, and an average temperature error is obtained for the explicit ORB method for varying sub-iteration counts. The error is further normalized by the time-averaged temperature so as to quantify the fractional deviation in the solution. Fig. 3(a), (c) and (e) show temperature decay solutions for 100 to 1222 particles in the domain. 4 ORB levels (16 leaves) are used for the lower particle-count cases (100 and 200 particles), while 6 levels (64 leaves) are used for the simulation with 1222 particles. Large deviations from the reference solution are observed for lower numbers of sub-iterations in the explicit ORB scheme: the larger the sub time-step, the greater the chance of missing collisions, and hence the larger the error. Fig. 3(b), (d) and (f) show the variation of the speed-up factor and the error as a function of the number of sub-iterations. As the number of sub-iterations is increased, the explicit ORB method tends towards the traditional explicit scheme and its performance advantage is reduced, while lower numbers of sub-iterations result in greater error in the solution.
Nonetheless, a nominal speed-up factor of 2X is obtained with 10-15 sub-iterations for all three cases, with errors in the range of 1-5% with respect to the traditional explicit method.

Particle settling problem

The settling of particles due to gravity in a cartesian domain is studied in this section. The particle distribution here transitions from being uniformly distributed to a settled bed that is densely packed, thereby testing the performance and accuracy of the explicit ORB scheme with time-varying particle densities. Fig. 4 illustrates the physics of the settling simulation through time snapshots and particle-averaged parameters. This simulation consists of 300 particles in a cubical box 1.5 mm in size. The mesh consists of 1000 cells, with 10 cells along each coordinate direction. The material properties used in this simulation are the same as in the HCS case, as described in Table 1. Gravity is assumed to be along the y coordinate direction (top to bottom). The particles, which are uniformly distributed initially, settle and lose their gravitational potential energy through drag, inter-particle and wall collisions. Fig. 4(a) to (c) show the particle distribution colored by particle speeds from 15 to 60 ms; the particle kinetic energy is observed to dissipate completely by the end of 60 ms. Fig. 4(d) shows the average particle speed and position along the y direction; the particle speeds are observed to rise initially during free fall and asymptote to 0 after longer time periods.

Fig. 5 shows the effect of the sub-iteration count in the explicit ORB scheme on the solution accuracy. Fig. 5(a) and (c) show the average particle speed as a function of time for simulations with 100 and 300 particles, respectively, for varying sub-iteration counts. These simulations are performed with 4 ORB levels (16 leaves). A similar trend to the HCS test case is observed: a larger number of sub-iterations reduces the error in the solution, as shown in Fig.
5(b) and (d), respectively. The error in the explicit ORB solution (Fig. 5(b) and (d)) is computed based on the variation of the average particle speed with respect to the reference traditional explicit solution. The speed-up factor achieved with the explicit ORB scheme is lower in this case due to the highly collisional distribution of particles: no speed-up is observed using the explicit ORB scheme while maintaining solution errors within 1-5%. The explicit ORB scheme thus performs better when there are dilute regions in the domain. The settling of particles in this problem rapidly transitions the system into a dense distribution, thereby enforcing small particle time-steps in all boxes. The traditional explicit scheme is recovered for dense distributions, where the global small time-step is the same as the local time-step.

Parallel cases

Riser flow

This test case is representative of industrial systems such as circulating fluidized bed (CFB) riser reactors [26] that are applied to catalytic cracking and combustion. The simulation consists of a rectangular column with a 6 mm x 6 mm cross-section and a height of 1 cm, with 14,000 particles seeded uniformly in the domain and periodic boundary conditions at all faces. A constant pressure gradient of 2 Pa along the axial direction is imposed. Simulations are performed on 64 MPI ranks with 7 ORB levels (maximum of 128 leaves) on a mesh of 51,200 cells (32 x 50 x 32 grid). The material properties from Table 1, along with the Koch-Hill drag model [27], are used in these simulations. Fig. 6 shows snapshots of the particle distribution and fluid pressure; the favorable bottom-to-top pressure gradient facilitates movement of particles predominantly along the axis. The particles are recycled back into the domain via the periodic boundaries and are steadily accelerated by the constant pressure gradient. The average particle speed along the axial direction as a function of time is shown in Fig.
7(a) for the reference traditional explicit time-stepping as well as the explicit ORB scheme with varying numbers of sub-iterations. The average speeds are observed to approach a constant slope with respect to time, indicating a steady-state acceleration. There is reasonable agreement among the explicit ORB cases, with higher error for lower sub-iteration counts. Fig. 7(b) shows the pressure difference between the top and bottom of the column as a function of time for this case; this parameter approaches a fluctuating steady-state within 0.15 seconds, and the explicit ORB scheme is able to achieve the averaged steady-state value for all sub-iteration counts. Fig. 7(c) shows the average particle position along the axial direction as a function of time. A fluctuating steady-state about the middle of the column (0.005 m from the bottom) is attained for all the cases, with reasonable agreement. Fig. 7(d) shows the sensitivity of the solution accuracy to the sub-iteration count and the speed-up obtained using the explicit ORB method. The steady-state acceleration is chosen as the parameter of interest for computing errors with respect to the reference traditional explicit method. The errors are within 1-2% for about 35 sub-iterations, along with a 2X speed-up using the explicit ORB scheme.

Fluidized bed

This case consists of a rectangular column with a 1.6 mm x 1.6 mm cross-section and a height of 1 cm, with 10,000 particles seeded initially close to the bottom of the column, on a mesh of 51,200 cells (32 x 50 x 32 grid). A constant-velocity inflow boundary condition is applied to the bottom face with a fixed normal gas velocity of 1.5 cm/s; the lateral surfaces are assumed to be no-slip walls, and a pressure outflow boundary condition is used at the top. This simulation uses the same material properties as the riser case (section 5.1) and is performed using 16 MPI ranks with 6 ORB levels (64 leaves). Fig. 8 shows snapshots of the particle distribution in the domain at four time points between 0 and 400 ms, indicating the top-to-bottom mixing typically seen in fluidized bed reactors. Fig.
9 quantifies the accuracy and performance of the explicit ORB scheme for varying sub-iteration counts. The fluidized bed is a chaotic system with dense particle distributions; a fluctuating steady-state for the particle speeds is achieved in about 100 ms (Fig. 9(a)), with intermittent peaks in the average particle speed over time. The explicit ORB solutions show comparable trends with respect to the traditional explicit scheme and approach similar averaged particle speeds at steady-state. Fig. 9(b) shows the variation of the pressure difference between the top and bottom of the column; a fluctuating steady-state is attained within 50 ms. All of the explicit ORB cases exhibit the same trend as the reference explicit solution, with nominally similar average pressure drops at steady-state. The variation of the average axial particle position with time is shown in Fig. 9(c), indicating agreement of the explicit ORB cases with the reference solution. Fig. 9(d) shows the variation of the speed-up and the error in the steady-state time-averaged particle speed for varying sub-iteration counts in the explicit ORB scheme. A speed-up of 1.1-1.3X is seen for 12-20 sub-iterations, with solution errors on the order of 12-14%. The number of sub-iterations needs to be around 50 to bound the errors within 10%, for which no speed-up is observed. This case reinforces the need for dilute regions in the problem for optimal performance of the explicit ORB scheme. The dense particle distributions in this problem require the global resolution of collisional time scales among all particles, thereby negating any performance improvement from the explicit ORB scheme.

Conclusions and future work

An adaptive timestepping method is developed to speed up CFD-DEM simulations using an orthogonal-recursive-bisection-based approach.
This algorithm was implemented in a parallel CFD-DEM solver and its performance was compared against a baseline explicit scheme with a global time-step. Four different test cases with dilute and dense particle distributions were studied to quantify the efficacy of the method. The dilute cases (homogeneous cooling system and riser flow) showed a 2-3X speed-up relative to the baseline explicit scheme while maintaining minimal solution errors. The dense cases (settling and fluidized bed), on the other hand, showed minimal speed-up when maintaining low solution errors. The current method provides a means of setting the time-step appropriately on a particle-cluster basis, but caution should be used away from collisions to ensure that accuracy requirements (e.g., based on path-line integration considerations) are still met. Other bisection strategies, such as k-means clustering, that respect clustering and probable collision partners will be studied in the context of relatively denser distributions. A relative-distance-based time-step estimate, instead of the bimodal distribution used in this work, will be considered to prevent collision misses in dense systems.

Figure 1: (a) Generic distribution of particles with clustering observed in CFD-DEM simulations and (b) partitioning of the particle clusters using orthogonal recursive bisection (ORB).

Figure 2: Snapshots of particle specific kinetic energy (k = (1/2)|v|^2) in m^2/s^2 for the HCS simulation at (a) 0, (b) 15 and (c) 30 ms, respectively. (d) shows the simulated and analytic solutions for the particle-averaged temperature (T = (1/3)<v^2>) decay as a function of time.

Figure 3: Computed solution of temperature decay using the explicit ORB method with varying numbers of sub-iterations for (a) 100, (c) 200 and (e) 1222 particles, respectively.
Figures 3(b), (d) and (f) show the variation of the error in non-dimensional temperature between the explicit ORB and the traditional explicit method, along with the speed-up achieved, for varying sub-iteration counts.

Figure 4: Snapshots of particle speed (|v|) in m/s for the particle settling simulation at (a) 15, (b) 30 and (c) 60 ms, respectively. (d) shows the particle-averaged position along the direction of gravity (y) and speed vs. time.

Figure 5: Computed solution of average particle speed using the traditional explicit and explicit ORB methods with varying numbers of sub-iterations for (a) 100 and (c) 300 particles, respectively. (b) and (d) show the variation of the error in particle-averaged speed between the explicit ORB and the traditional explicit method, along with the speed-up achieved, for varying sub-iteration counts.

Figure 6: Snapshots of the particle distribution colored by speed in m/s for the riser flow simulation at (a) 300 ms and (b) 400 ms, respectively. (c) shows the distribution of pressure (Pa) in the domain at t = 400 ms.

Figure 7: (a) Particle-averaged speed along the axial direction, (b) pressure difference between riser top and bottom and (c) particle-averaged position along the axis as a function of time and varying sub-iteration counts. (d) plots the variation of the speed-up and the error in particle-averaged acceleration with respect to the traditional explicit method for varying numbers of sub-iterations in the explicit ORB scheme.

Figure 8: Snapshots of the particle distribution colored by speed in m/s for the fluidized bed simulation at (a) 0, (b) 100, (c) 200 and (d) 400 ms, respectively.
Figure 9: (a) Particle-averaged speed, (b) top-to-bottom pressure difference and (c) particle-averaged axial position in the fluidized bed as a function of time for the traditional explicit and the explicit ORB scheme with varying sub-iteration counts. (d) is the variation of the speed-up and the error in the steady-state particle-averaged speed solution for varying numbers of sub-iterations.
Algorithm 2 Explicit ORB scheme
  nsubit = number of sub-iterations
  Integrate fluid equations for time ∆t_f
  Update particle drag forces
  nboxes = number of processor-owned boxes
  ∆t_sp = ∆t_f / nsubit
  initialize array "ltstep" of size nboxes
  for it in nsubit do
    update particle neighbor list
    for box i in nboxes do
      ltstep[i] = minimum particle time-step within box i
    end for
    for box i in nboxes do
      t_p = 0.0
      while t_p < ∆t_sp do
        find collision forces for particles within box i
        use velocity-Verlet time-step update for time ltstep[i]
        t_p = t_p + ltstep[i]
      end while
    end for
    redistribute ghost particles
  end for
  Deposit particle data on grid
  Regrid the computational domain using ORB

Table 1: Simulation parameters for the homogeneous cooling system case.

  Simulation parameter               Value
  Particle diameter (d_p)            100 µm
  Particle density (ρ_p)             1000 kg/m^3
  Gas density (ρ_g)                  1 kg/m^3
  Gas viscosity (µ_g)                2 × 10^-5 Pa s
  Collisional spring constant (k)    10 N/m
  Restitution coefficient (e)        0.8
  Fluid time-step (∆t_f)             0.0001 s
  Drag model                         BVK model [23]

Acknowledgments

This research was supported by the Department of Energy for the project titled "MFIX-DEM enhancement for industry relevant flows". The authors acknowledge helpful discussions with their collaborators at Colorado University, Boulder (Christine Hrenya, Thomas Hauser, Peiyuan Liu, Aaron Holt and Dane Skow), Ann Almgren from Lawrence Berkeley National Laboratory, Jordan Musser from the National Energy Technology Laboratory and Deepthi Vaidhynathan from the National Renewable Energy Laboratory. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S.
Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes.

References

[1] G. G. Joseph, J. Leboreiro, C. M. Hrenya, A. R. Stevens, Experimental segregation profiles in bubbling gas-fluidized beds, AIChE Journal 53 (11) (2007) 2804-2813.
[2] Y. Tsuji, T. Kawaguchi, T. Tanaka, Discrete particle simulation of two-dimensional fluidized bed, Powder Technology 77 (1) (1993) 79-87.
[3] X. Lan, C. Xu, G. Wang, L. Wu, J. Gao, CFD modeling of gas-solid flow and cracking reaction in two-stage riser FCC reactors, Chemical Engineering Science 64 (17) (2009) 3847-3858.
[4] A. Dutta, D. Constales, G. J. Heynderickx, Applying the direct quadrature method of moments to improve multiphase FCC riser reactor simulation, Chemical Engineering Science 83 (2012) 93-109.
[5] R. Kolakaluri, S. Subramaniam, M. V. Panchagnula, Trends in multiphase modeling and simulation of sprays, International Journal of Spray and Combustion Dynamics 6 (4) (2014) 317-356.
[6] J. Capecelatro, P. Pepiot, O. Desjardins, Numerical investigation and modeling of reacting gas-solid flows in the presence of clusters, Chemical Engineering Science 122 (2015) 403-415.
[7] Y. Tsuji, T. Tanaka, S. Yonemura, Cluster patterns in circulating fluidized beds predicted by numerical simulation (discrete particle model versus two-fluid model), Powder Technology 95 (3) (1998) 254-264.
[8] H. Kruggel-Emden, M. Sturm, S. Wirtz, V. Scherer, Selection of an appropriate time integration scheme for the discrete element method (DEM), Computers & Chemical Engineering 32 (10) (2008) 2263-2279.
[9] E. Rougier, A. Munjiza, N. John, Numerical comparison of some explicit time integration schemes used in DEM, FEM/DEM and molecular dynamics, International Journal for Numerical Methods in Engineering 61 (6) (2004) 856-879.
[10] K. Samiei, Assessment of implicit and explicit algorithms in numerical simulation of granular matter, Ph.D. thesis, University of Luxembourg, Luxembourg, Luxembourg (2012).
[11] M. Syamlal, W. Rogers, T. J. O'Brien, MFIX documentation: Theory guide, National Energy Technology Laboratory, Department of Energy, Technical Note DOE/METC-95/1013 and NTIS/DE95000031.
[12] A. S. Almgren, V. E. Beckner, J. B. Bell, M. S. Day, L. H. Howell, C. C. Joggerst, M. J. Lijewski, A. Nonaka, M. Singer, M. Zingale, CASTRO: A new compressible astrophysical solver. I. Hydrodynamics and self-gravity, Astrophysical Journal 715 (2010) 1221-1238. arXiv:1005.0114, doi:10.1088/0004-637X/715/2/1221.
[13] R. Garg, J. Galvin, T. Li, S. Pannala, Documentation of open-source MFIX-DEM software for gas-solids flows, from URL https://mfix.netl.doe.gov/documentation/dem_doc_2012-1.pdf (accessed: 31 March 2014).
[14] L. Verlet, Computer "experiments" on classical fluids. I. Thermodynamical properties of Lennard-Jones molecules, Physical Review 159 (1) (1967) 98.
[15] C. O'Sullivan, J. D. Bray, Selecting a suitable time step for discrete element simulations that use the central difference time integration scheme, Engineering Computations 21 (2/3/4) (2004) 278-303.
[16] M. J. Berger, S. H. Bokhari, A partitioning strategy for nonuniform problems on multiprocessors, IEEE Transactions on Computers (5) (1987) 570-580.
[17] S. Popov, J. Günther, H.-P. Seidel, P. Slusallek, Stackless kd-tree traversal for high performance GPU ray tracing, in: Computer Graphics Forum, Vol. 26, Wiley Online Library, 2007, pp. 415-424.
[18] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, A. Y. Wu, An efficient k-means clustering algorithm: Analysis and implementation, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7) (2002) 881-892.
[19] R. L. Graham, P. Hell, On the history of the minimum spanning tree problem, Annals of the History of Computing 7 (1) (1985) 43-57.
[20] G. Peano, Sur une courbe, qui remplit toute une aire plane, Mathematische Annalen 36 (1) (1890) 157-160.
[21] C. A. Rendleman, V. E. Beckner, M. Lijewski, W. Crutchfield, J. B. Bell, Parallelization of structured, hierarchical adaptive mesh refinement algorithms, Computing and Visualization in Science 3 (3) (2000) 147-157.
[22] Products formerly Haswell, http://ark.intel.com/products/codename/42174/Haswell.
[23] R. Beetstra, M. A. van der Hoef, J. Kuipers, Drag force of intermediate Reynolds number flow past mono- and bidisperse arrays of spheres, AIChE Journal 53 (2) (2007) 489-501.
[24] P. Haff, Grain flow as a fluid-mechanical phenomenon, Journal of Fluid Mechanics 134 (1983) 401-430.
[25] W. Fullmer, C. Hrenya, The homogeneous cooling state as a verification test for kinetic-theory-based continuum models of gas-solid flows, Journal of Verification, Validation and Uncertainty Quantification.
[26] H. Zhang, W.-X. Huang, J.-X. Zhu, Gas-solids flow behavior: CFB riser vs. downer, AIChE Journal 47 (9) (2001) 2000-2011.
[27] R. J. Hill, D. L. Koch, A. J. Ladd, The first effects of fluid inertia on flows in ordered and random arrays of spheres, Journal of Fluid Mechanics 448 (2001) 213-241.
ls1 mardyn: The massively parallel molecular dynamics code for large systems

Christoph Niethammer, Stefan Becker, Martin Bernreuther, Martin Buchholz, Wolfgang Eckhardt, Alexander Heinecke, Stephan Werth, Hans-Joachim Bungartz, Colin W. Glass, Hans Hasse, Jadran Vrabec, Martin Horsch ([email protected])

High Performance Computing Center Stuttgart (HLRS), Nobelstr. 19, 70569 Stuttgart, Germany
Laboratory of Engineering Thermodynamics (LTD), University of Kaiserslautern, Erwin-Schrödinger-Str. 44, 67663 Kaiserslautern, Germany
Chair for Scientific Computing in Computer Science (SCCS), TU München, Boltzmannstr. 3, 85748 Garching, Germany
Thermodynamics and Energy Technology (ThEt), University of Paderborn, Warburger Str. 100, 33098 Paderborn, Germany

Abstract: The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures, and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales which were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multi-center rigid potential models based on Lennard-Jones sites, point charges and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, e.g. for fluids at interfaces, as well as non-equilibrium molecular dynamics simulation of heat and mass transfer.

arXiv:1408.4599 (20 Aug 2014), doi:10.1021/ct500169q
Introduction

The molecular dynamics (MD) simulation code ls1 mardyn (large systems 1: molecular dynamics) is presented here. The ls1 mardyn program is an interdisciplinary endeavor, whose contributors have backgrounds in engineering, computer science and physics, aiming at studying challenging scenarios with up to trillions of molecules. In the considered systems, the spatial distribution of the molecules may be heterogeneous and subject to rapid, unpredictable change. This is reflected by the algorithms and data structures as well as a highly modular software engineering approach. The source code of ls1 mardyn is made publicly available as free software under a two-clause BSD license. 1

Molecular modelling and simulation has become a powerful computational method 2,3 and is applied to a wide variety of areas such as thermodynamic properties of fluids, 4 phase equilibria, 5,6 interfacial properties, 7 phase transitions, 8,9 transport coefficients, 10 adsorption, 11,12 mechanical properties of solids, 13 flow phenomena, 14,15 polymer properties, 16 protein folding, 17,18 or self-assembly. 19 The sound physical basis of the approach makes it extremely versatile.
For a given force field, the phase space can be explored by molecular dynamics simulation under a variety of boundary conditions, which allows gathering information on all thermodynamic states and processes on the molecular level. If required, external forces (e.g. an electric field) can be imposed in addition to the intermolecular interactions. MD simulation has an extremely high temporal and spatial resolution of the order of 10^-15 seconds and 10^-11 meters, respectively. This resolution is useful for studying physical phenomena at small length scales, such as the structure of fluid interfaces. With a time discretization on the femtosecond scale, rapid processes are immediately accessible, while slower processes may require particularly devised sampling techniques such as metadynamics. 20 The number of molecules is also a challenge for molecular simulation. While systems of practical interest contain extremely large numbers of molecules, e.g. of the order of 10^23, the largest ensembles that can be handled today are of the order of 10^12 molecules. 21 This limitation is usually addressed by focusing on representative subvolumes, containing a limited number of molecules, to which an appropriate set of boundary conditions is applied. Depending on the type of information that is determined, e.g. transport properties 22 or phase equilibria 5,6 of bulk phases, a number of molecules of the order of 1 000 may be sufficient. However, non-equilibrium scenarios such as condensation 23,24 or mass transfer through nanoporous membranes 15,25 require much larger simulation volumes. There are so many scalable MD codes available that a comprehensive discussion would be beyond the scope of the present paper. For the development of MD codes, as for any software, there are trade-offs between generality and optimization for a single purpose, which no particular implementation can completely evade.
Several popular MD simulation environments are tailored for studying biologically relevant systems, with typical application scenarios including conformational sampling of macromolecules in aqueous solution. The relaxation processes of such systems are often several orders of magnitude slower than for simple fluids, requiring an emphasis on sampling techniques and long simulation times, but not necessarily on large systems. The AMBER package, 26 for instance, scales well for systems containing up to 400 000 molecules, facilitating MD simulations that reach the microsecond time scale. 27 Similarly, GROMACS 28,29 and NAMD, 30 which also have a focus on biosystems, have been shown to perform quite efficiently on modern HPC architectures as well. Tinker was optimized for biosystems with polarizable force fields, 31 whereas CHARMM, 32 which was co-developed by Nobel prize winner Martin Karplus, is suitable for coupling classical MD simulation of macromolecules with quantum mechanics. 33 The LAMMPS program 34-37 as well as DL_POLY, 38 which scales well for homogeneous fluid systems with up to tens of millions of molecules, and ESPResSo, 39 which emphasizes its versatility and covers both molecular and mesoscopic simulation approaches, are highly performant codes which aim at a high degree of generality, including many classes of pair potentials and methods. The ms2 program performs well for the simulation of vapor-liquid equilibria and other thermodynamic properties, 4 but is limited to relatively small numbers of molecules. The IMD code, 40,41 which has twice before held the MD simulation world record in terms of system size, has a focus on multi-body potentials for solids. With ls1 mardyn, which is presented here, a novel MD code is made available to the public. It is more specialized than most of the molecular simulation programs mentioned above.
In particular, it is restricted to rigid molecules, and only constant-volume ensembles are supported, so that the pressure cannot be specified in advance. Electrostatic long-range interactions, beyond the cutoff radius, are considered by the reaction field method, 42 which cannot be applied to systems containing ions. However, ls1 mardyn is highly performant and scalable. Holding the present world record in simulated system size, 21 it is furthermore characterized by a modular structure, facilitating a high degree of flexibility within a single code base. Thus, ls1 mardyn is not only a simulation engine, but also a framework for developing and evaluating simulation algorithms, e.g. different thermostats or parallelization schemes. Therefore, its software structure supports alternative implementations for methods in most parts of the program, including core parts such as the numerical integration of the equations of motion. The C++ programming language was used, including low-level optimizations for particular HPC systems. In this way, ls1 mardyn has been proven to run efficiently on a variety of architectures, from ordinary workstations to massively parallel supercomputers. In a fluid system, neighborhood relations between molecules are always subject to rapid change. Thus, the neighbor molecules have to be redetermined throughout the simulation. For this purpose, ls1 mardyn employs a linked-cell data structure, [43][44][45] which is efficiently parallelized by spatial domain decomposition. 46,47 Thereby, the simulation volume is divided into subvolumes that are assigned to different processes. Interactions with molecules in adjacent subvolumes are explicitly accounted for by synchronized halo regions.
Using ls1 mardyn, a wide range of simulation scenarios can be addressed, and pre-release versions of ls1 mardyn have already been successfully applied to a variety of topics from chemical and process engineering: Nucleation in supersaturated vapors 24,[48][49][50] was considered with a particular focus on systems with a large number of particles. 23,51,52 On the SuperMUC, over four trillion molecules were simulated. 21 The vapor-liquid surface tension and its dependence on size and curvature were characterized. 49,[53][54][55][56] The ls1 mardyn program was furthermore employed to investigate fluid flow through nanoporous membrane materials 57 and adsorption phenomena such as the fluid-solid contact angle in dependence on the fluid-wall interaction. 12,58 Scenario generators for ls1 mardyn are available both internally, i.e. without hard disk input/output, and as external executables. The internal generators create the initial configuration directly in memory, which is distributed among the processes, facilitating a better scalability for massively parallel execution. A generalized output plugin interface can be used to extract any kind of information during the simulation and to visualize the simulation trajectory with MegaMol 59,60 and other compatible tools. This paper is organized as follows: Section 2 describes molecular models which are available in ls1 mardyn. Section 3 introduces the underlying computational methods. The implemented load balancing approach is discussed in detail in Section 4. A performance analysis of ls1 mardyn is presented in Section 5, including results obtained on two of the fastest HPC systems.

Interaction models in ls1 mardyn

Molecular motion has two different aspects: external degrees of freedom, corresponding to the translation and rotation with respect to the molecular center of mass, as well as internal degrees of freedom that describe the conformation of the molecule.
In ls1 mardyn, molecules are modeled as rigid rotators, disregarding internal degrees of freedom and employing effective pair potentials for the intermolecular interaction. This modeling approach is suitable for all small molecules which do not exhibit significant conformational transitions. An extension of the code to internal degrees of freedom is the subject of a presently ongoing development, which is not discussed here. The microcanonical (NVE), canonical (NVT) and grand-canonical (µVT) ensembles are supported, whereby the temperature is (for NVT and µVT) kept constant by velocity rescaling. The Lennard-Jones (LJ) potential

    U_LJ(r) = 4ε [ (σ/r)^12 − (σ/r)^6 ],    (1)

with the size parameter σ and the energy parameter ε, is used to account for repulsive and dispersive interactions. It can also be employed in a truncated and shifted (LJTS) version. 2 LJ potential parameters for the unlike interaction, i.e. the pair potential acting between molecules of different species, are determined by the Lorentz and Berthelot combination rules, 61-63 which can be further adjusted by binary interaction parameters. [64][65][66] Point charges and higher-order point polarities up to second order (i.e. dipoles and quadrupoles) are implemented to model electrostatic interactions in terms of a multipole expansion. 67,68 This allows an efficient computational handling while sufficient accuracy is maintained for the full range of thermophysical properties. 69 The Tersoff potential 70 can be used within ls1 mardyn in order to accurately describe a variety of solid materials. 71,72 As a multi-body potential, it is computationally more expensive than electrostatics and the LJ potential. For an example, see Table 1.
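Eq. (1) and the Lorentz-Berthelot combination rules translate directly into code. A minimal sketch (the optional factors eta and xi stand in for the binary interaction parameters mentioned in the text; function names are illustrative, not the ls1 mardyn API):

```python
def lj_potential(r, sigma, epsilon):
    """Lennard-Jones pair potential, Eq. (1)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lorentz_berthelot(sigma_a, eps_a, sigma_b, eps_b, eta=1.0, xi=1.0):
    """Unlike-pair parameters: arithmetic mean for sigma, geometric mean
    for epsilon; eta and xi default to the plain Lorentz-Berthelot rules."""
    sigma_ab = eta * 0.5 * (sigma_a + sigma_b)
    eps_ab = xi * (eps_a * eps_b) ** 0.5
    return sigma_ab, eps_ab
```

As a quick sanity check, the potential crosses zero at r = σ and attains its minimum value −ε at r = 2^(1/6) σ.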
Table 1: Internal units of ls1 mardyn

    Boltzmann constant      k_B = 1
    Coulomb constant        k_C = (4πε_0)^-1 = 1
    Unit length             l_0 = 1 a_H (Bohr's radius) = 5.29177 × 10^-11 m
    Elementary charge       q_0 = 1 e = 9.64854 × 10^9 C/mol
    Unit mass               m_0 = 1 000 u = 1 kg/mol
    Unit density            ρ_0 = 1/l_0^3 = 11 205.9 mol/l
    Unit energy             E_0 = k_C q_0^2 / l_0 = 4.35946 × 10^-18 J
    Unit temperature        T_0 = E_0 / k_B = 315 775 K
    Unit pressure           p_0 = ρ_0 E_0 = 2.94211 × 10^13 Pa
    Unit time               t_0 = l_0 (m_0 / E_0)^(1/2) = 3.26585 × 10^-14 s
    Unit velocity           v_0 = l_0 / t_0 = 1620.35 m/s
    Unit acceleration       a_0 = l_0 / t_0^2 = 4.96148 × 10^16 m/s^2
    Unit dipole moment      D_0 = l_0 q_0 = 2.54176 D
    Unit quadrupole moment  Q_0 = l_0^2 q_0 = 1.34505 DÅ

Data structures and numerical integration

The computational core of every MD program is the calculation of forces and torques acting on the molecules, which are based on molecular models for the physical interactions. The choice of a suitable model depends on many factors, such as the material to be simulated, the physical effects studied and the desired accuracy. Different models may require substantially different algorithms for the numerical solution of Newton's equations of motion. This can even necessitate major changes in the software structure, e.g. when long-range interactions have to be considered explicitly in addition to short-range interactions, or when models with internal degrees of freedom are used instead of rigid molecular models. In the present version of ls1 mardyn, only short-range interactions up to a specified cut-off radius are explicitly computed. The long-range contribution to the pressure and the energy is approximated by isotropic cut-off corrections, i.e. by a mean-field integral for the dispersive interaction, which is supplemented by the reaction field method 42 for dipolar molecules. Calculating short-range interactions in dynamic systems requires an efficient algorithm for finding neighbors. For this purpose, ls1 mardyn employs an adaptive linked-cell algorithm.
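The linked-cell idea can be illustrated with a short sketch: particles are binned into cubic cells with an edge length of at least the cut-off radius, so that interaction partners of a particle only need to be searched for in its own cell and the 26 surrounding cells. The dictionary-based cell storage and function names below are illustrative only, not the ls1 mardyn data structure:

```python
from math import floor

def build_cells(points, box, rc):
    """Bin 3D points in a cubic periodic box (edge `box`) into cells of
    edge length >= rc; returns {cell_index: [particle ids]} and the number
    of cells per dimension."""
    n = max(1, int(floor(box / rc)))   # cells per dimension
    cell = box / n
    cells = {}
    for idx, (x, y, z) in enumerate(points):
        key = (int(x / cell) % n, int(y / cell) % n, int(z / cell) % n)
        cells.setdefault(key, []).append(idx)
    return cells, n

def neighbor_candidates(cells, n, key):
    """Particle ids in the cell `key` and its 26 periodic neighbor cells."""
    out = []
    cx, cy, cz = key
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(cells.get(((cx + dx) % n,
                                      (cy + dy) % n,
                                      (cz + dz) % n), []))
    return out
```

The payoff is that the neighbor search cost per particle becomes independent of the total particle count, provided the density is bounded.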
The basic linked-cell algorithm 73 divides the simulation volume into a grid of equally sized cubic cells. The translational motion is propagated with the leapfrog scheme, with velocities stored at half time steps:

    ṙ_i(t + δt/2) = ṙ_i(t − δt/2) + δt r̈_i(t),    (2)
    r_i(t + δt) = r_i(t) + δt ṙ_i(t + δt/2).    (3)

For molecules which are not rotationally symmetric, the equations for the angular momentum j and the orientation q (with q being a quaternion) 2 are applied as well. In analogy to Eqs. (2) and (3) for the translational motion, the rotational motion is described by

    j_i(t + δt/2) = j_i(t − δt/2) + δt τ_i(t),    (4)
    q_i(t + δt) = q_i(t) + δt q̇_i(t + δt/2),    (5)

where τ_i is the torque divided by the rotational moment of inertia.

Parallelization and load balancing

4.1 Load balancing based on domain decomposition

A parallelization scheme using spatial domain decomposition divides the simulation volume into a finite number of subvolumes, which are distributed to the available processing units. Usually, the number of subvolumes and the number of processing units are equal. This method scales linearly with the number of molecules and is therefore much better suited for large systems than other methods such as force or atom decomposition. 34,46,77 For heterogeneous scenarios, it is not guaranteed that all processes carry a similar share of the total workload. In simulation scenarios containing coexisting liquid and vapor phases, the local density within the simulation volume can differ significantly, e.g. by a factor of 1 000. The number of pairwise force calculations scales quadratically with the density. Therefore, the computational costs for two subvolumes of equal size may differ by a factor of a million, resulting in many idle processes unless an efficient load balancing scheme is employed. Depending on the simulation scenario, it may be sufficient to apply a static load balancing scheme which is adjusted only once, or to rebalance the decomposition dynamically, e.g. every 100 to 1 000 time steps.
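Eqs. (2) and (3) form a kick-drift update with velocities living at half steps. A minimal one-dimensional sketch (illustrative only, not the ls1 mardyn implementation; `accel` plays the role of r̈_i):

```python
def leapfrog(x, v_half, accel, dt, nsteps):
    """Leapfrog integration per Eqs. (2)-(3): the velocity v_half is
    defined at t - dt/2 on entry and at t + (nsteps - 1/2) dt on exit."""
    for _ in range(nsteps):
        v_half = v_half + dt * accel(x)   # Eq. (2): kick
        x = x + dt * v_half               # Eq. (3): drift
    return x, v_half
```

For a free particle (zero acceleration) the scheme reproduces uniform motion exactly; for a unit harmonic oscillator one step from x = 1, v_half = 0 gives v_half = −δt and x = 1 − δt².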
Like in other parts of ls1 mardyn, an interface class is available for the domain decomposition, allowing for the generic implementation of different decomposition approaches and therefore facilitating the implementation of load-balancing strategies based on domain decomposition. Several strategies were implemented in ls1 mardyn and evaluated for nucleation processes. 73 The strategy based on trees turned out to be the most efficient one. It is available in the current version of ls1 mardyn as described in the remainder of the present section.

Load costs

The purpose of load balancing is to decompose and distribute the simulation volume such that all processes need the same computing time. Such a decomposition requires a method to estimate or measure the load that corresponds to a specific subvolume. The linked-cell algorithm, which is used to identify neighbor molecules, introduces a division of the simulation volume into cells. These cells are the basic volume units for which the load is determined. On the basis of the computational cost of each cell, a load balancing algorithm can group cells together such that p subvolumes of equal computational cost are created, where p is the number of processing units. In Fig. 2, a 2D example is given for a simulation volume divided into 8 × 8 cells. This volume is partitioned along cell boundaries into two subvolumes which are then assigned to different processes. The implementation in ls1 mardyn requires each subvolume to cover at least two cells in each spatial dimension. In a typical simulation, the largest part of the computational cost is caused by the force and distance calculations. If N_i and N_j denote the number of molecules in cells i and j, respectively, the number of distance calculations n_d(i) for cell i can be estimated by

    n_d(i) ≈ (N_i / 2) (N_i + Σ_{j ∈ neigh(i)} N_j).    (6)

The first term in Eq. (6) accounts for the distance calculations between molecules within cell i, the second one for those with molecules in the neighboring cells.

Tree-based decomposition

The distribution of cells to processes is in principle straightforward.
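The per-cell cost estimate of Eq. (6) is straightforward to evaluate; a sketch (cell ids and the dictionary of molecule counts are hypothetical):

```python
def distance_calc_cost(counts, i, neighbors):
    """Estimate n_d(i) from Eq. (6): half of N_i times the number of
    molecules in cell i plus those in its neighbor cells."""
    return 0.5 * counts[i] * (counts[i] + sum(counts[j] for j in neighbors))
```

Note the quadratic dependence on the local particle count: doubling all counts quadruples the estimated cost, which is exactly why equal-volume subdomains can differ in load by orders of magnitude.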
One way is to bring the cells into a linear order (e.g. row-wise), walk through the ordered list and sum up the load. Having reached 1/p of the total load, the cells may be grouped together into a subvolume and assigned to a process, ensuring that all processes carry a similar load. The problem with this naive approach is that it creates subvolumes with large surface-to-volume ratios. A homogeneous system with a cubic volume containing 100 × 100 × 100 cells, distributed to 100 processes, would for instance be decomposed into 100 subvolumes with the thickness of a single cell, so that all cells would be boundary cells. In such a case, the additional costs for boundary handling and communication are prohibitively high. To overcome this problem, a hierarchical decomposition scheme was implemented in ls1 mardyn. This decomposition is similar to k-d trees, 78 which are known to achieve a good performance in general simulation tasks 79 as well as in the special case of particle simulations. 80,81 The simulation volume is recursively bisected into subvolumes with similar load by planes which are perpendicular to alternating coordinate axes. 82 To determine the optimal division plane, the load distribution for every possible division plane is computed and the one resulting in the minimal load imbalance is selected. This procedure is repeated recursively until a subvolume has been assigned to each process. In the case of extremely large simulation volumes, however, initial decompositions are determined following a simplified procedure, until a sufficiently small subvolume size is reached. Thereby, the volume is decomposed into equally sized subvolumes, and the number of processes per subvolume is assigned according to the estimated load of the respective subvolume.

Performance

Targeting large-scale parallel runs, ls1 mardyn has been designed for both good single-core and parallel efficiency.
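The choice of the division plane in the tree-based decomposition above — scan all candidate planes and keep the one with minimal load imbalance, subject to the constraint of at least two cells per side — can be sketched in one dimension (illustrative only, not the ls1 mardyn code):

```python
def best_split(loads):
    """Return the index k that splits a 1D row of per-cell loads into
    loads[:k] and loads[k:] with minimal |left - right| imbalance, with
    at least two cells on each side; None if no split is admissible."""
    total = sum(loads)
    best, best_imbalance = None, float("inf")
    left = 0.0
    for k in range(1, len(loads)):       # candidate plane between k-1 and k
        left += loads[k - 1]
        if k < 2 or len(loads) - k < 2:  # each side needs >= 2 cells
            continue
        imbalance = abs(2.0 * left - total)
        if imbalance < best_imbalance:
            best, best_imbalance = k, imbalance
    return best
```

Applying this recursively along alternating axes yields the k-d-tree-like hierarchy described in the text.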
While the code was written in a portable way, which allows building and executing the program on every standard Linux or Unix system, we focus here on the HPC systems given in Table 2 for the performance analysis. In the following sections, we especially explain the influence of the compiler used to build ls1 mardyn on its performance, the overhead of the parallelization as well as its scalability.

Sequential performance

The compiler used to build the code has a large impact on its performance. Figure 3 shows results obtained with a serial version of ls1 mardyn employing different compilers on the SB and NH partitions of laki as well as on hermit. The test scenarios were a LJ vapor (at kT/ε = 0.7 and ρσ³ = 0.044) consisting of 40 000 molecules and ethylene oxide in a liquid state (at T = 285 K and ρ = 19.4 mol/l) with 65 536 molecules. As can be seen, the sequential program runs fastest on the Sandy Bridge based laki system when built with the GNU compiler. Unless noted otherwise, the GNU compiler was also used for all further studies discussed below.

Sequential to parallel overhead

For the scalability evaluation of ls1 mardyn, different target scenarios with a varying degree of complexity were considered, produced by the internal scenario generators, cf. Figure 5.

• Homogeneous liquid: Ethylene oxide at a density of ρ = 16.9 mol/l and a temperature of T = 375 K. The molecular model for ethylene oxide consists of three LJ sites and one point dipole. 69
• Droplet: Simulation scenario containing a LJTS nanodroplet (cut-off radius r_c = 2.5 σ) surrounded by a supersaturated vapor at a reduced temperature of kT/ε = 0.95.
• Planar interface: Simulation of a planar vapor-liquid interface of the LJTS fluid (cut-off radius r_c = 2.5 σ) at a reduced temperature of kT/ε = 0.95.

In the scenarios, the number of molecules was varied. They were simulated on the platforms given in Table 2 for 1 000 time steps and with disabled final I/O.
Parallelization is associated with additional complexity due to communication and synchronization between the different execution paths of the program. In comparison with sequential execution on a single processing unit, this introduces an overhead. To determine the magnitude of this overhead for ls1 mardyn, the planar interface scenario with N = 102 400 LJ sites was executed over 1 000 time steps on the hermit system, both with the sequential and the MPI parallel version of the code, but using only a single process. Execution of the sequential program took 530.9 s, while the MPI parallel version took 543.4 s. This indicates that the overhead due to imperfect concurrency amounts to only around 2%.

Scalability

Scaling studies were carried out with the homogeneous liquid scenario on the entire hermit system, using the standard domain decomposition method, i.e. all processes were assigned equal volumes. The results presented in Figure 6 show that ls1 mardyn scales favorably in the present case.

Trillion particle simulation

A version of ls1 mardyn was optimized for simulating single-site LJ particles on the SuperMUC system, 21 one of the largest x86 systems worldwide with 147 500 cores and a theoretical peak performance of more than 3 PFLOPS. It is based on a high-performance FDR-10 InfiniBand inter- To evaluate the performance with respect to strong scaling behavior, a scenario with N = 9.5 × 10⁸ particles was studied, which fits into the memory of two nodes, as 18 GB per node are needed. Thereby, a cut-off radius of r_c = 5 σ was employed. Figure 9 shows that a very good scaling was achieved for up to 32 768 cores using 65 536 threads. Built with the Intel compiler, the implementation delivered a sustained performance of 113 GFLOPS, corresponding to 8% single-precision peak performance at a parallel efficiency of 53% compared to 32 cores (64 threads).
In addition, a weak scaling analysis with N = 1.6 × 10⁷ particles per node was performed, where a peak performance of 12.9% or 183 TFLOPS was achieved at a parallel efficiency of 96% when scaling from 1 to 32 768 cores. As the kernel was implemented using AVX128, the same scenario was also executed on the Cray XE6 system hermit. As can be seen in Figure 9, the scalability on hermit is superior, particularly for strong scaling. The Gemini interconnect allows for higher bandwidth and lower latency for MPI communications than the FDR-10 InfiniBand interconnect of SuperMUC. Furthermore, a 3D torus network is more favorable for the communication pattern of ls1 mardyn than the tree topology of SuperMUC, where the nodes belonging to each island (8 192 cores) communicate via a fully connected network, while for inter-island communication four nodes have to share a single uplink. This can also be seen in Figure 9, where the scalability noticeably drops when going from 8 192 to 16 384 processes. As described by Eckhardt et al., 21 a larger weak scaling benchmark on the whole SuperMUC was performed with that code version. Simulating 4.125 × 10¹² molecules, to our knowledge the largest MD simulation to date, with a cut-off radius of r_c = 3.5 σ, one time step took roughly 40 s. For this scenario, a speedup of 133 183 compared to a single core with an absolute performance of 591.2 TFLOPS was achieved, which corresponds to 9.4% peak performance efficiency.

Conclusions

The massively parallel MD simulation code ls1 mardyn was presented. The ls1 mardyn program is designed to simulate homogeneous and heterogeneous fluid systems containing very large numbers of molecules. Fluid molecules are modeled as rigid rotators consisting of multiple interaction sites, enabling simulations of a wide variety of scenarios from noble gases to complex fluid systems under confinement. The code, which presently holds the world record for the largest MD simulation, was evaluated on large-scale HPC architectures.
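The scaling metrics quoted throughout this section reduce to two simple formulas, illustrated below with the reported timings. This is a generic sketch of the definitions, not code from ls1 mardyn.

```python
# Speedup is relative to a reference run (here: the smallest core count
# used), and parallel efficiency is the speedup divided by the factor
# by which the core count was increased.
def speedup(t_ref, t_p):
    return t_ref / t_p

def parallel_efficiency(t_ref, cores_ref, t_p, cores_p):
    return speedup(t_ref, t_p) / (cores_p / cores_ref)

# The ~2% parallelization overhead reported for the planar interface
# scenario follows directly from the two measured run times:
overhead = 543.4 / 530.9 - 1.0
```

For weak scaling, where the problem size grows with the core count, the efficiency is simply t_ref / t_p, since the ideal run time is constant.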
It was found to scale almost perfectly on over 140 000 cores for homogeneous scenarios. The dynamic load balancing capability of ls1 mardyn was tested with different scenarios, delivering a significantly improved scalability for challenging, highly heterogeneous systems. It can be concluded that ls1 mardyn, which is made publicly available as free software, 1 represents the state of the art in MD simulation. It can be recommended for large-scale applications, and particularly for processes at fluid interfaces, where highly heterogeneous and time-dependent particle distributions may occur. Due to the modularity of its code base, future work can adjust ls1 mardyn to newly emerging HPC architectures and further extend the range of available molecular modeling approaches and simulation methods. In this way, ls1 mardyn aims at driving the progress of molecular simulation in general, paving the way to the micrometer length scale and the microsecond time scale for computational molecular engineering.

Any system of units can be used in ls1 mardyn as long as it is algebraically consistent and includes the Boltzmann constant k_B = 1 as well as the Coulomb constant k_C = 1/(4πε₀) = 1 among its basic units. Thereby, expressions for quantities related to temperature and the electrostatic interactions are simplified. The units of size, energy and charge are related (by Coulomb's law and the Coulomb constant unit) and cannot be specified independently of each other. A temperature is converted to energy units by using k_B = 1, and vice versa. In this way, all other units are determined.

However, smaller cells also cause an additional effort, since 125 instead of 27 cells have to be traversed. This is only beneficial for regions with high density, where the cost of cell traversal is small compared to the cost of distance calculation.
Moreover, many applications of molecular dynamics, such as processes at interfaces, are characterized by a heterogeneous distribution of the molecules and thus by a varying density throughout the domain. To account for this, adaptive cell sizes depending on the local density 73 are (optionally) used in ls1 mardyn, cf. Figure 1. Due to periodic boundary conditions, molecules leaving the simulation volume on one side re-enter it on the opposite side, and molecules near the volume boundary interact with those on the opposite side of the volume. After the calculation of all pairwise interactions, the resulting force and torque acting on each molecule is obtained by summation. Newton's equations of motion are solved numerically for all molecules to obtain the configuration in the next time step. Most common methods to integrate these equations are single-step methods, where a new position at the time t + δt is calculated from the position, velocity and acceleration at the time t. This is repeated for a specified number of time steps n up to the time t + n δt. Usually, algorithms based on the (Størmer-)Verlet method 74,75 are used. Instead, ls1 mardyn employs the leapfrog method, 76 which is algebraically equivalent to the Verlet method but more accurate numerically.

Figure 1: Adaptive cell sizes for an inhomogeneous molecule distribution. Cells that contain significantly more molecules than others are divided into smaller subcells.

According to Newton's third law (actio = reactio), two interacting molecules experience the same force (in opposite directions) due to their mutual interaction, so that a suitable enumeration scheme can be employed to reduce the amount of cell pairs that are taken into account. Following such a scheme, it is sufficient to compute the force exerted by the highlighted molecule on molecules from the highlighted cells. 73
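The pair-enumeration saving from Newton's third law can be illustrated with a minimal force loop. This is a generic sketch (1D positions, a hypothetical `pair_force` callback), not the ls1 mardyn cell-pair traversal.

```python
# Exploiting Newton's third law in the pair loop: each (i, j) pair is
# visited exactly once; the computed force contributes to both molecules
# with opposite sign, halving the number of force evaluations.
def accumulate_forces(positions, pair_force):
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):   # j > i: every pair exactly once
            f = pair_force(positions[i], positions[j])
            forces[i] += f          # actio
            forces[j] -= f          # reactio
    return forces

f = accumulate_forces([0.0, 1.0], lambda a, b: a - b)
# -> [-1.0, 1.0]; the total force sums to zero, as it must
```

In the cell-based version, the same idea is applied at the level of cell pairs: only half of the neighbor cells need to be traversed for each cell.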
Positions r_i and velocities ṙ_i are calculated by

ṙ_i(t + δt/2) = ṙ_i(t − δt/2) + δt r̈_i(t),
r_i(t + δt) = r_i(t) + δt ṙ_i(t + δt/2).

Figure 2: Left: The simulation volume (within the bold line) is divided into cells by the linked-cell algorithm (thin lines), where the cell edge length is the cut-off radius r_c. The simulation volume is divided into two subvolumes along cell boundaries (dotted line). Right: Halo cells (light shaded cells) are introduced, storing copied molecule data from adjacent boundary cells (dark shaded cells).

The first term in Eq. (6), i.e. N_i²/2, corresponds to the distance calculations within cell i. The second term represents the calculation of distances between molecules in cell i and an adjacent cell j. While Eq. (6) can be evaluated with little effort, it is far more demanding to predict the number of force calculations. Furthermore, communication and computation costs at the boundary between adjacent subdomains allocated to different processes can be significant. They depend on many factors, in particular on the molecule density at the boundary. Therefore, even if the load on all compute nodes is uniform and remains constant, the location of the subvolume boundaries has an influence on the overall performance. For a discussion of detailed models for the respective computational costs, the reader is referred to Buchholz. 73 In the present version of ls1 mardyn, the computational costs are estimated on the basis of the number of necessary distance calculations per cell according to Eq. (6).

Figure 3: Sequential execution times of ls1 mardyn on various platforms with different compilers. Scenarios: LJ vapor with N = 40 000, ρσ³ = 0.044, and kT/ε = 0.7 (left) as well as liquid ethylene oxide with N = 65 536, ρ = 19.4 mol/l, and T = 285 K (right).

The computational complexity of the linked-cell algorithm and domain decomposition scheme used in ls1 mardyn is O(N). To evaluate the efficiency of the implementation, runs with different numbers of molecules were performed.
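The leapfrog scheme can be sketched for a single degree of freedom. This is an illustrative toy integrator (1D harmonic oscillator, unit mass), not ls1 mardyn code; velocities live at half time steps, positions at full time steps.

```python
# Leapfrog sketch: advance position with the half-step velocity, then
# advance the velocity using the acceleration at the new position.
def leapfrog(r, v_half, accel, dt, n_steps):
    for _ in range(n_steps):
        r = r + dt * v_half        # r(t + dt) from v(t + dt/2)
        a = accel(r)               # force evaluation at the new position
        v_half = v_half + dt * a   # v(t + 3dt/2) from a(t + dt)
    return r, v_half

# Harmonic oscillator, a(r) = -r: the energy r^2 + v^2 stays bounded
# over many steps, a hallmark of this class of symplectic integrators.
r, v = leapfrog(1.0, 0.0, lambda r: -r, 0.01, 1000)
```

The staggering of positions and velocities is what makes the method time-reversible and algebraically equivalent to the Verlet scheme.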
The results in Figure 4 show that in the present case, the implementation scales almost perfectly with O(N), as the execution time per molecule is approximately constant.

Figure 4: Sequential execution time of ls1 mardyn per molecule, for simulations of a homogeneous LJ fluid at kT/ε = 0.95, ρσ³ = 0.6223 with different system sizes on laki (SB).

Figure 5: (a) Homogeneous liquid (N = 2 048), (b) Droplet (N = 46 585), (c) Planar interface (N = 102 400). Scenarios used during the performance evaluation of ls1 mardyn.

Figure 6: Scaling of ls1 mardyn on hermit with the fluid example. The starting points of the plots are placed on the diagonal, i.e. normalized to a parallel efficiency of 100%, neglecting the deviation from perfect scaling for the respective reference case with the smallest number of processes.

As discussed above, load balancing is of major importance for inhomogeneous molecule distributions. Strong scaling experiments were therefore carried out for the planar interface and droplet scenarios. The droplet was positioned slightly off the center of the simulation volume to avoid symmetry effects. The scenarios were run for 1 000 time steps, and the decomposition was updated every 100 time steps. The results are presented in Figure 7 and show a clear advantage of the dynamic tree-based decomposition, making the simulation up to four times as fast as the static decomposition into subdomains with equal volume.

Figure 7: Accumulated execution time of ls1 mardyn for a strong scaling experiment on hermit using the planar interface scenario with N = 5 497 000 (left) and the droplet scenario with N = 3 698 000 (right).
A straightforward static domain decomposition ( ), which assigns subdomains with equal volumes to all processing units, is compared with the dynamic k-d tree based decomposition (•). In addition to comparing the run times, the effectiveness of the dynamic load balancing implementation in ls1 mardyn is supported by traces revealing the load distribution between the processes. Figure 8 shows such traces, generated with vampirtrace, for 15 processes of a droplet scenario simulation on the hermit system. For the trivial domain decomposition, 12 out of 15 processes are waiting in MPI routines most of the time, while the remaining three processes have to carry the bulk of the actual computation. In contrast, the k-d decomposition exhibits a more balanced distribution of computation and communication.

Figure 8: Traces for the droplet scenario on hermit, generated with vampirtrace. The program state over two time steps is shown for 15 parallel processes. Computation is indicated by blue colour, communication by red colour. Vertical lines indicate message passing between processes.

connect by Mellanox and composed of 18 so-called islands, each of which consists of 512 nodes with 16 Intel Sandy Bridge EP cores at 2.7 GHz clock speed (turbo mode disabled) sharing 32 GB of main memory. Main features of the optimized code version include a lightweight shared-memory parallelization and hand-coded intrinsics in single precision for the LJ interactions within the kernel. The kernels were implemented in AVX128 (rather than AVX256), mainly for two reasons: First, the architecture of the Intel Sandy Bridge processor is unbalanced with respect to load and store bandwidth, which may result in equal performance for both variants. Second, AVX128 code usually shows better performance on the AMD Bulldozer architecture, where two processor cores share one 256-bit floating-point unit.
Figure 9: Weak scaling (•) and strong scaling (•) of ls1 mardyn on hermit (left) and SuperMUC (right), including the speedup (top) and the parallel efficiency (bottom), i.e. the speedup reduced by the number of processes. Almost ideal scaling was achieved in case of weak scaling, whereas a parallel efficiency of 53% was obtained in the strong scaling tests on SuperMUC and 82.5% on hermit, compared to two nodes.

Table 1: A consistent set of atomic units (used by the scenario generators).

The linked-cell algorithm divides the simulation volume into cells, which have an edge length equal to the cut-off radius r_c. This ensures that all interaction partners for any given molecule are situated either within the cell of the molecule itself or the 26 surrounding cells. Nonetheless, these cells still contain numerous molecules which are beyond the cut-off radius. The volume covered by 27 cells is 27 r_c³, whereas the relevant volume containing the interaction partners is a sphere with a radius r_c, corresponding to 4πr_c³/3 ≈ 4.2 r_c³. Thus, in case of a homogeneous configuration, only 16% of all pairs for which the distance is computed are actually considered for intermolecular interactions. For fluids with computationally inexpensive pair potentials, e.g. molecules modeled by a single LJ site, the distance evaluation requires approximately the same computational effort as the force calculation. Reducing the volume which is examined for interaction partners can therefore significantly reduce the overall runtime. This can be achieved by using smaller cells with an edge length of e.g. r_c/2, which reduces the considered volume from 27 r_c³ to 15.6 r_c³, so that for a homogeneous configuration, 27% of the computed distances are smaller than the cut-off radius.
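The hit-rate figures above follow from simple geometry and can be reproduced in a few lines. This is a back-of-the-envelope sketch assuming a homogeneous molecule distribution, as in the text.

```python
# With cells of edge length r_c / k, candidate pairs come from a block of
# (2k + 1)^3 cells; the interaction sphere has volume (4/3) * pi * r_c^3.
import math

def hit_fraction(cells_per_rc):
    """Fraction of computed distances below r_c for a homogeneous system,
    when each cell has edge length r_c / cells_per_rc."""
    block_edge = 2 * cells_per_rc + 1                  # cells per axis
    block_volume = (block_edge / cells_per_rc) ** 3    # in units of r_c^3
    sphere = 4.0 * math.pi / 3.0
    return sphere / block_volume

round(hit_fraction(1), 2)   # 27 r_c^3 block   -> 0.16
round(hit_fraction(2), 2)   # 15.6 r_c^3 block -> 0.27
```

Halving the cell edge thus raises the useful fraction of distance calculations from about 16% to 27%, at the price of traversing 125 instead of 27 cells.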
Table 2: HPC platforms used for performance measurements.

System, location        Processor type                                        Interconnect   Cores
hermit, Stuttgart       AMD Opteron 6276 (Interlagos, 16 cores @2.3 GHz)      Cray Gemini    113 664
laki (NH), Stuttgart    Intel Xeon X5560 (Gainestown, 4 cores @2.8 GHz)       InfiniBand     5 600
laki (SB), Stuttgart    Intel Xeon E5-2670 (Sandy Bridge, 8 cores @2.6 GHz)   InfiniBand     3 072
SuperMUC, Garching      Intel Xeon E5-2680 (Sandy Bridge, 8 cores @2.7 GHz)   InfiniBand     147 456

For these runs, the Cray XE6 system hermit at HLRS was used, however, without shared-memory parallelization and with the code built with the GNU compiler. A noteworthy feature of the Cray XE6 machine is its 3D torus network with Gemini interconnect, which directly plugs in to the HyperTransport 3 host interface for fast MPI communication. On hermit, the code achieved a parallel efficiency of 82.5% and 69.7 GFLOPS in case of strong scaling and 91.5% and 76.8 TFLOPS or 12.8% peak performance for weak scaling, respectively, on 32 768 cores in comparison to 64 cores, i.e. two nodes.

Acknowledgement

The authors would like to thank A.

References

1. Large systems 1: molecular dynamics; http://www.ls1-mardyn.de/, accessed August 19, 2014.
2. Allen, M. P.; Tildesley, D. J. Computer Simulation of Liquids; Clarendon: Oxford, 1987.
3. Frenkel, D.; Smit, B. Understanding Molecular Simulation, 2nd ed.; Academic Press: San Diego, 2002.
4. Deublein, S.; Eckl, B.; Stoll, J.; Lishchuk, S. V.; Guevara Carrión, G.; Glass, C. W.; Merker, T.; Bernreuther, M.; Hasse, H.; Vrabec, J. Comput. Phys. Comm. 2011, 182, 2350-2367.
5. Möller, D.; Fischer, J. Mol. Phys. 1990, 69, 463-473.
6. Vrabec, J.; Hasse, H.
Mol. Phys. 2002, 100, 3375-3383.
7. Rusanov, A. I.; Brodskaya, E. N. J. Colloid Interf. Sci. 1977, 62, 542-555.
8. Rao, M.; Berne, B. J.; Kalos, M. H. J. Chem. Phys. 1978, 68, 1325-1336.
9. Angélil, R.; Diemand, J.; Tanaka, K. K.; Tanaka, H. J. Chem. Phys. 2014, 140, 074303.
10. Chialvo, A. A.; Debenedetti, P. G. Phys. Rev. A 1991, 43, 4289-4295.
11. Sokołowski, S.; Fischer, J. Phys. Rev. A 1990, 41, 6866-6870.
12. Horsch, M.; Heitzig, M.; Dan, C.; Harting, J.; Hasse, H.; Vrabec, J. Langmuir 2010, 26, 10913-10917.
13. Rösch, F.; Trebin, H.-R. Eur. Phys. Lett. 2009, 87, 66004.
14. Thompson, P. A.; Troian, S. M. Nature 1997, 389, 360-362.
15. Frentrup, H.; Avendaño, C.; Horsch, M.; Salih, A.; Müller, E. A. Mol. Sim. 2012, 38, 540-553.
16. Müller-Plathe, F. ChemPhysChem 2002, 3, 754-769.
17. Lee, E. H.; Hsin, J.; Sotomayor, M.; Comellas, G.; Schulten, K. Structure 2009, 17, 1295-1306.
18. Lindorff-Larsen, K.; Piana, S.; Dror, R. O.; Shaw, D. E. Science 2011, 334, 517-520.
19. Engel, M.; Trebin, H.-R. Phys. Rev. Lett. 2007, 98, 225505.
20. Laio, A.; Parrinello, M. Proc. Nat. Acad. Sci. 2002, 99, 12562-12566.
21. Eckhardt, W.; Heinecke, A.; Bader, R.; Brehm, M.; Hammer, N.; Huber, H.; Kleinhenz, H.-G.; Vrabec, J.; Hasse, H.; Horsch, M.; Bernreuther, M.; Glass, C. W.; Niethammer, C.; Bode, A.; Bungartz, J. In Supercomputing - Proceedings of the XXVIII. International Supercomputing Conference (ISC); Kunkel, J. M., Ludwig, T., Meuer, H. W., Eds.; Lecture Notes in Computer Science 7905; Springer: Heidelberg, 2013; pp 1-12.
22. Guevara Carrión, G.; Vrabec, J.; Hasse, H. J. Chem. Phys. 2011, 134, 074508.
23. Horsch, M.; Vrabec, J.; Bernreuther, M.; Grottel, S.; Reina, G.; Wix, A.; Schaber, K.; Hasse, H. J. Chem. Phys. 2008, 128, 164510.
24. Horsch, M.; Vrabec, J. J. Chem. Phys. 2009, 131, 184104.
25. Müller, E. A. Curr. Opin. Chem. Eng. 2013, 2, 223-228.
26. Case, D. A.; Cheatham, T. E., III; Darden, T.; Gohlke, H.; Luo, R.; Merz, K. M., Jr.; Onufriev, A.; Simmerling, C.; Wang, B.; Woods, R. J. Comput. Chem. 2005, 26, 1668-1688.
27. Salomon Ferrer, R.; Götz, A. W.; Poole, D.; Le Grand, S.; Walker, R. C. J. Chem. Theory Comput. 2013, 9, 3878-3888.
28. Berendsen, H. J. C.; van der Spoel, D.; van Drunen, R. Comput. Phys. Comm. 1995, 91, 43-56.
29. Pronk, S.; Szilárd, P.; Schulz, R.; Larsson, P.; Bjelkmar, P.; Apostolov, R.; Shirts, M. R.; Smith, J. C.; Kasson, P. M.; van der Spoel, D.; Hess, B.; Lindahl, E. Bioinformatics 2013, 29, 845-854.
30. Phillips, J. C.; Braun, R.; Wang, W.; Gumbart, J.; Tajkhorshid, E.; Villa, E.; Chipot, C.; Skeel, R. D.; Kalé, L.; Schulten, K. J. Comput. Chem. 2005, 26, 1781-1802.
31. Ren, P.; Wu, C.; Ponder, J. W. J. Chem. Theory Comput. 2011, 7, 3143-3461.
32. Brooks, B. R.; Bruccoleri, R. E.; Olafson, B. D.; States, D. J.; Swaminathan, S.; Karplus, M. J. Comput. Chem. 1983, 4, 187-217.
33. Brooks, B. R.; Brooks, C. L., III; Mackerell, A. D.; Nilsson, L.; Petrella, R. J.; Roux, B.; Won, Y.; Archontis, G.; Bartels, C.; Boresch, S.; Caflisch, A.; Caves, L.; Cui, Q.; Dinner, A. R.; Feig, M.; Fischer, S.; Gao, J.; Hodoscek, M.; Im, W.; Kuczera, K.; Lazaridis, T.; Ma, J.; Ovchinnikov, V.; Paci, E.; Pastor, R. W.; Post, C. B.; Pu, J. Z.; Schaefer, M.; Tidor, B.; Venable, R. M.; Woodcock, H. L.; Wu, X.; Yang, W.; York, D. M.; Karplus, M. J. Comput. Chem. 2009, 30, 1545-1615.
34. Plimpton, S. J. Comput. Phys. 1995, 117, 1-19.
35. Brown, W. M.; Wang, P.; Plimpton, S. J.; Tharrington, A. N. Comput. Phys. Comm. 2011, 182, 898-911.
36. Plimpton, S. J.; Thompson, A. P. MRS Bulletin 2012, 37, 513-521.
37. Diemand, J.; Angélil, R.; Tanaka, K. K.; Tanaka, H. J. Chem. Phys. 2013, 139, 074309.
38. Todorov, I. T.; Smith, W.; Trachenko, K.; Dove, M. T. J. Materials Chem. 2006, 16, 1911-1918.
39. Limbach, H.-J.; Arnold, A.; Mann, B. A.; Holm, C. Comput. Phys. Comm. 2006, 174, 704-727.
40. Stadler, J.; Mikulla, R.; Trebin, H.-R. Int. J. Mod. Phys. C 1997, 8, 1131-1140.
41. Roth, J.; Gähler, F.; Trebin, H.-R. Int. J. Mod. Phys. C 2000, 11, 317-322.
42. Saager, B.; Fischer, J.; Neumann, M. Mol. Sim. 1991, 6, 27-49.
43. Quentrec, R.; Brot, C. J. Comput. Phys. 1973, 13, 430-432.
44. Hockney, R. W.; Eastwood, J. W. Computer Simulation Using Particles; McGraw-Hill: New York, 1981.
45. Schamberger, S.; Wierum, J.-M. In Proceedings of the VII. International Conference on Parallel Computing Technologies (PaCT); Malyshkin, V., Ed.; Lecture Notes in Computer Science 2763; Springer: Heidelberg, 2003; pp 165-179.
46. Bernreuther, M.; Vrabec, J. In High Performance Computing on Vector Systems; Resch, M., Bönisch, T., Benkert, K., Bez, W., Furui, T., Seo, Y., Eds.; Springer: Heidelberg, 2006; pp 187-195.
47. Bernreuther, M.; Buchholz, M.; Bungartz, H.-J. In Parallel Computing: Architectures, Algorithms and Applications - Proceedings of the XII. International Conference on Parallel Computing (ParCo); Joubert, G., Bischof, C., Peters, F., Lippert, T., Bücker, M., Gibbon, P., Mohr, B., Eds.; Advances in Parallel Computing 15; IOS: Amsterdam, 2008; pp 53-60.
48. Horsch, M.; Vrabec, J.; Hasse, H. Phys. Rev. E 2008, 78, 011603.
49. Vrabec, J.; Horsch, M.; Hasse, H. J. Heat Transfer (ASME) 2009, 131, 043202.
50. Horsch, M.; Lin, Z.; Windmann, T.; Hasse, H.; Vrabec, J. Atmospher. Res. 2011, 101, 519-526.
51. Grottel, S.; Reina, G.; Vrabec, J.; Ertl, T. IEEE Transact. Vis. Comp. Graph. 2007, 13, 1624-1631.
52. Horsch, M.; Miroshnichenko, S.; Vrabec, J. J. Physical Studies (L'viv) 2009, 13, 4004.
53. Horsch, M.; Hasse, H.; Shchekin, A. K.; Agarwal, A.; Eckelsbach, S.; Vrabec, J.; Müller, E. A.; Jackson, G. Phys. Rev. E 2012, 85, 031605.
54. Werth, S.; Lishchuk, S. V.; Horsch, M.; Hasse, H. Physica A 2013, 392, 2359-2367.
55. Horsch, M.; Hasse, H. Chem. Eng. Sci. 2014, 107, 235-244.
56. Werth, S.; Rutkai, G.; Vrabec, J.; Horsch, M.; Hasse, H. Mol. Phys. 2014, in press (DOI: 10.1080/00268976.2013.861086).
57. Horsch, M.; Vrabec, J.; Bernreuther, M.; Hasse, H. In Proceedings of the 6th International Symposium on Turbulence, Heat and Mass Transfer; Hanjalić, K., Ed.; Begell House: New York, 2009; pp 89-92.
58. Horsch, M.; Niethammer, C.; Vrabec, J.; Hasse, H. Informat. Technol. 2013, 55, 97-101.
59. Grottel, S.; Reina, G.; Ertl, T. In Proceedings of the IEEE Pacific Visualization Symposium; Eades, P., Ertl, T., Shen, H.-W., Eds.; IEEE Computer Society, 2009; pp 65-72.
60. Grottel, S.; Reina, G.; Dachsbacher, C.; Ertl, T. Comp. Graph. Forum 2010, 29, 953-962.
61. Lorentz, H. A. Ann. Phys. (Leipzig) 1881, 12, 127-136, 660-661.
62. Schnabel, T.; Vrabec, J.; Hasse, H. J. Mol. Liq. 2007, 135, 170-178.
63. Berthelot, D. Compt. Rend. Acad. Sci. 1898, 126, 1703-1706, 1857-1858.
64. Vrabec, J.; Huang, Y.-L.; Hasse, H. Fluid Phase Equilib. 2009, 279, 120-135.
65. Stoll, J.; Vrabec, J.; Hasse, H. AIChE J. 2003, 49, 2187-2198.
66. Vrabec, J.; Stoll, J.; Hasse, H. Mol. Sim. 2005, 31, 215-221.
67. Stone, A. J. Science 2008, 321, 787-789.
68. Gray, C. G.; Gubbins, K. E. Theory of Molecular Fluids;
69. Eckl, B.; Vrabec, J.; Hasse, H. Fluid Phase Equilib. 2008, 274, 16-26.
70. Tersoff, J. Phys. Rev. Lett. 1988, 61, 2879-2882.
71. Tersoff, J. Phys. Rev. B 1989, 39, 5566-5568.
72. Ghiringhelli, L. M.; Valeriani, C.; Los, J. H.; Meijer, E. J.; Fasolino, A.; Frenkel, D. Mol. Phys. 2008, 106, 2011-2038.
73. Buchholz, M. Framework zur Parallelisierung von Molekulardynamiksimulationen in verfahrenstechnischen Anwendungen. Dissertation, Technische Universität München, 2010.
74. Störmer, C. Radium (Paris) 1912, 9, 395-399.
75. Verlet, L. Phys. Rev. 1967, 159, 98-103.
76. Hockney, R. W. Methods Comput. Phys. 1970, 9, 136-211.
77. Plimpton, S.; Hendrickson, B. In Parallel Computing in Computational Chemistry; Mattson, T. G., Ed.; ACS: Washington, D.C., 1995; pp 114-132.
78. Bentley, J. L. Comm. ACM 1975, 18, 509-517.
79. Simon, H. D.; Teng, S.-H. SIAM J. Sci. Comput. 1995, 18, 1436-1445.
International Conference on Parallel and Distributed Processing (IPPS/SPDP). IEEEBernard, P.-E.; Gautier, T.; Trystram, D. Parallel Processing -Proceedings of the XIII. Inter- national Conference on Parallel and Distributed Processing (IPPS/SPDP); IEEE: Washing- ton, D.C., 1999; pp 638-644. F Fleissner, P Eberhard, Parallel Computing: Architectures, Algorithms and Applications -Proceedings of the XII. International Conference on Parallel Computing. ParCoFleissner, F.; Eberhard, P. In Parallel Computing: Architectures, Algorithms and Applica- tions -Proceedings of the XII. International Conference on Parallel Computing (ParCo); . G Joubert, C Bischof, F Peters, T Lippert, M Bücker, P Gibbon, B Mohr, Parallel Computing. 15IOS: AmsterdamJoubert, G., Bischof, C., Peters, F., Lippert, T., Bücker, M., Gibbon, P., Mohr, B., Eds.; Ad- vances in Parallel Computing 15; IOS: Amsterdam, 2008; pp 37-44. . M J Berger, S H Bokhari, Transact, 36Berger, M. J.; Bokhari, S. H. IEEE Transact. Comput. 1987, C-36, 570-580.
[]
arXiv:physics/0411104v2 [physics.chem-ph] 23 Feb 2005
DOI: 10.1140/epjd/e2005-00079-7

Importance of electronic self-consistency in the TDDFT based treatment of nonadiabatic molecular dynamics

T. A. Niehaus (1), D. Heringer (1), B. Torralva (2), Th. Frauenheim (1)

(1) Dept. of Theoretical Physics, University of Paderborn, D-33098 Paderborn, Germany
(2) Chemistry and Materials Science, Lawrence Livermore National Laboratory, Livermore, CA 94550, USA

(Dated: August 14, 2018)

Abstract. A mixed quantum-classical approach to simulate the coupled dynamics of electrons and nuclei in nanoscale molecular systems is presented. The method relies on a second order expansion of the Lagrangian in time-dependent density functional theory (TDDFT) around a suitable reference density. We show that the inclusion of the second order term renders the method a self-consistent scheme and improves the calculated optical spectra of molecules by a proper treatment of the coupled response. In the application to ion-fullerene collisions, the inclusion of self-consistency is found to be crucial for a correct description of the charge transfer between projectile and target. For a model of the photoreceptor in retinal proteins, nonadiabatic molecular dynamics simulations are performed and reveal problems of TDDFT in the prediction of intra-molecular charge transfer excitations.

I. INTRODUCTION

Beginning with the work of Zangwill and Soven [1], the generalization of density functional theory to time dependent phenomena (TDDFT) has become an important tool in the description of laser-matter interaction.
The possible applications are diverse and range from the calculation of spectra (linear optical [2,3,4], circular dichroism [5,6], resonant Raman [7]) to the evaluation of properties (polarization [8,9], hyperpolarization [10]) up to studies of high harmonic generation [11,12,13] and photochemical reactions [14]. The formal justification of TDDFT was laid by Runge and Gross [15], who showed that the exact many body electron density can be obtained from single-particle mean field equations. The solution of these time dependent Kohn-Sham (TDKS) equations can be obtained either perturbatively in the small amplitude limit [16] or by direct numerical integration in the time domain [17]. Both approaches have their inherent merits and disadvantages.

In the linear response regime, for example, the problem can be recast in an eigenvalue equation in the particle-hole representation. This allows for an interpretation of optical spectra in terms of contributing single-particle transitions and also for a symmetry assignment of the states. Moreover, transitions with vanishing oscillator strength can be located, like dark singlet or generally triplet states [61]. One of the drawbacks of this approach is the rather poor numerical performance with a scaling of N^6, where N is the number of electrons. It should be noted, however, that the CPU time as well as the memory demand can be significantly reduced when iterative procedures like the Davidson algorithm are employed.

In terms of scaling behavior, the numerical integration of the TDKS equations is much more favorable. Here, only the set of occupied orbitals needs to be treated. Because the propagation involves only matrix-vector products, linear scaling can be achieved for large systems [19]. Moreover, since this approach is not restricted to small intensities, non-linear effects like harmonic generation or multiphoton processes can be addressed.
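As a schematic illustration of such a particle-hole eigenvalue problem (our own toy with invented numbers, not data from the paper; the specific Ω-matrix form the paper uses appears later as Eq. (11) in Sec. III A), the following sketch shows how a coupling matrix shifts the excitation energies away from the bare orbital-energy differences of the uncoupled response:

```python
import numpy as np

def excitation_energies(omega_ph, K):
    """Solve the particle-hole eigenvalue problem
    [w_ph^2 delta + 2 sqrt(w) K sqrt(w)] F = w_I^2 F
    for a symmetric coupling matrix K; returns w_I in ascending order."""
    sq = np.sqrt(omega_ph)
    Omega = np.diag(omega_ph**2) + 2.0 * np.outer(sq, sq) * K
    w2 = np.linalg.eigvalsh(Omega)      # symmetric matrix -> real, sorted
    return np.sqrt(w2)

# two hypothetical occupied->unoccupied transitions (arbitrary energy units)
omega_ph = np.array([4.21, 6.00])
K = np.array([[1.5, 0.3],
              [0.3, 1.0]])             # invented coupling-matrix entries

bare = excitation_energies(omega_ph, np.zeros((2, 2)))   # uncoupled response
coupled = excitation_energies(omega_ph, K)
```

With K = 0 the excitations coincide with the orbital energy differences; a positive coupling pushes the lowest excitation upward, the blueshift effect discussed for butadiene below. The matrix dimension grows as (occupied x unoccupied) pairs, which is the origin of the steep scaling mentioned above.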
Another advantage of working in the real time domain is the possibility to study systematically the effect of different pulse shapes of the laser field on observables such as ionization [20]. With today's femtosecond laser sources, this is currently an active field of experimental research [21].

Nearly all first principles applications of the real time approach have been limited to systems with fixed nuclei. Clearly, it would be highly desirable to study the motion of the coupled system of electrons and nuclei, which would allow one to address problems like laser induced vibrational excitation or photochemical reactions. Since the time step of such molecular dynamics simulations is set to attoseconds by the ultrafast electronic motion, only small systems with a few degrees of freedom can be treated in an ab initio framework [13,20]. Consequently, approximate TDDFT methods are quite successful in this domain of application. Different groups contributed to this field and used their developments in a variety of different studies [14,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,39].

In all these approximate schemes only the valence electrons are treated explicitly and the TDKS orbitals are expanded in a limited (usually minimal) basis of atomic orbitals. The Lagrangian, which is a functional of the time dependent density, is then expanded around a static reference density up to a certain order. In zeroth order the Hamiltonian depends only on the reference density, which permits the calculation of the necessary matrix elements once and for all. In this respect, the methods are similar to tight-binding approaches, although no fitting to experimental data is performed. The purpose of this work is to analyze the implications of extending the mentioned expansion, since all studies so far were restricted to zeroth order. After a more detailed description of the problem in Sec. II, we test the extension in the determination of optical spectra in Sec.
III A as well as for nonadiabatic molecular dynamics in Sec. III B. Finally, we perform an investigation of the photochemical reaction of a retinal analogue, a chromophore which exhibits an ultrafast radiationless deactivation in nature. These applications in quite different areas of molecular physics are intended to investigate the transferability of the method and also to illustrate the possibilities offered by an approximate solution of the TDDFT equations.

II. METHOD

In order to study the dynamics of a coupled system of electrons and nuclei, the equations of motion (EOM) need to be determined. While the electronic EOM in the framework of DFT is given by the well known time dependent Kohn-Sham equations, the nuclear EOM or force equation is not a priori evident. It can be derived either by exploiting the fact that the total energy is a conserved quantity, or by applying the Lagrange formalism. We follow the latter approach here and define the following Lagrangian, which depends on the TDKS states \Psi_i(\mathbf{r}\,t) and the nuclear positions R_A:

    L = \sum_A \frac{1}{2} M_A \dot{R}_A^2 - \sum_i^{occ} \langle \Psi_i(\mathbf{r}\,t) |\, H[\rho](\mathbf{r}\,t) - i\hbar \frac{\partial}{\partial t} \,| \Psi_i(\mathbf{r}\,t) \rangle - E_{DC} - E_{ii},    (1)

with \rho(\mathbf{r}\,t) = \sum_i |\Psi_i(\mathbf{r}\,t)|^2. Here the first term is the classical kinetic energy of the ions, while the remaining terms in Eq. (1) can be obtained from the TDDFT action functional under the assumption that the exchange-correlation (xc) contributions are local in time [44]. In this widely used adiabatic local density approximation, standard ground state functionals can also be used in the time dependent context simply by evaluation at the time dependent density. Thus, the Hamiltonian H[\rho](\mathbf{r}\,t) in Eq. (1) takes the common DFT form. Furthermore, E_{DC} represents the double counting terms

    E_{DC} = -\frac{1}{2} \int\!\!\int' \frac{\rho(\mathbf{r}\,t)\,\rho(\mathbf{r}'\,t)}{|\mathbf{r} - \mathbf{r}'|} + E_{xc}[\rho] - \int v_{xc}[\rho]\,\rho(\mathbf{r}\,t),    (2)

and E_{ii} the ion-ion repulsion (here and in the following, \int d\mathbf{r}' is abbreviated as \int' and \int d\mathbf{r} as \int).
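The route from Eq. (1) to the electronic EOM is only stated in words above. As a consistency sketch of our own (not spelled out in the text), varying L with respect to \Psi_i^* shows how the double counting term E_{DC} cancels the extra mean-field contributions generated by the \rho-dependence of H, leaving the TDKS equations (v_H[\rho] denotes the Hartree potential):

```latex
\frac{\delta L}{\delta \Psi_i^*} \;=\;
  -\,H[\rho]\,\Psi_i \;+\; i\hbar\,\frac{\partial \Psi_i}{\partial t}
  \;-\;\underbrace{\Big( v_H[\rho] + \int \rho\,\frac{\delta v_{xc}}{\delta \rho} \Big)\Psi_i}_{\text{from the } \rho\text{-dependence of } H}
  \;+\;\underbrace{\Big( v_H[\rho] + \int \rho\,\frac{\delta v_{xc}}{\delta \rho} \Big)\Psi_i}_{\text{from } -E_{DC}}
  \;=\; 0
\;\;\Longrightarrow\;\;
i\hbar\,\frac{\partial}{\partial t}\,\Psi_i(\mathbf{r}\,t) \;=\; H[\rho](\mathbf{r}\,t)\,\Psi_i(\mathbf{r}\,t).
```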
We now proceed by applying the same kind of approximations as were used in the derivation of the density functional theory based tight-binding (DFTB) method [40,41] from static DFT. To keep the presentation concise we refer to some reviews [38,39], which provide a more detailed description of the basic concepts, practical realization and accuracy of the ground state DFTB approach. Here, we only report aspects which are specific for the generalization to the time-dependent case.

In a first step, the Lagrangian is expanded around a reference density \rho_0(\mathbf{r}), \rho(\mathbf{r}\,t) = \rho_0(\mathbf{r}) + \delta\rho(\mathbf{r}\,t), which is given as a superposition of atomic (ground state) densities. In contrast to our earlier work [39], we now include terms up to second order in the density fluctuations \delta\rho:

    L \approx \sum_A \frac{1}{2} M_A \dot{R}_A^2 - \sum_i^{occ} \langle \Psi_i(\mathbf{r}\,t) |\, H[\rho_0](\mathbf{r}) - i\hbar \frac{\partial}{\partial t} \,| \Psi_i(\mathbf{r}\,t) \rangle    (3a)

    + \frac{1}{2} \int\!\!\int' \frac{\rho_0(\mathbf{r})\,\rho_0(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} - E_{xc}[\rho_0] + \int v_{xc}[\rho_0]\,\rho_0(\mathbf{r}) - E_{ii}    (3b)

    - \frac{1}{2} \int\!\!\int' \left[ \frac{1}{|\mathbf{r} - \mathbf{r}'|} + \frac{\delta v_{xc}[\rho](\mathbf{r}\,t)}{\delta\rho(\mathbf{r}'\,t)} \right] \delta\rho(\mathbf{r}\,t)\,\delta\rho(\mathbf{r}'\,t).    (3c)

Please note that in this expansion all contributions which are linear in \delta\rho are captured by the second term in Eq. (3a) through the TDKS states. The terms in Eq. (3b) can now be subsumed as E_{rep}, a sum of short ranged pair potentials, which depend only on the atomic species and the interatomic distance. Since E_{rep} is a functional of the time independent reference density \rho_0 only, it is exactly the same as used in the ground state DFTB scheme. The second order term of Eq.
(3c), which is the focus of this work, is approximated as follows:

    E_{2nd} = \frac{1}{2} \int\!\!\int' \left[ \frac{1}{|\mathbf{r} - \mathbf{r}'|} + \frac{\delta v_{xc}[\rho](\mathbf{r}\,t)}{\delta\rho(\mathbf{r}'\,t)} \right] \delta\rho(\mathbf{r}\,t)\,\delta\rho(\mathbf{r}'\,t) \approx \frac{1}{2} \sum_{AB} \Delta q_A(t)\,\gamma_{AB}\,\Delta q_B(t).    (4)

Here the \Delta q_A(t) denote atomic net Mulliken charges

    \Delta q_A(t) = q_A(t) - q_A^{\text{free atom}},    (5)
    q_A(t) = \frac{1}{2} \sum_i^{occ} \sum_{\mu \in A, \nu} \left[ b_{\mu i}^*(t)\,b_{\nu i}(t)\,S_{\mu\nu} + b_{\nu i}^*(t)\,b_{\mu i}(t)\,S_{\nu\mu} \right],

where the coefficients b_{\mu i}(t) are defined by the expansion of the TDKS states in a basis of non-orthogonal atomic orbitals \phi_\mu(\mathbf{r} - R_A),

    \Psi_i(\mathbf{r}\,t) = \sum_\mu b_{\mu i}(t)\,\phi_\mu(\mathbf{r} - R_A),    (6)

which build the overlap matrix S_{\mu\nu} = \langle \phi_\mu | \phi_\nu \rangle. Further, the function \gamma_{AB} in Eq. (4) interpolates between a pure Coulomb interaction for large interatomic distances and an element-specific constant in the atomic limit. This numerically evaluated parameter includes the effects of exchange and correlation and is directly related to the chemical hardness of the atomic species [41].

Taking the coefficients and nuclear positions as generalized coordinates, the evaluation of the Euler-Lagrange equations leads to the desired equations of motion. The electronic motion obeys

    \dot{b}_{\nu i} = - \sum_{\delta\mu} S^{-1}_{\nu\delta} \left[ \frac{i}{\hbar} H_{\delta\mu} + \sum_A \dot{R}_A \Big\langle \phi_\delta \Big| \frac{\partial}{\partial R_A} \phi_\mu \Big\rangle \right] b_{\mu i}    (7a)

with

    H_{\mu\nu} = \langle \phi_\mu | H[\rho_0] | \phi_\nu \rangle + \frac{1}{2} S_{\mu\nu} \sum_C (\gamma_{AC} + \gamma_{BC})\,\Delta q_C = H^0_{\mu\nu} + H^1_{\mu\nu}; \quad \forall\, \mu \in A,\ \nu \in B.    (7b)

In zeroth order the Hamiltonian reduces to the first term in Eq. (7b) and depends solely on the reference density \rho_0. For systems in the excited state or in charged or heteronuclear structures, the electronic density differs significantly from a simple superposition of atomic ground state densities. To a certain extent this difference is already captured at the zeroth order level, since the coefficients which solve Eq. (7a) correspond to a time-dependent density different from \rho_0. This is similar to the situation in empirical tight-binding schemes for the ground state, where even certain ionic crystals are sufficiently well described [42]. However, a consideration of the full Hamiltonian in Eq.
(7b) leads obviously to a more balanced treatment, because dynamical changes in the electron density are explicitly included in a self-consistent fashion. In this way, one can even hope to correctly describe the large amplitude motion induced by intense laser fields.

Solution of Eq. (7a) requires an iterative procedure with timesteps in the attosecond regime. A symplectic algorithm is used for this task [37], which is based on the Cayley representation of the time evolution operator and conserves the norm of the wavefunction exactly. Besides the Hamiltonian and overlap matrices (see Ref. [39] for details of the construction), Eq. (7a) contains also the nonadiabatic coupling matrix \langle \phi_\mu | \partial \phi_\nu / \partial R_A \rangle, in which all on-site elements (\phi_\mu, \phi_\nu on the same atom) are set to zero [43]. This allows one to relate the remaining elements to a simple derivative of the overlap matrix.

Variation of the Lagrangian with respect to the nuclear coordinates leads to the following expression for the forces:

    M_A \ddot{R}_A = - \sum_i^{occ} \sum_{\mu\nu} b_{\mu i}^* b_{\nu i} \left[ \frac{dH^0_{\mu\nu}}{dR_A} + \frac{dS_{\mu\nu}}{dR_A} \sum_B \gamma_{AB}\,\Delta q_B \right] + \left[ \frac{1}{2} \sum_i^{occ} \sum_{\mu\nu\delta\gamma} b_{\mu i}^* \frac{dS_{\mu\nu}}{dR_A} S^{-1}_{\nu\delta} H_{\delta\gamma}\,b_{\gamma i} + \text{c.c.} \right] - \Delta q_A \sum_B \frac{d\gamma_{AB}}{dR_A} \Delta q_B - \frac{dE_{rep}}{dR_A}.    (8)

Since the Hamiltonian is time dependent due to an external field or nuclear motion, the molecular orbital coefficients will in general represent a coherent superposition of different eigenstates of the system. In this case, the nuclei move in a mean potential according to Eq. (8), rather than being restricted to a particular Born-Oppenheimer surface as in conventional adiabatic MD approaches. In fact, due to the coupling of the EOM, energy can be freely exchanged between the electronic and ionic subsystems as long as the total energy of the system is conserved. Equations of motion that are equivalent to the ones reported here have been derived earlier by Saalmann and Schmidt [29] as well as Todorov [36]. Including the second order correction, they are solved here for the first time in an actual calculation.
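To make the propagation step concrete, here is a minimal numerical sketch (our own, not the authors' code): a Cayley (Crank-Nicolson) step for Eq. (7a) with fixed nuclei, so the nonadiabatic coupling term is dropped, with ℏ = 1 and the non-orthogonal basis handled by a Löwdin transformation, plus the Mulliken populations of Eq. (5) for a single occupied orbital. The two-orbital H and S matrices are hypothetical toy numbers.

```python
import numpy as np

def loewdin(S):
    """Return S^(1/2) and S^(-1/2) for a symmetric positive-definite overlap."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T, (V / np.sqrt(w)) @ V.T

def cayley_step(b, H, S_half, S_half_inv, dt):
    """One norm-conserving Cayley step for i db/dt = S^-1 H b (hbar = 1).

    In the Loewdin-orthogonalized basis c = S^(1/2) b the propagator
    (1 + i dt Heff/2)^-1 (1 - i dt Heff/2) is exactly unitary, so the
    norm b^dagger S b is conserved."""
    Heff = S_half_inv @ H @ S_half_inv
    n = len(b)
    c = S_half @ b
    c = np.linalg.solve(np.eye(n) + 0.5j * dt * Heff,
                        (np.eye(n) - 0.5j * dt * Heff) @ c)
    return S_half_inv @ c

def mulliken_charges(b, S, atom_of_orbital):
    """Mulliken population per atom, Eq. (5): q_A = sum_{mu in A} Re[b_mu^* (S b)_mu]."""
    pop = np.real(np.conj(b) * (S @ b))
    q = {}
    for mu, A in enumerate(atom_of_orbital):
        q[A] = q.get(A, 0.0) + pop[mu]
    return q

# toy diatomic: one orbital per atom (hypothetical numbers)
S = np.array([[1.0, 0.2], [0.2, 1.0]])
H = np.array([[-0.5, -0.1], [-0.1, -0.3]])
S_half, S_half_inv = loewdin(S)

b = np.array([1.0 + 0j, 0.0 + 0j])
b /= np.sqrt(np.real(np.conj(b) @ S @ b))        # normalize in the S metric
norm0 = np.real(np.conj(b) @ S @ b)

for _ in range(1000):                             # many attosecond-scale steps
    b = cayley_step(b, H, S_half, S_half_inv, dt=0.01)

norm1 = np.real(np.conj(b) @ S @ b)
q = mulliken_charges(b, S, atom_of_orbital=[0, 1])
```

The Löwdin route is one way to obtain exact norm conservation with a non-orthogonal basis; recomputing the Mulliken charges after each step is what feeds the second order Hamiltonian H^1 in a self-consistent propagation.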
Todorov pointed out that for an incomplete basis the force equation has to be augmented by additional velocity dependent terms, which should become important in high energy collisions. Interestingly, omission of these terms does not violate energy but only momentum conservation. Hence, it is useful to monitor the total momentum (electronic + ionic) of the system if Eq. (8) is used, as in every practical calculation the basis set is incomplete.

Finally, in order to simulate the interaction with electromagnetic fields, the vector potential \mathbf{A}(\mathbf{r}\,t) needs to be incorporated, which is done via minimal coupling, \mathbf{p} - \frac{e}{c}\mathbf{A}. A numerically efficient approximation was proposed by Graf and Vogl [45] and later by Allen [22] and is given by

    H_{\mu\nu}[\mathbf{r}, \mathbf{p}, \mathbf{A}(t)] = \exp\!\left[ \frac{ie}{\hbar c} (R_A - R_B) \cdot \mathbf{A}(t) \right] \times H_{\mu\nu}[\mathbf{r}, \mathbf{p}]; \quad \mu \in A,\ \nu \in B,    (9)

which relates the desired field dependent Hamiltonian to the already known matrix elements of the unperturbed one. Expression (9) was derived under the assumption that the radiation wavelength is much larger than the molecular system under study, which is usually well fulfilled for frequencies in the optical range. It should be mentioned that Eq. (9) in principle holds for arbitrarily strong fields, in contrast to the electric dipole approximation.

If the interest is just in calculating optical spectra, rather than the molecular motion initiated by a laser pulse, only Eq. (7a) needs to be solved for a fixed geometry. Following the approach of Yabana and Bertsch [17,46] the field in Eq. (9) is turned on only at a certain instant of time, which populates the complete manifold of excited states. The time dependent dipole moment d(t) can then be used to calculate the dynamic polarizability

    \alpha(\omega) = \frac{c}{eA} \int e^{i\omega t} \left[ d(t) - d(0) \right] dt,    (10)

and the dipole strength function S(\omega) = \frac{2\omega}{\pi}\,\Im\,\alpha(\omega) of the system; a quantity which can be directly compared to experimental spectra.

III. APPLICATIONS

A.
Optical spectrum of trans-butadiene

As a first application of our method, we examine a prototypical π-system, trans-butadiene (Fig. 1). The optical spectrum of this molecule has been the subject of numerous quantum chemical studies (see e.g. Ref. 48 and references therein). Recently, also a detailed investigation of the molecular dynamics in the excited state appeared [14]. Dou et al. employed the DFTB method described in this work without the second order correction. Our interest here is to analyze the implications of including this term.

[Fig. 2 caption, continued: "... [47]. Shown is an average over different molecular orientations with respect to the polarization of the vector potential. Results obtained in the DFTB linear-response implementation of TDDFT are shown as stick spectrum. The associated oscillator strengths have been normalized to the maximum of the highest energy peak."]

To this end, we first relaxed the molecule with the ground state DFTB method and recorded the optical spectrum according to the prescription given in Sec. II. After applying a vector potential of A = 0.0125 gauss cm, the Kohn-Sham orbitals were propagated for 38.7 fs with a time step of 12 as. Since the finite sampling introduces spurious negative parts in the imaginary part of the polarizability, the dipole moment was damped with a factor of e^{-kt} (k = 0.3 eV/ℏ), like in Ref. 17. This also simulates dephasing or other line broadening effects which would appear more naturally in a more complete theory.

The resulting spectrum with and without second order correction is shown in Fig. 2. As can be seen, the maximum absorption in the latter method is located at 4.21 eV. This is exactly the difference of the LUMO (lowest unoccupied molecular orbital) and the HOMO (highest occupied molecular orbital) energies of the ground state DFTB method. If the second order term is included in the calculation, the absorption is strongly blue shifted to 5.56 eV, which is in good agreement with the experimental value of 5.8 eV [49].
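The kick-and-Fourier-transform procedure just described can be sketched as follows. This is a toy signal, not the actual butadiene data; the prefactor c/(eA) of Eq. (10) is set to one and all units are arbitrary:

```python
import numpy as np

def dipole_strength(t, d, k=0.3, omegas=None):
    """Dynamic polarizability and dipole strength from a real-time dipole signal.

    Implements alpha(omega) ~ int e^{i omega t} [d(t) - d(0)] dt, Eq. (10),
    and S(omega) = (2 omega / pi) Im alpha(omega).  The signal is damped by
    e^{-kt} to suppress finite-sampling artifacts, as described in the text."""
    if omegas is None:
        omegas = np.linspace(0.1, 10.0, 2000)
    dt = t[1] - t[0]
    sig = (d - d[0]) * np.exp(-k * t)
    alpha = np.array([np.sum(np.exp(1j * w * t) * sig) * dt for w in omegas])
    return omegas, (2.0 * omegas / np.pi) * alpha.imag

# toy signal: a single excitation at omega0 (hypothetical numbers)
omega0 = 5.56
t = np.arange(0.0, 40.0, 0.01)
d = 0.2 + 0.05 * np.sin(omega0 * t)

omegas, S = dipole_strength(t, d, k=0.3)
peak = omegas[np.argmax(S)]          # peak recovered near omega0
```

The damping factor turns each excitation into a Lorentzian of width ~k, which is why the finite propagation time does not produce spurious negative strength.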
Along with the energy shift, a reduction of absorption strength is also observed. To better understand the origin of these changes, we also performed calculations with our implementation of the TDDFT linear response formalism [50]. In this approach, excited state singlet energies \omega_I are given by the solution of the following eigenvalue problem:

    \sum_{kl} \left[ \omega_{ij}^2\,\delta_{ik}\,\delta_{jl} + 2 \sqrt{\omega_{ij}}\,K_{ij,kl}\,\sqrt{\omega_{kl}} \right] F^I_{kl} = \omega_I^2\,F^I_{ij}.    (11)

Here the \omega_{ij} are energy differences between unoccupied orbitals j and occupied orbitals i, while the so called coupling matrix K describes the change of the SCF potential due to the induced density. As Eq. (11) shows, the effect of the coupling matrix is not only to shift the true excited state away from simple orbital energy differences, but also to couple different single-particle transitions. The explicit form of K is given by

    K_{ij,kl} = \int\!\!\int' \psi_i(\mathbf{r})\,\psi_j(\mathbf{r}) \left[ \frac{1}{|\mathbf{r} - \mathbf{r}'|} + \frac{\delta v_{xc}[\rho](\mathbf{r})}{\delta\rho(\mathbf{r}')} \right] \psi_k(\mathbf{r}')\,\psi_l(\mathbf{r}'),    (12)

which is nothing else than the second order term of Eq. (3c), when the induced density is expanded in particle-hole states. The results of the linear response calculations with and without the coupling matrix contribution are given as stick spectrum in Fig. 1. Obviously, there is a perfect match between the real time and linear response approaches to TDDFT, both in the energetical position of the states and the oscillator strength. This equivalence had to be expected, since the linear response approach amounts to a perturbative solution of the TDDFT equations in the small amplitude limit. However, to our knowledge this has so far never been shown in practical calculations.

B. C⁺ − C60 collisions

Ion-cluster collisions provide an ideal application field for approximate TDDFT molecular dynamics simulations. This is because the number of degrees of freedom is usually too large to be treated with first principles calculations and also nonadiabatic effects are strong and important.
Depending on the velocity of the projectile, collisions can induce vibrational or a combination of electronic and vibrational excitations of the cluster, where the latter type cannot be described in conventional Born-Oppenheimer dynamics. In this context, Kunert and Schmidt undertook a systematic investigation of ion-fullerene collisions and provided an explanation for seemingly conflicting experimental observations [34]. Their nonadiabatic quantum molecular dynamics (NA-QMD) method is essentially equivalent to the DFTB scheme described here in the zeroth order approximation. Accordingly, it is interesting to see whether the second order correction is of any benefit in these kinds of simulations.

[Fig. 3 caption fragment: "... a.u.) for randomly oriented fullerene cages."]

Special care had to be taken in the definition of the initial conditions of the EOM, since the C⁺ − C60 configuration does not correspond to the ground state of the system. For that reason, we performed separate ground state calculations for the two subsystems and combined the resulting KS orbitals to obtain the desired charge state. The initial ion-cluster distance was chosen large enough to prevent any interaction and the system was then left to evolve freely according to the EOM [Eq. (7a) and (8)].

Fig. 3 depicts the total kinetic energy loss ΔE in the center of mass system that the projectile experiences due to the collision. It can be directly compared to the results in Fig. 3 of Ref. 34, which were obtained for a single fixed collision geometry. As already shown there, the vibrational excitation of the fullerene dominates for smaller velocities (v < 0.1 a.u.), while mostly electronic excitation is responsible for the energy loss at larger impact energies. As the impact velocity increases, ΔE first rises, peaks around 0.05 a.u. and shows a weak increase beyond 0.1 a.u. in our calculations.
This is at variance with the results of Kunert and Schmidt [34], which claim velocity-independent excitation energies in the high velocity range. Taking the strong dependence on the collision geometry into account, an extensive phase space sampling would be necessary to resolve this issue, which is outside the scope of this work.

Turning now to a comparison of the predictions of DFTB with and without second order correction, we find only marginal differences in the results of both methods. Such a difference could have been expected for high impact velocities, but with such a large amount of energy deposited in the cluster, finer details of the electronic structure seem to have negligible influence. The second order correction has however implications for other observables. Fig. 4 shows the charge of the projectile after the collision. Here fractional charges need to be understood in the probabilistic interpretation of quantum mechanics, since in the simulations the system remains in a superposition of eigenstates with integer charges also asymptotically. For higher velocities, which correspond to higher electronic excitation as mentioned above, the charge given by the zeroth order DFTB method is significantly more negative than the second order one. This can be explained by a larger contribution of the asymptotic C⁻ − C60²⁺ state, which in the zeroth order approximation is located only slightly higher in energy than the initial C⁺ − C60⁰ state. As the results in Tab. I show, this energy ordering is in striking contrast with the one given by the second order DFTB method and experiment. Although calculations of charge transfer cross sections have been performed in a zeroth order scheme [32,33], the results of this section suggest that in general a more advanced treatment is absolutely necessary.

C. Protonated Schiff base photodynamics

As a last application we study the photodynamics of the retinal molecule, which is of special importance in the field of biology.
This chromophore is found in a variety of proteins, where it initiates quite different reactions in the cell. In bacteriorhodopsin and halorhodopsin for example, light absorption of retinal triggers the membrane transport of protons and chloride ions, respectively. In contrast, it starts a cascade of reactions that initiate the vision process in rhodopsin.

TABLE I: Energy ordering of different asymptotic states of the singly positively charged C−C60 system in eV. The DFTB energies with and without second order correction were obtained by separate calculations of the two subsystems and addition of the results. For C60, all calculations were performed at the DFTB optimized geometry of the neutral species. The experimental results were obtained from measured ionization potentials and electron affinities [51,52,53]. The most stable state of each method was set to zero energy.

In all these systems, the retinal is known to isomerize around a specific double bond. The quantum yield of the photoreaction is particularly high (Φ ≈ 0.7) and the deexcitation to the ground state occurs in no more than 200-500 fs [54,55]. Because of these unusual features, this system provides an interesting subject for a theoretical investigation.

For a complete understanding of the retinal photodynamics it would certainly be necessary to include the full protein environment in such an investigation. However, important information can already be drawn from the examination of small retinal analogues like the protonated Schiff bases (Fig. 5). These models share a polyene chain of alternating single and double bonds with retinal, as well as the positively charged NH2⁺ Schiff base group, which is crucial for the function of the chromophore in the protein. High level quantum chemical CASPT2 calculations on these analogues revealed that after absorption the system moves out of the Franck-Condon region along the C=C stretch normal coordinate (see Ref. 56 and references therein).
After inversion of single and double bonds, torsion around the central C=C bond sets in. A barrierless path then leads to a conical intersection of ground and excited state, where efficient deactivation to the photoproduct occurs. In line with Stark spectroscopy [57,58], CASPT2 theory predicts a large charge transfer from the Schiff base group to the other terminus upon excitation, which increases along the excited state pathway.

Recently, we investigated the excited state potential energy surface (PES) obtained from static DFTB calculations in the linear response formalism and found severe deviations from the CASPT2 results [59]. In fact, the only barrierless paths found to conical intersections with the ground state involved single rather than double bond isomerization (in accordance with ab initio TDDFT calculations). An intrinsic reaction coordinate connecting Franck-Condon point and correct conical intersection (as described in the CASSCF model) includes a significant barrier. Thus an efficient and ultrafast reaction seems to be unlikely at the DFT level of theory.

However, one should keep in mind that at finite temperature molecules possess a significant amount of kinetic energy already at the Franck-Condon point, which allows the system to sample a large fraction of the excited state PES. Hence, the minimum energy path might not necessarily provide a representative description of the photodynamical pathway. Moreover, nonadiabatic transitions are not restricted to conical intersections. They can also occur in regions where there is a finite gap between the ground and excited state surfaces, especially when the atomic velocity is high. Our interest is therefore to perform nonadiabatic molecular dynamics simulations of the PSB model system to see whether the discrepancies between DFT predictions and experiment remain at a full dynamical level.
A complete description of the photochemical process would in principle require a full phase space sampling prior to excitation. Since the maximum absorption is strongly geometry dependent, we nevertheless take only the relaxed geometry of the ground state minimum into account. At the Franck-Condon point random velocities corresponding to a temperature of 300 K are assigned to the atoms and the system is left to evolve freely without further constraints.

The excitation itself is induced by a Gaussian shaped laser pulse with a central frequency of 3.45 eV, which is slightly detuned with respect to the maximum absorption to avoid population of higher excited states close in energy. The maximum absorption itself is located at 3.88 eV in the second order DFTB method and agrees well with first principles TDDFT calculations [59] (4.03 eV) as well as CASPT2 [60] (4.02 eV) and experiment (3.85-4.25 eV [61]). Similar to the case of trans-butadiene, the excitation energy is strongly underestimated at 2.66 eV if the second order correction is not taken into account. Since the nonadiabatic excitation process depends strongly on the gap between ground and excited state PES, the DFTB method without electronic self-consistency will therefore provide an unrealistic description of the photodynamics and will not be used here. The duration and fluence of the laser pulse were chosen to be 5.9 fs and 5.4 mJ/cm² (A = 0.7 gauss cm). For these parameters, the total energy of the system rises [...].

The results of the simulations show that the initial dynamics on the excited PES are dominated by C-C stretch motions with large amplitudes up to 0.15 Å. This is in agreement with resonant Raman studies on the PSB in solution and in the rhodopsin protein [63,64,65]. In contrast to the mentioned CASPT2 calculations however, the bond alternation is only inverted for roughly half of the trajectories, as shown in Fig. 6.
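The two ingredients of the excitation step described above, a Gaussian-envelope vector potential and the Peierls-type phase factor of Eq. (9), can be sketched as follows. This is our own minimal illustration with e/(ℏc) set to one and a hypothetical two-orbital Hamiltonian; apart from A0 = 0.7, the numbers are invented:

```python
import numpy as np

def gaussian_pulse(t, t0, sigma, omega, A0):
    """Gaussian-envelope vector potential A(t) along a fixed polarization axis."""
    return A0 * np.exp(-0.5 * ((t - t0) / sigma) ** 2) * np.cos(omega * (t - t0))

def peierls_hamiltonian(H0, x, A_t):
    """Field-dependent Hamiltonian of Eq. (9) with e/(hbar c) = 1:
    H_mu_nu(t) = exp(i (x_A - x_B) A(t)) H0_mu_nu for mu on atom A, nu on atom B.

    x : coordinate of each orbital's parent atom, projected on the
        polarization axis."""
    phase = np.exp(1j * np.subtract.outer(x, x) * A_t)
    return phase * H0

# two-orbital toy Hamiltonian (hypothetical numbers), atoms at x = 0 and 1.4
H0 = np.array([[-0.5, -0.2],
               [-0.2, -0.3]])
x = np.array([0.0, 1.4])

A_peak = gaussian_pulse(0.0, t0=0.0, sigma=2.0, omega=0.5, A0=0.7)  # envelope maximum
Ht = peierls_hamiltonian(H0, x, A_peak)
```

Since the phase factors have unit modulus and vanish for orbitals on the same atom, the field changes only the phases of the inter-atomic matrix elements while the Hamiltonian stays Hermitian, which is what allows Eq. (9) to be used at arbitrary field strength.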
Moreover, the time-averaged bond alternation of 0.060 Å is only slightly reduced with respect to the ground state minimum (0.063 Å). Considering now the dihedral angle which represents the torsion around the central double bond, Fig. 7 shows that none of the 100 trajectories resulted in a successful isomerization within 1 ps. We find much larger amplitudes for the rotation around single bonds, with torsion angles up to 80° for certain trajectories, although also here no isomerization is completed in the simulation time. This preference of single over double bond isomerization in DFT based methods was already found in the static investigation of Ref. 59.

Another prediction of CASPT2 theory, which is in nice agreement with Stark spectroscopy, has already been mentioned. The S0 − S1 excitation and the subsequent motion on the S1 PES is known to involve a large charge transfer away from the Schiff base group. We, however, do not observe any significant charge transfer in our simulations.

Finally, it is interesting to analyze whether deexcitation to the ground state occurred. In Fig. 8 the excitation energy of the system is shown, which is given by the difference of the time-dependent and ground state energy at the same geometry. Directly after the end of the laser pulse, a small reduction of the excitation energy is observed (≈ 0.3 eV), which is related to an elongation of all bonds in the PSB model. There is little change from this point on, and nonadiabatic transitions, which would manifest in a sharp drop of the excitation energy, do not occur. In our DFT based treatment, deactivation is therefore predicted to happen on longer timescales, presumably involving the slower processes of internal conversion and fluorescence. At any rate, ultrafast isomerization with a concomitant intersection of ground and excited state is not found to be the dominant process. This is in stark contrast to the results of Vreven et al.
[66], who showed that double bond isomerization can occur in less than 100 fs for the model at hand. The simulations of this section could be extended by computing a larger number of trajectories or a longer propagation time. Considering the narrow distribution of the results presented in Figs. 6 to 8, it is not very likely that an improved sampling would reveal new information. Extension of the simulation time might seem advantageous in light of the experiments by Logunov et al. [67], who measured an excited state lifetime of 2-3 ps for the full retinal chromophore [Fig. 5 (top)] in solution. However, for the shorter PSB model used in this study, the excited state PES has been found to be much steeper [56], in line with the mentioned study of Vreven et al. [66]. Hence, the photochemical process should be completed in the chosen simulation time of 1 ps. To summarize, we find that our simulations disagree in most aspects with CASPT2 results and experiment. At the same time, we confirmed the static investigations of the DFT based potential energy surfaces from Ref. 59 by dynamical calculations. Given that (i) time-dependent DFTB and first principles TDDFT, as well as (ii) TDDFT with different exchange-correlation functionals (local, gradient corrected, or hybrid) yield qualitatively the same picture [59], a correct DFT based description of the retinal photodynamics is highly unlikely. Only very recently, failures of TDDFT in the description of inter-molecular charge transfer states were reported [68]. Exchange-correlation functionals that address this shortcoming have also been recently proposed [69,70,71]. It will be interesting to see whether these developments can also remedy the problems with intra-molecular charge transfer found here.

IV. SUMMARY

In this work, we presented a mixed quantum-classical approach to simulate the coupled dynamics of electrons and nuclei.
The method is based on a second order expansion of the TDDFT Lagrangian around a suitable reference density. We showed that the inclusion of the second order term improves both qualitatively and quantitatively the optical spectrum of molecules. For trans-butadiene a strong blueshift of the absorption was observed together with a significant reduction of oscillator strength. In this context, the analogy with the linear response approach to TDDFT revealed that the zeroth order treatment of the Lagrangian corresponds to an uncoupled response that neglects collective effects. Moreover, experience with the linear response implementation suggests that this absorption shift is quite general and especially large for π−π* transitions. For n−π* excitations, however, the coupling is usually weak and realistic results might already be achieved in the zeroth order approximation. In the simulations of high energy collisions of Sec. III B, there is little difference in the predictions of both schemes when the interest is in energy transfer only. This is because the process is dominated by vibrational rather than electronic excitation in the regime of low energy transfer, where the different level structure could be resolved. Considering now the charge transfer, we found important differences for the C+−C60 system, which were attributed to the problematic description of the asymptotic states in the zeroth order scheme. Charge transfer also played an important role in the nonadiabatic molecular dynamics simulations of the protonated Schiff base. In contrast to experiment, we found no isomerization. This result is negative, but, we think, an important one. It should be stressed that this failure is not introduced by our approximations but is already inherent in TDDFT itself.

For the special case of C+−C60 collisions we performed calculations for different values of the impact velocity (v = 0.01 … 0.45 a.u.) and impact parameter (b = 2.0, …).

For a total simulation time of 1.1 ps, 100 trajectories were propagated with a timestep of 12.1 as. This guaranteed an energy conservation of ΔE/E ≈ 10−8. With these parameters, one trajectory took about 22 minutes of CPU time on an Intel Xeon 3.06 GHz processor.

FIG. 1: Schematic illustration of trans-butadiene (C4H6).

FIG. 2: Dipole strength of trans-butadiene as given by the time-dependent DFTB method in zeroth and second order.

FIG. 3: Total kinetic energy loss ΔE of the projectile in the center of mass system for an impact parameter of b = 2.0 a.u. For each velocity three trajectories with random cage orientation were calculated; error bars correspond to the standard deviation. Dark squares: second order, open circles: zeroth order DFTB results, open triangles: loss due to electronic excitation, obtained from the difference of time-dependent and ground state energy in the zeroth order DFTB scheme.

FIG. 4: Charge of the carbon atom after the collision with C60 for different values of the impact velocity and an impact parameter of b = 7.5 a.u. For each velocity three trajectories with random cage orientation were calculated; error bars correspond to the standard deviation. Dark squares: second order, open circles: zeroth order DFTB results.

FIG. 5: Top: Structure of 11-cis retinal, which is found in dark adapted rhodopsin and transforms to the all-trans form upon absorption of light. Bottom: Protonated Schiff base model used in this study.

FIG. 6: Bond alternation in the PSB model system for all calculated trajectories, estimated as the mean bond length difference between neighboring C-C single and double bonds.

FIG. 7: Torsion angle around the central double bond of the PSB model for all calculated trajectories.

FIG. 8: Excitation energy of the PSB model versus time in eV. The inset shows the transition to the excited state due to the applied laser pulse.
Acknowledgements

The authors thank G. Seifert and M. Wanko for fruitful discussion and careful reading of the manuscript.

[1] A. Zangwill and P. Soven, Phys. Rev. A 21, 1561 (1980).
[2] R. Bauernschmitt and R. Ahlrichs, Chem. Phys. Lett. 256, 454 (1996).
[3] I. Vasiliev, S. Ögüt, and J. R. Chelikowsky, Phys. Rev. Lett. 82, 1919 (1999).
[4] P. L. de Boeij, F. Koostra, J. A. Berger, R. van Leeuwen, and J. G. Sniders, J. Chem. Phys. 115, 1995 (2001).
[5] F. Furche, R. Ahlrichs, C. Wachsmann, E. Weber, A. Sobanski, F. Vogtle, and S. Grimme, J. Am. Chem. Soc. 122, 1717 (2000).
[6] K. Yabana and G. F. Bertsch, Phys. Rev. A 60, 1271 (1999).
[7] D. Heringer, T. A. Niehaus, M. Wanko, and Th. Frauenheim, to be published.
[8] S. J. A. van Gisbergen, J. G. Snijders, and E. J. Baerends, Chem. Phys. Lett. 259, 599 (1996).
[9] S. J. A. van Gisbergen, V. P. Osinga, O. V. Gritsenko, R. van Leeuwen, J. G. Snijders, and E. J. Baerends, J. Chem. Phys. 105, 3142 (1996).
[10] S. J. A. van Gisbergen, J. G. Snijders, and E. J. Baerends, Phys. Rev. Lett. 78, 3097 (1997).
[11] C. A. Ullrich, S. Erhard, and E. K. U. Gross, in Super Intense Laser Atom Physics IV, edited by H. G. Muller and M. V. Fedorov (Kluwer, 1996), pp. 267-284.
[12] X. Chu and S. I. Chu, Phys. Rev. A 63, 023411 (2001).
[13] A. Castro, M. A. L. Marques, J. A. Alonso, G. F. Bertsch, and A. Rubio, Eur. Phys. J. D 28, 211 (2004).
[14] Y. S. Dou, B. R. Torralva, and R. E. Allen, Chem. Phys. Lett. 392, 352 (2004).
[15] E. Runge and E. K. U. Gross, Phys. Rev. Lett. 52, 997 (1984).
[16] M. E. Casida, in Recent Advances in Density Functional Methods, Part I, edited by D. P. Chong (World Scientific, Singapore, 1995), p. 155.
[17] K. Yabana and G. F. Bertsch, Phys. Rev. B 54, 4484 (1996).
[18] Here it is assumed that spin-orbit coupling is neglected.
[19] C. Y. Yam, S. Yokojima, and G. H. Chen, Phys. Rev. B 68, 153105 (2003).
[20] F. Calvayrac, P. G. Reinhard, and E. Suraud, Eur. Phys. J. D 9, 389 (1999); P. G. Reinhard and E. Suraud, J. Clust. Sci. 10, 239 (1999); E. Suraud and P. G. Reinhard, Phys. Rev. Lett. 85, 2296 (2000).
[21] T. Brixner, N. H. Damrauer, and G. Gerber, in Adv. in At., Mol. and Opt. Phys. (Academic Press, 2001), Vol. 46, pp. 1-54.
[22] R. E. Allen, Phys. Rev. B 50, 18629 (1994).
[23] B. Torralva, T. A. Niehaus, M. Elstner, S. Suhai, Th. Frauenheim, and R. E. Allen, Phys. Rev. B 64, 153105 (2001).
[24] B. R. Torralva and R. E. Allen, J. Mod. Opt. 49, 593 (2002).
[25] Y. S. Dou and R. E. Allen, J. Chem. Phys. 119, 10658 (2003).
[26] Y. S. Dou, B. R. Torralva, and R. E. Allen, J. Mod. Opt. 50, 2615 (2003).
[27] Y. S. Dou, B. R. Torralva, and R. E. Allen, J. Phys. Chem. A 107, 8817 (2003).
[28] Y. S. Dou and R. E. Allen, Chem. Phys. Lett. 378, 323 (2003).
[29] U. Saalmann and R. Schmidt, Z. Phys. D 38, 153 (1996).
[30] U. Saalmann and R. Schmidt, Phys. Rev. Lett. 80, 3213 (1998).
[31] R. Schmidt, O. Knospe, and U. Saalmann, Nuovo Cimento Soc. Ital. Fis. A 110, 1201 (1997).
[32] O. Knospe, J. Jellinek, U. Saalmann, and R. Schmidt, Eur. Phys. J. D 5, 1 (1999).
[33] O. Knospe, J. Jellinek, U. Saalmann, and R. Schmidt, Phys. Rev. A 61, 022715 (2000).
[34] T. Kunert and R. Schmidt, Phys. Rev. Lett. 86, 5258 (2001).
[35] T. Kunert and R. Schmidt, Eur. Phys. J. D 25, 15 (2003).
[36] T. N. Todorov, J. Phys.: Condens. Matter 13, 10125 (2001).
[37] T. A. Niehaus, dissertation, University of Paderborn, http://www.ub.uni-paderborn.de (2001).
[38] T. Frauenheim, G. Seifert, M. Elstner, Z. Hajnal, G. Jungnickel, D. Porezag, S. Suhai, and R. Scholz, phys. stat. sol. (b) 217, 41 (2000).
[39] T. Frauenheim, G. Seifert, M. Elstner, T. Niehaus, C. Köhler, M. Amkreutz, M. Sternberg, Z. Hajnal, A. Di Carlo, and S. Suhai, J. Phys.: Condens. Matter 14, 3015 (2002).
[40] D. Porezag, Th. Frauenheim, Th. Köhler, G. Seifert, and R. Kaschner, Phys. Rev. B 51, 12947 (1995).
[41] M. Elstner, D. Porezag, G. Jungnickel, J. Elsner, M. Haugk, Th. Frauenheim, S. Suhai, and G. Seifert, Phys. Rev. B 58, 7260 (1998).
[42] H. M. Polatoglou and M. Methfessel, Phys. Rev. B 37, R10403 (1988).
[43] P. Kürpick, W.-D. Sepp, and B. Fricke, Phys. Rev. A 51, 3693 (1995).
[44] E. K. U. Gross and W. Kohn, Adv. Quantum Chem. 21, 255 (1990).
[45] M. Graf and P. Vogl, Phys. Rev. B 51, 4940 (1995).
[46] M. A. L. Marques, A. Castro, G. F. Bertsch, and A. Rubio, Comp. Phys. Comm. 151, 60 (2003).
[47] Please note that ionization, which sets in beyond 9.07 eV experimentally [51], can not be correctly described in a localized basis. In this context, a grid representation of the wavefunction together with an implementation of absorbing boundary conditions like in Ref. 17 is more appropriate. We would also like to mention that the energetical position of the second state around 9 eV is likely to be in error, since this excitation involves a transition to an unbound orbital at the minimal basis level. In spite of these problems, the full spectrum is shown in order to demonstrate the general principle.
[48] C.-P. Hsu, S. Hirata, and M. Head-Gordon, J. Phys. Chem. A 105, 451 (2001).
[49] J. P. Doering, J. Chem. Phys. 70, 3902 (1979).
[50] T. A. Niehaus, S. Suhai, F. Della Sala, P. Lugli, M. Elstner, G. Seifert, and Th. Frauenheim, Phys. Rev. B 63, 085108 (2001).
[51] Ion Energetics Data in NIST Chemistry WebBook, NIST Standard Reference Database Number 69, edited by P. J. Linstrom and W. G. Mallard (National Institute of Standards and Technology, Gaithersburg, MD, 2001), http://webbook.nist.gov.
[52] D. M. Cox, S. Behal, M. Disko, S. M. Gorun, M. Greaney, C. S. Hsu, E. B. Kollin, J. Millar, J. Robbins, W. Robbins, R. D. Sherwood, and P. Tindall, J. Am. Chem. Soc. 113, 2940 (1991).
[53] H. Ajie, M. M. Alvarez, S. J. Anz, R. D. Beck, F. Dieterich, K. Fostiropoulos, D. R. Huffman, W. Krätschmer, Y. Rubin, K. E. Schriver, D. Sensharma, and R. L. Whetten, J. Phys. Chem. 94, 8630 (1990).
[54] A. Aharoni, B. Hou, N. Friedman, M. Ottolenghi, I. Rousso, S. Ruhman, M. Sheves, T. Ye, and Q. Zhong, Biochemistry (Moscow) 66, 1210 (2001).
[55] H. Kandori, Y. Shichida, and T. Yoshizawa, Biochemistry (Moscow) 66, 1210 (2001).
[56] M. Garavelli, F. Bernardi, M. Olivucci, T. Vreven, S. Klein, P. Celani, and M. A. Robb, Faraday Discuss. 110, 51 (1998).
[57] R. Mathies and L. Stryer, Proc. Natl. Acad. Sci. USA 73, 2169 (1976).
[58] M. Ponder and R. Mathies, J. Phys. Chem. 87, 5090 (1983).
[59] M. Wanko, M. Garavelli, F. Bernardi, T. A. Niehaus, T. Frauenheim, and M. Elstner, J. Chem. Phys. 120, 1674 (2004).
[60] M. Garavelli, F. Bernardi, M. Robb, and M. Olivucci, Int. J. Photoenergy 4, 57 (2002).
[61] The experimental values from Ref. 62 correspond to similar but not completely equivalent structures. See also the discussion of this point in Ref. 59.
[62] M. Arnaboldi, M. G. Motto, K. Tsujimoto, V. Balogh-Nair, and K. Nakanishi, J. Am. Chem. Soc. 101, 7082 (1997); V. Balogh-Nair, J. D. Carriker, B. Honig, V. Kamat, M. G. Motto, K. Nakanishi, R. Sen, M. Sheves, M. Arnaboldi, and K. Tsujimoto, Photochem. Photobiol. 33, 483 (1981); R. F. Chields, G. S. Shaw, and R. E. Wasylishen, J. Am. Chem. Soc. 109, 5362 (1987).
[63] R. Mathies, T. B. Freedman, and L. Stryer, J. Mol. Biol. 109, 367 (1977).
[64] S. W. Lin, M. Groesbeek, I. van der Hoef, P. Verdegem, J. Lugtenburg, and R. A. Mathies, J. Phys. Chem. 102, 2787-2806 (1998).
[65] R. A. Mathies and J. Lugtenburg, in The Primary Photoreaction of Rhodopsin, Handbook of Biological Physics, edited by D. G. Stavenga, W. J. de Grip, and E. N. Pugh Jr. (Elsevier Science B.V., 2000), Vol. 3, Chap. 2.
[66] T. Vreven, F. Bernardi, M. Garavelli, M. Olivucci, M. A. Robb, and H. B. Schlegel, J. Am. Chem. Soc. 119, 12687 (1997).
[67] S. L. Logunov, L. Song, and M. A. El-Sayed, J. Phys. Chem. 100, 18586 (1996).
[68] A. Dreuw, J. L. Weisman, and M. Head-Gordon, J. Chem. Phys. 119, 2943 (2003).
[69] Y. Tawada, T. Tsuneda, and S. Yanagisawa, J. Chem. Phys. 120, 8425 (2004).
[70] O. Gritsenko and E. J. Baerends, J. Chem. Phys. 121, 656 (2004).
[71] T. Yanai, D. P. Tew, and N. C. Handy, Chem. Phys. Lett. 393, 51 (2004).
Clarissa Family Age from the Yarkovsky Effect Chronology
Vanessa C. Lowry (University of Central Florida, 4000 Central Florida Blvd, Orlando, FL 32816, USA)
David Vokrouhlický (Astronomical Institute, Charles University, V Holešovičkách 2, 18000 Prague 8, Czech Republic)
David Nesvorný (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302, USA)
Humberto Campins (University of Central Florida, 4000 Central Florida Blvd, Orlando, FL 32816, USA)
The Clarissa family is a small collisional family composed of primitive C-type asteroids. It is located in a dynamically stable zone of the inner asteroid belt. In this work we determine the formation age of the Clarissa family by modeling planetary perturbations as well as thermal drift of family members due to the Yarkovsky effect. Simulations were carried out using the SWIFT-RMVS4 integrator modified to account for the Yarkovsky and Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effects. We ran multiple simulations starting with different ejection velocity fields of fragments, varying the proportion of initially retrograde spins, and also testing different Yarkovsky/YORP models. Our goal was to match the observed orbital structure of the Clarissa family, which is notably asymmetrical in the proper semimajor axis, a_p. The best fits were obtained with initial ejection velocities ≲ 20 m s−1 of diameter D ≃ 2 km fragments, a ∼4:1 preference for spin-up by YORP, and assuming that 80% of small family members initially had retrograde rotation. The age of the Clarissa family was found to be t_age = 56 ± 6 Myr for the assumed asteroid density ρ = 1.5 g cm−3. A small variation of the density to a smaller or larger value would lead to slightly younger or older age estimates, respectively.
doi: 10.3847/1538-3881/aba4af
arXiv: 2009.06030 (https://arxiv.org/pdf/2009.06030v1.pdf)
Draft version, September 15, 2020. Typeset using LaTeX default style in AASTeX63.
This is the first case where the Yarkovsky effect chronology has been successfully applied to an asteroid family younger than 100 Myr.

INTRODUCTION

Asteroid families consist of fragments produced by catastrophic and cratering impacts on parent bodies (see Nesvorný et al. 2015 for a review). The fragments produced in a single collision, known as family members, share similar proper semimajor axes (a_p), proper eccentricities (e_p), and proper inclinations (i_p) (Knežević et al. 2002). Family members are also expected to have spectra that indicate a mineralogical composition similar to that of the parent body (Masiero et al. 2015). After their formation, families experience collisional evolution (Marzari et al. 1995), which may cause them to blend into the main belt background, and evolve dynamically (Bottke et al. 2001). Since the collisional lifetime of a 2 km sized body in the main belt is greater than 500 Myr (Bottke et al. 2005), which is nearly 10 times longer than the Clarissa family's age (Section 4), we do not need to account for collisional evolution. Instead, we only consider dynamical evolution of the Clarissa family to explain its current orbital structure and constrain its formation conditions and age.

The Clarissa family is one of the small, primitive (C or B type in asteroid taxonomy) families in the inner asteroid belt (Morate et al. 2018). Of these families, Clarissa has the smallest extent in semimajor axis, suggesting that it might be the youngest. This makes it an interesting target of dynamical study. Before discussing the Clarissa family in depth, we summarize what is known about its largest member, asteroid (302) Clarissa, shown in Figure 1. Analysis of dense and sparse photometric data shows that (302) Clarissa has a retrograde spin with rotation period P = 14.4797 hr, pole ecliptic latitude −72°, and two possible solutions for pole ecliptic longitude, 28° or 190° (Hanuš et al. 2011).
The latter solution was excluded once information from stellar occultations became available (Ďurech et al. 2010, 2011). The stellar occultation also provided a constraint on the volume-equivalent diameter D = 43 ± 4 km of (302) Clarissa (Ďurech et al. 2010, 2011), improving on an earlier estimate of 39 ± 3 km from the analysis of IRAS data (Tedesco et al. 2004).

Figure 1. Three-dimensional shape of (302) Clarissa from light-curve inversion (Ďurech et al. 2010; Hanuš et al. 2011; https://astro.troja.mff.cuni.cz/projects/damit/). Equatorial views along the principal axes of inertia are shown in the left and middle panels. The polar view along the shortest inertia axis is shown on the right (view from the direction of the rotation pole).

The 179 family members of the Clarissa family are tightly clustered around (302) Clarissa. The synthetic proper elements of the Clarissa family members were obtained from the Planetary Data System (PDS) node (Nesvorný 2015). Figure 2 shows the projection of the Clarissa family onto the (a_p, e_p) and (a_p, sin i_p) planes. For comparison, we also indicate in Fig. 2 the initial distribution of fragments if they were ejected at speeds equal to the escape speed from (302) Clarissa. To generate these initial distributions we adopted the argument of perihelion ω ≃ 90° and true anomaly f ≃ 90°, both given at the moment of the parent body breakup (Zappalà et al. 1984; Nesvorný et al. 2006a; Vokrouhlický et al. 2006a) (see Appendix A). Other choices of these (free) parameters would lead to ellipses in Figure 2 that would be tilted in the (a_p, e_p) projection and/or vertically squashed in the (a_p, i_p) projection (Vokrouhlický et al. 2017a). Interestingly, the areas delimited by the green ellipses in Fig. 2 contain only a few known Clarissa family members. We interpret this as a consequence of the dynamical spreading of the Clarissa family by the Yarkovsky effect.
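For orientation, the escape speed from (302) Clarissa quoted later in the text follows directly from D and an assumed bulk density. A minimal sketch (our own illustration; ρ = 1.5 g cm−3 is the value adopted throughout the paper):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_speed(diameter_m, rho_kg_m3):
    """v_esc = sqrt(2 G M / R) for a homogeneous sphere of given diameter and density."""
    r = 0.5 * diameter_m
    mass = rho_kg_m3 * (4.0 / 3.0) * math.pi * r**3
    return math.sqrt(2.0 * G * mass / r)

# (302) Clarissa: D = 43 km, bulk density 1.5 g/cm^3
v_esc = escape_speed(43.0e3, 1500.0)
```

This recovers v_esc ≈ 20 m s−1, and the ±4 km uncertainty in D propagates linearly into the ±1.8 m s−1 quoted in the Figure 2 caption.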
Immediately following the impact on (302) Clarissa, the initial spread of fragments reflects their ejection velocities. We assume that the Clarissa family was initially much more compact than it is now (e.g., the green ellipses in Fig. 2). As the family members drifted by the Yarkovsky effect, the overall size of the family in a_p slowly expanded. It is apparent that the Clarissa family has undergone Yarkovsky drift, since there is a depletion of asteroids in the central region of the family in Figure 3. There are no major resonances in the immediate orbital neighborhood of (302) Clarissa. The e_p and i_p values of family members therefore remained initially unchanged. Eventually, the family members reached the principal mean motion resonances, most notably the J4-S2-1 three-body resonance at 2.398 au (Nesvorný & Morbidelli 1998), which can change e_p and i_p. This presumably contributed to the present orbital structure of the Clarissa family, where members with a_p < 2.398 au have significantly larger spread in e_p and i_p than those with a_p > 2.398 au. Note, in addition, that there are many more family members sunward from (302) Clarissa relative to those on the other side (Fig. 3). We discuss this issue in more detail in Section 4.

Figure 3 shows the absolute magnitudes H of family members as a function of a_p. We use the mean WISE albedo of the Clarissa family, p_V = 0.056 ± 0.017 (Masiero et al. 2013, 2015), to convert H to D (shown on the right ordinate). As often seen in asteroid families, the small members of the Clarissa family are more dispersed in a_p than the large members. The envelope of the distribution in (a_p, H) is consequently "V" shaped. The small family members also concentrate toward the extreme a_p values, while there is a lack of asteroids in the family center, giving the family the appearance of "ears".
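The H-to-D conversion mentioned above is the standard absolute-magnitude/albedo relation D(km) = 1329 p_V^{-1/2} 10^{-H/5}; a short sketch with the family's mean WISE albedo (the H = 17 input is just an illustrative value for a small member):

```python
import math

def h_to_diameter_km(h_mag, p_v):
    """Standard conversion D(km) = 1329 / sqrt(p_V) * 10^(-H/5)."""
    return 1329.0 / math.sqrt(p_v) * 10.0 ** (-h_mag / 5.0)

# mean WISE albedo of the Clarissa family and an illustrative small-member magnitude
d_km = h_to_diameter_km(17.0, 0.056)
```

An H = 17 member thus corresponds to D ≈ 2 km, the characteristic fragment size modeled later in the paper.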
This feature has been shown to be a consequence of the YORP effect, which produces a perpendicular orientation of the spin axis relative to the orbital plane and maximizes the Yarkovsky drift (e.g., Vokrouhlický et al. 2006a). Notably, the observed spread of small family members exceeds, by at least a factor of two, the spread due to the escape velocity from (302) Clarissa, and the left side of the family in Fig. 3 is overpopulated by a factor of ∼4.

Figure 2. (302) Clarissa is highlighted by the green diamond. The locations of principal mean motion resonances are indicated by the gray shaded regions, with the three-body J4-S2-1 resonance on the left and the exterior M1/2 resonance with Mars on the right. The resonance widths were computed using software available from Gallardo (2014) (see also Nesvorný & Morbidelli 1998). It can be noted that the dispersion of the fragments surrounding (302) Clarissa is narrow in e_p, but sunward of the J4-S2-1 resonance fragments are more dispersed in e_p. The red symbols mark potential interlopers in the family based on their location in the (a_p, H) plane (see Figure 3). The green ellipses indicate a region in proper element space where the initial fragments would land assuming: (i) isotropic ejection from the parent body, (ii) f ≃ ω ≃ 90° (see Appendix A), and (iii) a velocity of 20 m s−1. This is equal to the escape velocity from (302) Clarissa (v_esc = 19.7 ± 1.8 m s−1 for D = 43 ± 4 km and bulk density ρ = 1.5 g cm−3).

The solid gray curves in the bottom panel of Figure 3 delimit the boundaries wherein most family members are located. The curves in the figure are calculated from the equation

H = 5 log10 (|a_p − a_c| / C),    (1)
(2015), the constant C is an expression of (i) the ejection velocity field with velocities inversely proportional to the size, and (ii) the maximum Yarkovsky drift of fragments over the family age. It is difficult to decouple these two effects without detailed modeling of the overall family structure (Sections 3 and 4). Ignoring (i), we can crudely estimate the Clarissa family age. For that we use t age 1 Gy C 10 −4 au a 2.5 au 2 ρ 2.5 g cm −3 0.2 p V 1/2 (2) from Nesvorný et al. (2015), where ρ is the asteroid bulk density and p V is the visual albedo. For the Clarissa family we adopt C = (5 ± 1) ×10 −6 au, and values typical for a C-type asteroid: ρ = 1.5 g cm −3 , p V = 0.05, and find t age 56 ± 11 Myr. Using similar arguments Paolicchi et al. (2019) estimated that the Clarissa family is ∼ 50-80 My old. Furthermore, Bottke et al. (2015) suggested t age 60 Myr for the Clarissa family, but did not attach an error bar to this value. Objects residing far outside of the curves given by Eq. (1) are considered interlopers (marked red in Figs. 2 and 3) and are not included in the top panel of Fig. 3. Further affirmation that these objects are interlopers could be obtained from spectroscopic studies (e.g., demonstrating that they do not match the family type; Vokrouhlický et al. 2006c). The spectroscopic data for the Clarissa family are sparse, however, and do not allow us to confirm interlopers. We mention asteroid (183911) 2004 CB100 (indicated by a red triangle) which was found to be a spectral type X (likely an interloper). Asteroid (36286) 2000 EL14 (indicated by a red square in Fig. 3) has a low albedo p V 0.06 in the WISE catalog (Masiero et al. 2011) and Morate et al. (2018) found it to be a spectral type C. Similarly, asteroid (112414) 2000 NV42 (indicated by a red circle) was found to be a spectral type C. These bodies may be background objects although the background of primitive bodies in the inner main belt is not large. 
The absolute magnitudes H may be determined rather poorly (according to Pravec et al. (2012), the accumulated uncertainty in H can reach up to a magnitude), but it is not clear why only these two bodies of spectral type C would suffer from this bias. The absolute magnitude distribution of Clarissa family members can be approximated by a power law, N(<H) ∝ 10^{γH} (Vokrouhlický et al. 2006c), with γ = 0.75 (Fig. 4). The relatively large value of γ and the large size of (302) Clarissa relative to other family members are indicative of a cratering event on (302) Clarissa. The significant flattening of (302) Clarissa in the northern hemisphere (Fig. 1) may be related to the family-forming event (e.g., compaction after a giant cratering event). This is only speculation, however, and we caution the reader about the uncertainties in shape modeling. In the next section, we describe numerical simulations that were used to explain the present orbital structure of the Clarissa family and determine its formation conditions and age. We take particular care to demonstrate the strength of the J4-S2-1 resonance and its effect on the family structure inside of 2.398 au.

NUMERICAL MODEL

As a first step toward setting up the initial conditions, we integrated the orbit of (302) Clarissa over 1 Myr. We determined the moment in time when the argument of perihelion of (302) Clarissa reached 90° (note that the argument is currently near 54°). Near that epoch we followed the asteroid along its orbit until the true anomaly reached 90° (Figure 5). This was done to have an orbital configuration compatible with ω ≃ f ≃ 90° (see Appendix A for more comments), which we used to plot the ellipses in Fig. 2. At that epoch we recorded (302) Clarissa's heliocentric position vector R and velocity V, and added a small velocity change δV to the latter. This represents the initial ejection speed of individual members.
From that we determined the orbital elements using the Gauss equations with f = 90° and ω = 90° (Zappalà et al. 1996). We generated three distributions with δV = 10, 20, and 30 m s^-1 to probe the dependence of our results on this parameter (ejection directions were selected isotropically). Note that δV = 20 m s^-1 best matches the escape speed from (302) Clarissa. The assumption of a constant ejection speed of the simulated fragments is not a significant approximation, because we restrict our modeling to D ≈ 2 km, the characteristic size of most known family members. We used the SWIFT-RMVS4 code of the SWIFT-family software written by Levison & Duncan (1994). The code numerically integrates the orbits of N massive bodies (the Sun and planets) and n massless bodies (asteroids in our synthetic Clarissa family). For all of our simulations we included eight planets (Mercury to Neptune), thus N = 9, and n = 500 test family members. We used a time step of two days and simulated all orbits over a time span of 150 Myr. This is comfortably longer than any of the age estimates discussed in Section 1. The integrator eliminates test bodies that get too far from the Sun or impact a massive body. Since (302) Clarissa is located in a dynamically stable zone and the simulation is not too long, we did not see many particles being eliminated. Only a few particles leaked from the J4-S2-1 resonance onto planet-crossing orbits. The Clarissa family should thus not be a significant source of near-Earth asteroids. The integrator takes into account the gravitational forces among all massive bodies and their effect on the massless bodies. Planetary perturbations of Clarissa family members include the J4-S2-1 and M1/2 resonances shown in Fig. 3 and the secular perturbations that cause oscillations of the osculating orbital elements (Fig. 5).
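The conversion of a velocity kick δV into changes of (a, e, i) via the Gauss equations, evaluated at f = 90° and ω = 90° as above, can be sketched as follows. This is our own illustration of the standard formulation (e.g., Zappalà et al. 1996), not the code used in the paper; for a 20 m s^-1 tangential kick at Clarissa's orbit it gives δa ≈ 0.005 au, comparable to the semimajor-axis extent of the green ellipses in Fig. 2.

```python
import math

AU = 1.495978707e11          # astronomical unit, m
GM_SUN = 1.32712440018e20    # solar gravitational parameter, m^3 s^-2

def gauss_deltas(a_au, e, dVR, dVT, dVW, f_deg=90.0, w_deg=90.0):
    """Changes (da [au], de, di [rad]) produced by a small velocity kick
    with radial/transverse/out-of-plane components (dVR, dVT, dVW) in m/s,
    using the Gauss perturbation equations."""
    a = a_au * AU
    n = math.sqrt(GM_SUN / a**3)          # mean motion, rad/s
    na = n * a                            # ~ circular orbital speed, m/s
    f = math.radians(f_deg)
    cf, sf = math.cos(f), math.sin(f)
    eta = math.sqrt(1.0 - e * e)
    da = (2.0 / (n * eta)) * ((1.0 + e * cf) * dVT + e * sf * dVR)
    de = (eta / na) * (((e + 2.0 * cf + e * cf * cf) / (1.0 + e * cf)) * dVT
                       + sf * dVR)
    # di carries the cos(omega + f) factor discussed in Appendix A:
    di = (eta / na) * (math.cos(math.radians(w_deg) + f) / (1.0 + e * cf)) * dVW
    return da / AU, de, di

# 20 m/s transverse kick at Clarissa's orbit (a = 2.406 au, e = 0.106):
da, de, di = gauss_deltas(2.406, 0.106, 0.0, 20.0, 0.0)
```

Note that with ω + f = 180° the out-of-plane kick produces the maximal inclination change, while ω + f = 90° would suppress it entirely, which is the argument made in Appendix A.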
The principal driver of the long-term family evolution, however, is the Yarkovsky effect, which causes the semimajor axes of family members to drift to larger or smaller values. We therefore extended the original version of the SWIFT code to model these thermal accelerations. Details of the code extension can be found in Vokrouhlický et al. (2017a) and Bottke et al. (2015) (see also Appendix B). Here we restrict ourselves to describing the main features and parameters relevant to this work. The linear heat diffusion model for spherical bodies is used to determine the thermal accelerations (Vokrouhlický 1999). We only account for the diurnal component, since the seasonal component is smaller and its long-term effect on the semimajor axis vanishes when the obliquity is 0° or 180°. We use D = 2 km and assume a bulk density of 1.5 g cm^-3, which should be appropriate for primitive C/B-type asteroids (Scheeres et al. 2015). Considering the spectral class and size, we set the surface thermal inertia equal to 250 J m^-2 s^-0.5 K^-1 (Delbo et al. 2015). To model the thermal drift in semimajor axis we also need to know the rotation state of the asteroids: the rotation period P and the orientation of the spin vector s. The current rotation states of the Clarissa family members, except for (302) Clarissa itself (see Section 1), are unknown. This introduces another degree of freedom into our model, because we must adopt some initial distribution of both parameters. For the rotation period P we assume a Maxwellian distribution between 3 and 20 hr with a peak at 6 hr, based on Eq. (4) from Pravec et al. (2002). The orientation of the spin vectors was initially set to be isotropic but, as we will show, this choice turned out to be a principal obstacle in matching the orbital structure of the Clarissa family (e.g., the excess of members sunward from (302) Clarissa).
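The rotation-period distribution described above, a Maxwellian truncated to 3-20 hr and peaked at 6 hr, can be sampled with simple rejection sampling. The snippet below is an illustrative sketch, not the paper's implementation; the Maxwellian mode sits at sqrt(2)·sigma, so sigma = 6/sqrt(2) hr places the peak at 6 hr.

```python
import math
import random

def sample_period_hr(rng, sigma=6.0 / math.sqrt(2.0), lo=3.0, hi=20.0):
    """Draw one rotation period from a Maxwellian pdf ~ P^2 exp(-P^2/(2 sigma^2)),
    truncated to [lo, hi] hr, by rejection sampling."""
    def pdf(P):
        return P * P * math.exp(-P * P / (2.0 * sigma * sigma))
    pmax = pdf(math.sqrt(2.0) * sigma)      # pdf maximum at the mode (6 hr)
    while True:
        P = rng.uniform(lo, hi)
        if rng.uniform(0.0, pmax) < pdf(P):
            return P

rng = random.Random(42)
periods = [sample_period_hr(rng) for _ in range(10000)]
```

The truncation removes the short-period tail below 3 hr (roughly the fission limit adopted later in Appendix B) and the negligible tail beyond 20 hr.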
We therefore performed several additional simulations with non-isotropic distributions to test different initial proportions of prograde and retrograde spins. The final component of the SWIFT extension is modeling the evolution of the asteroids' rotation state. For this we implement the efficient symplectic integrator described in Breiter et al. (2005). We introduce the dynamical ellipticity ∆ = (C − 0.5(A + B))/C, where (A, B, C) are the principal moments of the inertia tensor. It is an important parameter because the SWIFT code includes the effects of the solar gravitational torque. We assume that ∆ has a Gaussian distribution with mean 0.25 and standard deviation 0.05. These values are representative of the population of small asteroids for which shape models have been obtained (e.g., Vokrouhlický & Čapek 2002). The YORP effect produces a long-term evolution of the rotation period and the direction of the spin vector (Bottke et al. 2006; Vokrouhlický et al. 2015). To account for it we implemented the model of Čapek & Vokrouhlický (2004), in which the YORP effect was evaluated for bodies with various computer-generated shapes (random Gaussian spheroids). For a 2 km sized Clarissa family member, this model predicts that YORP should double the rotational frequency over a mean timescale of 80 Myr. We define one YORP cycle as the evolution from a generic initial rotation state to the asymptotic state with very fast or very slow rotation, and obliquity near 0° or 180°. Given that the previous Clarissa family age estimates are slightly shorter than the YORP timescale quoted above, we expect that D ≈ 2 km members have experienced less than one YORP cycle. This is fortunate because previous studies showed that modeling multiple YORP cycles can be problematic (Vraštil & Vokrouhlický 2015).
As for the preference of YORP to accelerate or decelerate spins, Golubov & Krugly (2012) studied YORP with lateral heat conduction and found that YORP tends to accelerate rotation more often than to decelerate it (see also Golubov & Scheeres 2019, for effects on obliquity). The proportion of spin-down to spin-up cases is unknown and, for the sake of simplicity, we do not model these effects in detail. Instead, we take an empirical and approximate approach. Here we investigate cases where (i) 50% of spins accelerate and 50% decelerate (the YORP1 model), and (ii) 80% of spins accelerate and 20% decelerate (YORP2). See Table 1 for a summary of the physical and dynamical model assumptions.

Table 1. Summary of Physical and Dynamical Model Assumptions

Asteroid Physical Properties (C-type)        Value
(302) Clarissa's diameter                    43 ± 4 km
Visual albedo                                0.056 ± 0.017
Bulk density                                 1.5 g cm^-3
Thermal inertia                              250 J m^-2 s^-0.5 K^-1
Constant C (see Nesvorný et al. 2015)        (5 ± 1) × 10^-6 au

ANALYSIS

We simulated 500 test bodies over 150 Myr to model the past evolution of the synthetic Clarissa family. For each body, we compute the synthetic proper elements with a 0.5 Myr cadence and a 10 Myr Fourier window (Šidlichovský & Nesvorný 1996). Our goal is to match the orbital distribution of the real Clarissa family. This is done as follows. The top panel of Figure 3 shows the semimajor axis distribution of 114 members of the Clarissa family with sizes D ≈ 2 km. We denote the number of known asteroids in each of the bins as N_obs^i, where i spans the 25 bins shown in Figure 3. We use one million trials to randomly select 114 of our synthetic family asteroids and compute their semimajor axis distribution N_mod^i for the same bins. For each of these trials we compute a χ^2-like quantity,

χ_a^2(T) = Σ_i (N_mod^i − N_obs^i)^2 / N_obs^i,   (3)

where the summation goes over all 25 bins.
The normalization factor of χ_a^2(T), namely N_obs^i, is a formal statistical uncertainty of the population in the ith bin. We set the denominator equal to unity if N_obs^i = 0 in a given bin. Another distinctive property of the Clarissa family is the distribution of eccentricities sunward of the J4-S2-1 resonance. We denote by N_obs^+ the number of members with a_p < 2.398 au and e_p > 0.1065 (i.e., sunward from J4-S2-1 and with eccentricities larger than the proper eccentricity of (302) Clarissa; Figure 2). Similarly, we denote by N_obs^- the number of members with a_p < 2.398 au and e_p < 0.1065. For D ≈ 2 km, we find N_obs^+ = 48 and N_obs^- = 5. It is peculiar that N_obs^+/N_obs^- ≈ 10, because the initial family must have had a more even distribution of eccentricities. This has something to do with the crossing of the J4-S2-1 resonance (see below). As our goal is to simultaneously match the semimajor axis distribution and N_obs^+/N_obs^-, we define

χ^2(T) = χ_a^2(T) + (N_mod^+ − N_obs^+)^2/N_obs^+ + (N_mod^- − N_obs^-)^2/N_obs^-,   (4)

where N_mod^- and N_mod^+ are computed from the model. The Clarissa family age is found by computing the minimum of χ^2(T) (Figure 6), where χ^2 is computed by the procedure described in Section 3 for simulations with an initial ejection velocity field that is isotropic in space. The color coding in Figure 6 corresponds to models with different fractions of initially retrograde rotators (see Figure 7 for a summary of the parameters): 50% (black), 60% (cyan), 70% (green), 80% (red), 90% (blue), and 100% (purple). The red and blue parabolas are quadratic fits of χ^2 near the minima of the respective data sets, and the dashed light-gray line marks the value of 27, equal to the number of effective bins. The best age solutions are 54 ± 6 Myr and 56 ± 6 Myr for YORP1 and YORP2, respectively; the YORP2 model provides the best solution, with χ^2 ≈ 5.4. In both cases, simulations with 70%-90% of initially retrograde rotators provide the best solutions.
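Equations (3) and (4) are straightforward to implement; a minimal sketch (our own, with the unity-denominator rule for empty observed bins, as described above) might read:

```python
def chi2_a(N_mod, N_obs):
    """Eq. (3): binned chi^2 in semimajor axis. The denominator is set to 1
    for bins with N_obs = 0, as described in the text."""
    return sum((m - o) ** 2 / (o if o > 0 else 1.0)
               for m, o in zip(N_mod, N_obs))

def chi2_total(N_mod, N_obs, Np_mod, Nm_mod, Np_obs=48, Nm_obs=5):
    """Eq. (4): Eq. (3) plus the eccentricity split sunward of J4-S2-1
    (N^+ = members with e_p > 0.1065, N^- with e_p < 0.1065)."""
    return (chi2_a(N_mod, N_obs)
            + (Np_mod - Np_obs) ** 2 / Np_obs
            + (Nm_mod - Nm_obs) ** 2 / Nm_obs)
```

In the paper's procedure, N_mod is recomputed for each of the one million random draws of 114 synthetic members, and χ^2(T) is then minimized over the output times T.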
We find that χ^2(T) always reaches a single minimum in 0 < T < 150 Myr. The minimum of χ^2(T) was first located by visual inspection; we then performed a second-order polynomial fit of the form χ^2(T) ≈ aT^2 + bT + c in its vicinity, correcting the guessed value to T* = −b/(2a). After inspecting the behavior of χ^2(T), we opted to fit the second-order polynomial over a ±15 Myr interval (for instance, between 40 and 70 Myr if the minimum is at 55 Myr, and so on). A formal uncertainty is found by considering an increment ∆χ^2, resulting in δT = (∆χ^2/a)^{1/2}. Thus the age of the Clarissa family is t_age = T* ± δT. In a two-parameter model, where the two parameters are T and the initial fraction of retrograde rotators, we need ∆χ^2 = 2.3 for a 68% confidence limit or 1σ, and ∆χ^2 = 4.61 for a 90% confidence limit or 2σ (Press et al. 2007, Chapter 15, Section 6). Our error estimates are approximate, because the model has many additional parameters, such as the initial velocity field, thermal inertia, bulk density, etc. Additionally, a Kolmogorov-Smirnov two-sample test (Press et al. 2007, Chapter 14, Section 3) was performed on selected models (Figures 8 and 9). This test provides an alternative way of looking at the orbital distribution of Clarissa family members in semimajor axis, and has the advantage of being independent of binning. Our reference runs used the assumption of an initially isotropic velocity field with all fragments launched at 20 m s^-1 with respect to (302) Clarissa. This set of simulations included two cases: (i) the YORP1 model (equal likelihood of acceleration and deceleration of spin), and (ii) the YORP2 model (80% chance of acceleration versus 20% chance of deceleration). In each case, we simulated six different scenarios with different percentages of initially retrograde rotators (from 50% to 100% in increments of 10%). See Figure 7 for a summary diagram of model input parameters.
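The age-fitting step described above (a parabola χ^2(T) ≈ aT^2 + bT + c near the minimum, with T* = −b/(2a) and δT = (∆χ^2/a)^{1/2}) can be sketched as follows. The three-point fit on a uniform time grid is our simplifying assumption, not the paper's exact procedure, which fits over a ±15 Myr window.

```python
import math

def age_from_chi2(T, chi2, dchi2=4.61):
    """Fit a parabola chi2 = a*T^2 + b*T + c through the three samples
    bracketing the minimum (uniform grid assumed); return the best age
    T* = -b/(2a) and the half-width dT = sqrt(dchi2/a).
    dchi2 = 4.61 corresponds to the 90% confidence limit for two parameters."""
    i = min(range(1, len(T) - 1), key=lambda k: chi2[k])
    t0, t1, t2 = T[i - 1], T[i], T[i + 1]
    y0, y1, y2 = chi2[i - 1], chi2[i], chi2[i + 1]
    h = t1 - t0
    a = (y0 - 2.0 * y1 + y2) / (2.0 * h * h)   # curvature from second difference
    b = (y2 - y0) / (2.0 * h) - 2.0 * a * t1    # slope at t1 minus 2*a*t1
    Tstar = -b / (2.0 * a)
    return Tstar, math.sqrt(dchi2 / a)

# Synthetic example: chi2 with a true minimum at T = 56 Myr.
T = [40, 45, 50, 55, 60, 65, 70]
chi2 = [0.02 * (t - 56) ** 2 + 5.4 for t in T]
Tstar, dT = age_from_chi2(T, chi2)
```

For an exactly parabolic input the fit recovers the minimum location exactly; on real, noisy χ^2(T) curves the recovered T* depends mildly on the fitting window, which is why the quoted errors are approximate.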
In total, this effort represented 12 simulations, each following 500 test Clarissa family members. Figure 7 summarizes the input parameters and can be read as follows: for example, in the YORP1 model (50:50 chance of acceleration:deceleration of spin by YORP) we simulated fragments with an initial ejection velocity of 20 m s^-1 and varying fractions of initially retrograde rotators, from 50% to 100% in increments of 10%. The initial ejection velocity of 20 m s^-1 thus represents overall 12 simulations with the YORP1 and YORP2 models, all with the isotropic velocity field. Figure 6 summarizes the results by reporting the time dependence of χ^2(T) from Eq. (4). In all cases, χ^2(T) reaches a well defined minimum. Initially the test body distribution is very different from the orbital structure of the Clarissa family, and χ^2(0) is therefore large. For T ≥ 100 Myr, the simulated bodies evolve too far from the center of the family, well beyond the width of the Clarissa family in a_p, and χ^2(T) is large again. The minimum of χ^2(T) occurs near 50-60 Myr. For the models with an equal split of prograde and retrograde rotators (Figure 6, black symbols) the minimum is χ^2(T*) ≈ 50, which is unacceptably large for 27 data points (this applies to both the YORP1 and YORP2 models). This model can therefore be rejected. The main deficiency of this model is that bodies have an equal probability to drift inward or outward in a_p (left panel of Figure 10). The model therefore produces a symmetric distribution in semimajor axis, which is not observed (see the top panel of Figure 3). The simulations also show that the M1/2 orbital resonance with Mars is not strong enough to produce the observed asymmetry. We thus conclude that the a_p distribution asymmetry must be a consequence of the predominance of retrograde rotators in the family. This prediction can be tested observationally.
The results shown in Figure 6 indicate that the best solutions are obtained when 70%-90% of fragments have initially retrograde rotation. These models lead to χ^2(T*) ≈ 11.4 for YORP1 and χ^2(T*) ≈ 5.4 for YORP2. Both values are acceptably low: a statistical test shows that the probability that χ^2 should attain or exceed this level by random fluctuations is greater than 90% (Press et al. 2007, Chapter 15, Section 2). The inferred age of the Clarissa family is t_age = 54 ± 6 Myr for the YORP1 model and t_age = 56 ± 6 Myr for the YORP2 model. The results of the Kolmogorov-Smirnov test confirm these inferences. For example, selecting a 90% confidence limit so as to be able to compare with the best-fit χ^2 result, we obtain t_age = 56 +7/−6 Myr for the YORP2 model (see Figs. 8 and 9). The test was applied to the preferred YORP2 model (80% preference for acceleration by YORP with 80% of retrograde rotators; Fig. 6); the dashed red line in Figure 8 refers to the critical D-value, D_α (α = 0.10), which corresponds to a 90% confidence limit (O'Connor & Kleyner 2012, Appendix 3). We find t_age = 56 +7/−6 Myr using this test, closely matching the result obtained with the χ^2 method described in Section 3. The best fit to the observed a_p distribution is shown for YORP2 in Figure 10 (right panel). The orbital distribution produced by our preferred YORP2 model is compared with observations in Figure 11. We note that test bodies crossing the J4-S2-1 resonance often have their orbital eccentricity increased. This leads to the predominance of orbits with e_p > 0.1065 for a_p < 2.398 au. We obtain N_mod^+ = 45 and N_mod^- = 5, nearly identical to the values found in the real family (N_obs^+ = 48 and N_obs^- = 5). Suggestively, even the observed sin i_p distribution below the J4-S2-1 resonance, which is slightly wider, is well reproduced. We also note a hint of a very weak mean motion resonance at a_p ≈ 2.404 au, which manifests itself as a slight dispersal of e_p.
Using the tools discussed and provided by Gallardo (2014), we tentatively identified it as the three-body resonance J6-S7-1, but we did not prove this by analysis of the associated critical angle.

Anisotropic velocity field with 20 m s^-1

Here we discuss (and rule out) the possibility that the observed asymmetry in a_p is related to an anisotropic ejection velocity field (rather than to the preference for retrograde rotation discussed above). To approximately implement an anisotropic velocity field, we select test bodies initially populating the left half of the green ellipses in Fig. 2 (i.e., all fragments assumed to have initial a_p lower than that of (302) Clarissa) and adopt a 50-50% split of prograde/retrograde rotators. This model does not work, because the evolved distribution in a_p becomes roughly symmetrical (with only a small sunward shift of the center). This happens because the effect of the Yarkovsky drift on the a_p distribution is more important than the initial distribution of fragments in a_p. We also tested a model that combined the preference for retrograde rotation with the anisotropic ejection field. As before, we found that the best-fitting models were obtained if there was an ∼80% preference for retrograde rotation. The fits were not as good, however, as those obtained for the isotropic ejection field. The minimum χ^2(T*) achieved was ≈12, significantly higher than the previous result of χ^2(T*) ≈ 5.4. We therefore conclude that the observed structure of the Clarissa family is best explained if fragments were ejected isotropically and there was a ∼4:1 preference for retrograde rotation. This represents an important constraint on the impact that produced the Clarissa family and, more generally, on the physics of large-scale collisions.

Isotropic velocity field with 10 and 30 m s^-1

We performed additional simulations with the isotropic ejection field and velocities of 10 and 30 m s^-1.
The main goal of these simulations was to determine the sensitivity of the results to this parameter. Analysis of the simulations with 10 m s^-1 revealed results similar to those obtained with 20 m s^-1 (Section 4.1). For example, the best-fitting solution for the preferred YORP2 model had χ^2(T*) ≈ 3.9 and t_age = 59 ± 5 Myr (90% confidence interval). Again, 70% to 90% of the test bodies are required to have initially retrograde rotation. Results obtained with 30 m s^-1 showed that this value is already too large to provide an acceptable fit. The best-fit solution of all investigated models with 30 m s^-1 is χ^2 ≈ 25. This happens because the initial spread in semimajor axis is too large, and the Yarkovsky and YORP effects are not capable of producing the Clarissa family ears (see Fig. 3). We conclude that ejection speeds ≳30 m s^-1 can be ruled out. Figure 12 shows results similar to Figure 11, but for ejection velocities v_ej = 10 m s^-1 (left panel) and v_ej = 30 m s^-1 (right panel); all other simulation parameters are the same as in Fig. 11, and the configuration is shown when the χ^2 of the corresponding run reached its minimum. The former simulation, with a 10 m s^-1 ejection speed, still provides very good results. The initial spread in proper eccentricities near (302) Clarissa (at a_p ≈ 2.4057 au) is significantly smaller, but the above-mentioned weak mean motion resonance at 2.404 au suitably extends the family toward smaller a_p as the members drift across. This helps to balance the e_p distribution of the family members below the J4-S2-1 resonance and also provides a tight e_p distribution near the M1/2 resonance. On the other hand, the simulation with a 30 m s^-1 ejection speed gives much worse results (the best we could get was χ^2(T) ≈ 23.8 for the simulation shown in the right panel of Fig. 12).
Here the initial extension of the family in e_p and sin i_p is large, which implies that the population of fragments that crossed the J4-S2-1 resonance also remains unsuitably large and contradicts the evidence of a shift toward larger e_p values on its sunward side.

DISCUSSION AND CONCLUSIONS

The Clarissa family is an interesting case. The family's location in a dynamically quiet orbital region of the inner belt allowed us to model its structure in detail. Its estimated age is older than that of any of the very young families (e.g., Nesvorný et al. 2002), but younger than that of any of the families to which the Yarkovsky effect chronology was previously applied (e.g., Vokrouhlický et al. 2006a, 2006c). Specifically, we found that the Clarissa family is 56 ± 6 Myr old (formal 90% confidence limit). The dependence on parameters not tested in this work may imply a larger uncertainty. For example, here we adopted a bulk density ρ = 1.5 g cm^-3. In the case of pure Yarkovsky drift the age scales with ρ as t_age ∝ ρ; higher/lower densities would thus imply older/younger ages. In our model, however, this scaling is more complicated, since altering ρ changes the YORP timescale and the speed of resonance crossing. The initial ejection velocities were constrained to be ≲20 m s^-1, a value comparable to the escape velocity from (302) Clarissa. We found systematically better results for the model where ∼80% of fragments had rotation accelerated by YORP and the remaining ∼20% had rotation decelerated by YORP. This tendency is consistent with theoretical models of YORP and actual YORP detections, which suggest the same preference (as reviewed in Vokrouhlický et al. 2015). The most interesting result of this work is the need for asymmetry in the initial rotation direction of small fragments. We estimate that between 70% and 90% of D ≈ 2 km Clarissa family members initially had retrograde rotation.
As this preference was not modified much by YORP over the age of the Clarissa family, we expect that the great majority of small family members with a < 2.406 au (i.e., lower than the semimajor axis of (302) Clarissa) must be retrograde rotators today. This prediction can be tested observationally. In fact, prior to running the test cases summarized in Figure 7 of Section 4, we expected that simulating more retrograde rotators, in roughly the 80:20 proportion, would match the distribution of the observed Clarissa family: roughly the same proportion is seen in the V-shape of Figure 3, where there are more asteroids on the left side. Possible causes for the split of prograde/retrograde rotators in the Clarissa family and other asteroid families could be the original parent body rotation, the geometry of impact, fragment reaccumulation, or something else. Some previously studied asteroid families have already hinted at possible asymmetries or peculiar diversity. For example, the largest member of the Karin family is a slow prograde rotator, while a number of members following (832) Karin in size are retrograde rotators (Nesvorný & Bottke 2004). Similarly, the largest member of the Datura family is a very fast prograde rotator, while several members of smaller size are very slowly rotating and peculiarly shaped objects, all in a prograde sense (e.g., Vokrouhlický et al. 2017b). The small members of the Agnia family are predominantly retrograde (∼60%; Vokrouhlický et al. 2006b). The inferred conditions of Clarissa family formation, together with (302) Clarissa's slow and retrograde rotation, therefore present an additional interesting challenge for modeling large-scale asteroid impacts. We encourage our fellow researchers to investigate the possible causes of the split of prograde/retrograde rotators.

APPENDIX A.
CHOICE OF PARAMETERS ω AND f

Obviously, our choice of ω and f for the initial configuration of the synthetic Clarissa family is not unique. However, we argue that (i) there are some limits to be satisfied, and (ii) beyond these limits the results do not critically depend on the choice of f and ω. First, we postulate an initial ejection velocity of family members of around 20 m s^-1 (about the escape velocity from (302) Clarissa) as the most probable value (often seen in young asteroid families). Then, the choice of ω + f near either 0° or 180° is dictated by the Clarissa family extent in proper inclination between the J4-S2-1 and M1/2 resonances (see the green ellipses in Fig. 2). There are no dynamical effects between these two resonances that could increase the inclination to the observed values. So, for instance, if ω + f were close to 90°, the spread in proper inclination would collapse to zero, contrary to observation (see the corresponding Gauss equation from Zappalà et al. 1996). In the same way, if we want fragments roughly equally represented around Clarissa in the (a_p, e_p) plot (Fig. 2), then we need f near 90° (values of 0° or 180°, for instance, would shrink the appropriate ellipse to a line segment, again not seen in the data).

B. DETAILS OF CODE EXTENSION

Here we provide a few more details about the implementation of radiation torques (the YORP effect) in our code; more can be found in Vokrouhlický et al. (2017a) and Bottke et al. (2015). We do not assume a constant rate of change of the rotational frequency ω, nor a constant obliquity ε for all evolving Clarissa fragments. Instead, after setting initial values (ω_0, ε_0) for each of them, we numerically integrate Eqs. (3) and (5) of Čapek & Vokrouhlický (2004), or equivalently Eqs. (2) and (3) of Appendix A in Bottke et al. (2015). This means

dω/dt = f(ε)   and   dε/dt = g(ε)/ω.
As templates for the f- and g-functions we implement the results from Figure 8 of Čapek & Vokrouhlický (2004) (note this also fixes the general dependence of the g-functions on the surface thermal conductivity). The g-function has a typical wave pattern, making the obliquity evolve asymptotically to 0° or 180° from a generic initial value. The f-function also has a wave pattern, though its zero value is near ∼55° and ∼125° obliquity and at 0° or 180° obliquity (Čapek & Vokrouhlický 2004). Čapek & Vokrouhlický had an equal likelihood of acceleration or deceleration of ω (due to the simplicity of their approach). This is our YORP1 model. When we tilt these statistics to 80% asymptotically accelerating and 20% asymptotically decelerating cases for ω, we obtain our YORP2 model (this is our empirical implementation of the physical effects studied in Golubov & Krugly 2012). We can do this straightforwardly because at the beginning of the integration we assign to each of the fragments one particular realization of the f- and g-functions from the pool covered by the Čapek & Vokrouhlický (2004) results (their Figure 8). The behavior at the boundaries of the rotation rate ω also needs to be implemented: (i) the shortest allowed rotation period is set to 2 hr (before fission would occur), and (ii) the longest is set at a 1000 hr rotation period. Modeling the rotation evolution at these limits is particularly critical for old families, such as Flora or even Eulalia (discussed in Vokrouhlický et al. 2017a and Bottke et al. 2015), because small fragments reach the limiting values over a timescale much shorter than the age of the family. Fortunately this problem is not an issue for the Clarissa family due to its young age: many fragments in our simulation just barely make it to these asymptotic limits. Note also that while ω is being slowed down by YORP, asteroids do not cease rotation and start tumbling (this is not implemented in detail in our code).
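The coupled evolution dω/dt = f(ε), dε/dt = g(ε)/ω can be illustrated with a toy integration. The functional forms below (f ∝ P2(cos ε), which vanishes near 55° and 125°, and g ∝ −sin 2ε, which drives ε asymptotically to 0° or 180°) and all parameter values are our own stand-ins for illustration, not the actual Čapek & Vokrouhlický (2004) templates.

```python
import math

def evolve_spin(omega0, eps0_deg, A, B, t_end, dt):
    """Toy explicit-Euler integration of the spin-state equations
    d(omega)/dt = f(eps), d(eps)/dt = g(eps)/omega, with illustrative
    templates f = A*P2(cos eps) and g = -B*sin(2*eps).
    omega is in arbitrary frequency units, time in arbitrary units (Myr-like)."""
    omega, eps = omega0, math.radians(eps0_deg)
    t = 0.0
    while t < t_end:
        f = A * 0.5 * (3.0 * math.cos(eps) ** 2 - 1.0)  # zero near 55/125 deg
        g = -B * math.sin(2.0 * eps)                    # drives eps to 0 or 180
        omega += f * dt
        eps += (g / omega) * dt
        t += dt
    return omega, math.degrees(eps)

# A body starting at 30 deg obliquity is driven to eps ~ 0 while its spin
# accelerates, qualitatively reproducing one "YORP cycle" toward the
# fast-rotation asymptotic state.
w_final, eps_final = evolve_spin(4.0, 30.0, 0.05, 1.0, 80.0, 0.05)
```

A retrograde rotator (ε > 90°) in this toy model is driven to ε = 180° instead, so an 80:20 split of accelerating versus decelerating realizations (the YORP2 choice) would be imposed simply by flipping the sign of A for 20% of the bodies.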
For that reason, while arbitrary and simplistic, the 1000 hr smallest-ω limit is acceptable. Our code inherits from Bottke et al. (2015) some approximate recipes for what to do at these limits. For instance, when stalled at a 1000 hr rotation period (the "tumbling phase"), the bodies are sooner or later assumed to receive a sub-catastrophic impact that resets their rotation state to new initial values. When the rotation period reaches 2 hr, we assume a small fragment is ejected by fission and the rotation rate resets to a smaller value. In both cases, an entirely new set of values of the f- and g-functions is chosen.

Figure 2. Orbital structure of the Clarissa family: e_p vs. a_p (top panel), sine of i_p vs. a_p (bottom panel).

Figure 3. Clarissa family distribution in a_p and H (bottom panel). (302) Clarissa is highlighted by the black diamond (D = 43 ± 4 km; Ďurech et al. 2011). Small family members (H > 17) are missing in the center of the family and are pushed toward the borders. The curves are calculated by Eq. (1), where a_c = 2.4057 au corresponds to (302) Clarissa and C = (5 ± 1) × 10^-6 au (Nesvorný et al. 2015). Asteroids located far outside this envelope, shown in red, are suspected interlopers. The top panel shows the a_p distribution of family members (suspected interlopers excluded) with 16.8 < H < 18 (corresponding to D ≈ 2 km). The gray arrow indicates the range of a_p values that would be initially populated by Clarissa family members (assumed ejection speeds ≲20 m s^-1).

Figure 4. Cumulative absolute magnitude H distribution of 169 Clarissa family members (10 suspected interlopers removed; Fig. 3). The large dot indicates (302) Clarissa. The gray reference line is N(<H) ∝ 10^{γH} with γ = 0.75. The diameters are labeled on the top.

Figure 5. Osculating elements of (302) Clarissa over 1 Myr into the future. At time t ≈ 47.35 kyr (red dot), ω = 90° and f = 90°.

Figure 6. Time-dependence of the χ^2 function defined in Eq.
(4): left panel for the YORP1 model, right panel for the YORP2 model. These simulations used a 20 m s^-1 initial ejection velocity field that is isotropic in space.

Figure 7. Summary of all input parameters for the YORP model simulations, as shown in Figure 6 and explained in Section 4.

Figure 8. Kolmogorov-Smirnov two-sample test: D-value vs. time (top panel) and p-value vs. time (bottom panel), both computed from the cumulative distribution functions F1(a_p) (model) and F2(a_p) (observed).

Figure 9. Kolmogorov-Smirnov two-sample test: cumulative distribution functions F1(a_p) (model, blue line) and F2(a_p) (observed, red line). The model distribution is shown for our preferred YORP2 model at T* = 56 Myr. This curve (blue) corresponds to the minimum of the D-value and the maximum p-value plotted in Figure 8. The green and black dashed lines are F1(a_p) ± D_α, a 90% confidence band in which F1(a_p) − D_α ≤ F2(a_p) ≤ F1(a_p) + D_α; this interval contains the true cumulative distribution function F2(a_p) with 90% probability.

Figure 10. Minimum χ^2 solution for the YORP2 model (both panels with a 20 m s^-1 ejection speed), with 50% of initially retrograde spins (left panel) and 80% of initially retrograde spins (right panel). The solid black line represents the observed family, corresponding to the top panel of Figure 3. The dashed line is the initial distribution of test bodies. The gray histogram is the model distribution where χ^2 reached its minimum in the simulation, corresponding to age solutions of T* = 63 Myr (left panel) and 56 Myr (right panel). In the left panel the model distribution is quite symmetric and does not match the observed family distribution; the slight asymmetry present there is due to asteroids leaking out of the family range via the M1/2 resonance. The light-gray bars highlight the locations of the principal resonances. The model distribution for T* = 56 Myr (right panel) represents an excellent match to the present Clarissa family.

Figure 11.
11Orbital evolution of family members in our preferred YORP2 model shown in Fig. 10 (right panel). The proper orbital elements ep and ap are shown in the top panel, and sin ip and ap are shown in the bottom panel. The red symbols are the 114 members of the Clarissa family with sizes D 2 km (those within the strip delimited by the dashed gray lines in the bottom panel of Figure 3). The blue symbols show orbits of 114 modeled bodies for T * = 56 Myr. The gray lines show the evolutionary tracks of the test bodies. Figure 12 . 12Orbital evolution of family members in our preferred YORP2 model (80% of initially retrograde spins and 80% preference for spin acceleration by YORP) for initial ejection velocities of vej = 10 m s −1 (left panel) and vej = 30 m s −1 (right panel). This figure is similar to Figure 11 but with two different ejection speeds. The red symbols are the 114 members of the Clarissa family with sizes D 2 km (those within the strip delimited by the dashed gray lines in the bottom panel of Figure 3). The blue symbols show orbits of 114 modeled bodies at the time of minimum χ 2 of the respective simulation, T * = 59 Myr (left panel) and T * = 56 Myr (right panel). The gray lines show the evolutionary tracks of the test bodies in both simulations. Table 1. Note. The asteroid physical parameters are based on typical values for C-type asteroids.Initial velocity field Isotropic with 10-30 m s −1 Initial percentage of asteroid retrograde rotation Varying from 50 to 100% by 10% incre- ments Asteroid rotational period Maxwellian 3-20 hr with peak at 6 hr Only considered diurnal component of Yarkovsky Drift Dominates over seasonal component Asteroid dynamical ellipticity ∆ Gaussian with µ = 0.25 and σ = 0.05 Preference for YORP to accelerate or decelerate asteroid spin 50:50 and 80:20 (acceleration:deceleration) The dynamical properties stem from current predictions of YORP theory (see Vokrouhlický &Čapek 2002;Čapek & Vokrouhlický 2004; Breiter et al. 
2005; Bottke et al. 2006; Vokrouhlický et al. 2015).

Re-running the hierarchical clustering method (Zappalà et al. 1990) on the most recent proper element catalog results in only a slightly larger membership of the Clarissa family. As the orbital structure of the family remains the same, we opt to use the original PDS identification.

ACKNOWLEDGMENTS

Support for this work is given by SSERVI-CLASS grant NNA14AB05A and NASA grant NNX17AG92G (V.L. and H.C.). Further support was given by the NASA Florida Space Grant Consortium Fellowship (V.L.). Coauthor support for this study was given by the NASA SSW program (D.N.) and by the Czech Science Foundation (grant 18-06083S) (D.V.).

REFERENCES

Bottke, W. F., Durda, D. D., Nesvorný, D., et al. 2005, Icar, 179, 63, doi: 10.1016/j.icarus.2005.05.017
Bottke, W. F., Vokrouhlický, D., Brož, M., Nesvorný, D., & Morbidelli, A. 2001, Science, 294, 1693, doi: 10.1126/science.1066760
Bottke, W. F., Vokrouhlický, D., Rubincam, D. P., & Nesvorný, D. 2006, Annual Review of Earth and Planetary Sciences, 34, 157, doi: 10.1146/annurev.earth.34.031405.125154
Bottke, W. F., Vokrouhlický, D., Walsh, K., et al. 2015, Icar, 247, 191, doi: 10.1016/j.icarus.2014.09.046
Breiter, S., Nesvorný, D., & Vokrouhlický, D. 2005, AJ, 130, 1267, doi: 10.1086/432258
Čapek, D., & Vokrouhlický, D. 2004, Icar, 172, 526, doi: 10.1016/j.icarus.2004.07.003
Carruba, V., & Nesvorný, D. 2016, MNRAS, 457, 1332, doi: 10.1093/mnras/stw043
Carruba, V., Nesvorný, D., & Vokrouhlický, D. 2016, AJ, 151, 164, doi: 10.3847/0004-6256/151/6/164
Delbo, M., Mueller, M., Emery, J. P., Rozitis, B., & Capria, M. T. 2015, in Asteroids IV, ed. P. Michel, F. E. DeMeo, & W. F. Bottke (Tucson: University of Arizona Press), 107, doi: 10.2458/azu_uapress_9780816532131-ch006
Ďurech, J., Sidorin, V., & Kaasalainen, M. 2010, A&A, 513, A46 (DAMIT database: https://astro.troja.mff.cuni.cz/projects/damit/)
Ďurech, J., Kaasalainen, M., Herald, D., et al. 2011, Icar, 214, 652, doi: 10.1016/j.icarus.2011.03.016
Gallardo, T. 2014, Icar, 231, 273, doi: 10.1016/j.icarus.2013.12.020
Golubov, O., & Krugly, Y. N. 2012, ApJL, 752, L11, doi: 10.1088/2041-8205/752/1/L11
Golubov, O., & Scheeres, D. J. 2019, AJ, 157, 105, doi: 10.3847/1538-3881/aafd2c
Hanuš, J., Ďurech, J., Brož, M., et al. 2011, A&A, 530, A134, doi: 10.1051/0004-6361/201116738
Knežević, Z., Lemaître, M., & Milani, A. 2002, in Asteroids III, ed. W. F. Bottke Jr., A. Cellino, P. Paolicchi, & R. P. Binzel (Tucson: University of Arizona Press), 603
Levison, H., & Duncan, M. 1994, Icar, 108, 18
Marzari, F., Davis, D., & Vanzani, V. 1995, Icar, 113, 168, doi: 10.1006/icar.1995.1014
Masiero, J. R., DeMeo, F., Kasuga, T., & Parker, A. 2015, in Asteroids IV, ed. P. Michel, F. E. DeMeo, & W. F. Bottke (Tucson: University of Arizona Press), 323, doi: 10.2458/azu_uapress_9780816532131-ch017
Masiero, J. R., Mainzer, A. K., Bauer, J. M., et al. 2013, ApJ, 770, 22, doi: 10.1088/0004-637X/770/1/7
Masiero, J. R., Mainzer, A. K., Grav, T., et al. 2011, ApJ, 741, 68, doi: 10.1088/0004-637X/741/2/68
Morate, D., de León, J., De Prá, M., et al. 2018, A&A, 610, A25, doi: 10.1051/0004-6361/201731407
Nesvorný, D. 2015, Nesvorny HCM Asteroid Families V3.0, EAR-A-VARGBDET-5-NESVORNYFAM-V3.0, NASA Planetary Data System
Nesvorný, D., & Bottke, W. 2004, Icar, 170, 324, doi: 10.1016/j.icarus.2004.04.012
Nesvorný, D., Bottke, W., Dones, L., & Levison, H. F. 2002, Nature, 417, 720, doi: 10.1038/nature00789
Nesvorný, D., Bottke, W. F., Vokrouhlický, D., Morbidelli, A., & Jedicke, R. 2006a, in ACM 229, ed. D. Lazzaro, S. Ferraz-Mello, & J. A. Fernández (Rio de Janeiro, Brazil: ACM), 289, doi: 10.1017/S1743921305006800
Nesvorný, D., Brož, M., & Carruba, V. 2015, in Asteroids IV, ed. P. Michel, F. E. DeMeo, & W. F. Bottke (Tucson: University of Arizona Press), 297, doi: 10.2458/azu_uapress_9780816532131-ch016
Nesvorný, D., & Morbidelli, A. 1998, AJ, 116, 3029, doi: 10.1086/300632
Nesvorný, D., Vokrouhlický, D., & Bottke, W. F. 2006b, Science, 312, 1490, doi: 10.1126/science.1126175
O'Connor, P., & Kleyner, A. 2012, Practical Reliability Engineering (5th ed.; West Sussex: Wiley)
Paolicchi, P., Spoto, F., Knežević, Z., & Milani, A. 2019, MNRAS, 484, 1815, doi: 10.1093/mnras/sty3446
Pravec, P., Harris, A. W., Kušnirák, P., Galád, A., & Hornoch, K. 2012, Icar, 221, 365, doi: 10.1016/j.icarus.2012.07.026
Pravec, P., Harris, A. W., & Michalowski, T. 2002, in Asteroids III, ed. W. F. Bottke Jr., A. Cellino, P. Paolicchi, & R. P. Binzel (Tucson: University of Arizona Press), 113
Press, W., Teukolsky, S., Vetterling, W., & Flannery, B. 2007, Numerical Recipes: The Art of Scientific Computing (3rd ed.; New York: Cambridge University Press)
Scheeres, D., Britt, D., Carry, B., & Holsapple, K. 2015, in Asteroids IV, ed. P. Michel, F. E. DeMeo, & W. F. Bottke (Tucson: University of Arizona Press), 745, doi: 10.2458/azu_uapress_9780816532131-ch038
Sidlichovský, M., & Nesvorný, D. 1996, Celestial Mechanics and Dynamical Astronomy, 65, 137
Tedesco, E., Noah, P., Noah, M., & Price, S. 2004, NASA Planetary Data System, 12
Vokrouhlický, D. 1999, A&A, 334, 362
Vokrouhlický, D., Bottke, W. F., Chesley, S. R., Scheeres, D. J., & Statler, T. S. 2015, in Asteroids IV, ed. P. Michel, F. E. DeMeo, & W. F. Bottke (Tucson: University of Arizona Press), doi: 10.2458/azu_uapress_9780816532131-ch027
Vokrouhlický, D., Bottke, W. F., & Nesvorný, D. 2017a, AJ, 153, 172, doi: 10.3847/1538-3881/aa64dc
Vokrouhlický, D., Brož, M., Bottke, W. F., Nesvorný, D., & Morbidelli, A. 2006a, Icar, 182, 118, doi: 10.1016/j.icarus.2005.12.010
Vokrouhlický, D., Brož, M., Bottke, W. F., Nesvorný, D., & Morbidelli, A. 2006b, Icar, 183, 349, doi: 10.1016/j.icarus.2006.03.002
Vokrouhlický, D., Brož, M., Morbidelli, A., et al. 2006c, Icar, 182, 92, doi: 10.1016/j.icarus.2005.12.011
Vokrouhlický, D., & Čapek, D. 2002, Icar, 159, 449, doi: 10.1006/icar.2002.6918
Vokrouhlický, D., Pravec, P., Ďurech, J., et al. 2017b, A&A, 598, A91, doi: 10.1051/0004-6361/201629670
Vraštil, J., & Vokrouhlický, D. 2015, A&A, 579, A14, doi: 10.1051/0004-6361/201526138
Zappalà, V., Cellino, A., Dell'Oro, A., Migliorini, F., & Paolicchi, P. 1996, Icar, 124, 156, doi: 10.1006/icar.1996.0196
Zappalà, V., Cellino, A., Farinella, P., & Knežević, Z. 1990, AJ, 100, 2030, doi: 10.1086/115658
Zappalà, V., Cellino, A., Farinella, P., & Milani, A. 1994, AJ, 107, 772, doi: 10.1086/116897
Zappalà, V., Farinella, P., Knežević, Z., & Paolicchi, P. 1984, Icar, 59, 261, doi: 10.1016/0019-1035(84)90027-7
Disciplined Convex-Concave Programming

Xinyue Shen, Steven Diamond, Yuantao Gu, Stephen Boyd

April 12, 2016

arXiv:1604.02639, doi: 10.1109/cdc.2016.7798400

Abstract

In this paper we introduce disciplined convex-concave programming (DCCP), which combines the ideas of disciplined convex programming (DCP) with convex-concave programming (CCP). Convex-concave programming is an organized heuristic for solving nonconvex problems that involve objective and constraint functions that are a sum of a convex and a concave term. DCP is a structured way to define convex optimization problems, based on a family of basic convex and concave functions and a few rules for combining them. Problems expressed using DCP can be automatically converted to standard form and solved by a generic solver; widely used implementations include YALMIP, CVX, CVXPY, and Convex.jl. In this paper we propose a framework that combines the two ideas, and includes two improvements over previously published work on convex-concave programming, specifically the handling of domains of the functions, and the issue of nondifferentiability on the boundary of the domains. We describe a Python implementation called DCCP, which extends CVXPY, and give examples.

1 Disciplined convex-concave programming

1.1 Difference of convex programming

Difference of convex (DC) programming problems have the form

    minimize    f_0(x) − g_0(x)
    subject to  f_i(x) − g_i(x) ≤ 0,  i = 1, ..., m,        (1)

where x ∈ R^n is the optimization variable, and the functions f_i : R^n → R and g_i : R^n → R for i = 0, ..., m are convex. The DC problem (1) can also include equality constraints of the form p_i(x) = q_i(x), where p_i and q_i are convex; we simply express these as the pair of inequality constraints

    p_i(x) − q_i(x) ≤ 0,    q_i(x) − p_i(x) ≤ 0,

which have the difference-of-convex form in (1). When the functions g_i are all affine, the problem (1) is a convex optimization problem, and easily solved [BV04].
The broad class of DC functions includes all C^2 functions [Har59], so the DC problem (1) is very general. A special case is Boolean linear programs, which can represent many problems, such as the traveling salesman problem, that are widely believed to be hard to solve [Kar72]. DC programs arise in many applications in fields such as signal processing [LOX15], machine learning [ALNT08], computer vision [LZOX15], and statistics [THA+14].

DC problems can be solved globally by methods such as branch and bound [Agi66, LW66], which can be slow in practice. Good overviews of solving DC programs globally can be found in [HPT95, HT99] and the references therein. A locally optimal (approximate) solution can be found instead through the many techniques of general nonlinear optimization [NW06].

The convex-concave procedure (CCP) [YR03] is another heuristic algorithm for finding a local optimum of (1), which leverages our ability to efficiently solve convex optimization problems. In its basic form, it replaces concave terms with a convex upper bound, and then solves the resulting convex problem, which is a restriction of the original DC problem. Basic CCP can thus be viewed as an instance of majorization-minimization (MM) algorithms [LHY00], in which a minimization problem is approximated by an easier-to-solve upper bound created around the current point (a step called majorization) and then minimized. Many MM extensions have been developed over the years; more can be found in [LR87, Lan04, MK07]. CCP can also be viewed as a version of DCA [TS86], which, instead of explicitly stating the linearization, finds it by solving a dual problem. More information on DCA can be found in [An15] and the references therein. A recent overview of CCP, with some extensions, can be found in [LB15], where the issue of infeasibility is handled (heuristically) by an increasing penalty on constraint violations.
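The majorization step relies on one basic fact: a convex function lies above its first-order Taylor expansion, so replacing a concave term −g(x) by its linearization at the current point gives a global upper bound on the objective. A quick pure-Python check of this bound (our own illustration, using g(x) = x²):

```python
# A convex g satisfies g(x) >= g(z) + g'(z) * (x - z) for all x and z,
# so -g(x) <= -g(z) - g'(z) * (x - z): linearizing the concave part of
# a DC objective majorizes it.  Check numerically for g(x) = x^2.

def g(x):
    return x * x

def g_lin(x, z):
    """First-order Taylor expansion of g at the point z."""
    return g(z) + 2.0 * z * (x - z)

pts = [i / 10.0 for i in range(-30, 31)]          # grid on [-3, 3]
worst = min(g(x) - g_lin(x, z) for x in pts for z in pts)
print(worst >= 0.0)  # True: the tangent never exceeds g
```

The smallest gap is zero, attained at x = z, which is why the convexified problem touches the original one at the current iterate.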
The method we present in this paper is an extension of the penalty CCP method introduced in [LB15], given as algorithm 1.1 below.

Algorithm 1.1 Penalty CCP.

given an initial point x_0, τ_0 > 0, τ_max > 0, and µ > 1.
k := 0.
repeat
    1. Convexify. Form ĝ_i(x; x_k) = g_i(x_k) + ∇g_i(x_k)^T (x − x_k) for i = 0, ..., m.
    2. Solve. Set the value of x_{k+1} to a solution of
           minimize    f_0(x) − ĝ_0(x; x_k) + τ_k Σ_{i=1}^m s_i
           subject to  f_i(x) − ĝ_i(x; x_k) ≤ s_i,  i = 1, ..., m
                       s_i ≥ 0,  i = 1, ..., m.
    3. Update τ. τ_{k+1} := min(µτ_k, τ_max).
    4. Update iteration. k := k + 1.
until stopping criterion is satisfied.

See [LB15] for discussion of a few variations on the penalty CCP algorithm, such as not using slack variables for constraints that are already convex, i.e., when g_i is affine. Here it is assumed that the g_i are differentiable and have full domain (i.e., R^n). The first condition is not critical: we can replace ∇g_i(x_k) with a subgradient of g_i at x_k if g_i is not differentiable, since the linearization with a subgradient instead of the gradient is still a lower bound on g_i. In some practical applications the second assumption, that the g_i have full domain, does not hold, in which case the penalty CCP algorithm can fail by arriving at a point x_k not in the domain of some g_i, so the convexification step fails. This is one of the issues we address in this paper.

1.2 Disciplined convex programming

Disciplined convex programming (DCP) is a methodology introduced by Grant et al. [GBY06] that imposes a set of conventions that must be followed when constructing (or specifying or defining) convex programs. Conforming problems are called disciplined convex programs. The conventions of DCP restrict the set of functions that can appear in a problem and the way functions can be composed.
Every function in a disciplined convex program must come from a set of atomic functions with known curvature and graph implementation, i.e., representation as partial optimization over a cone program [GB08, NN92]. Every composition of functions f(g_1(x), ..., g_p(x)), where f : R^p → R is convex and g_1, ..., g_p : R^n → R, must satisfy the following composition rule, which ensures that the composition is convex. Let f̃ : R^p → R ∪ {∞} be the extended-value extension of f [BV04, Chap. 3]. One of the following conditions must hold for each i = 1, ..., p:

• g_i is convex and f̃ is nondecreasing in argument i on the range of (g_1(x), ..., g_p(x));
• g_i is concave and f̃ is nonincreasing in argument i on the range of (g_1(x), ..., g_p(x));
• g_i is affine.

The composition rule for concave functions is analogous. These rules allow us to certify the curvature (i.e., convexity or concavity) of functions described as compositions of the basic atomic functions.

A DCP problem has the specific form

    minimize/maximize  o(x)
    subject to         l_i(x) ∼ r_i(x),  i = 1, ..., m,        (2)

where o (the objective), l_i (lefthand sides), and r_i (righthand sides) are expressions (functions of the variable x) with curvature known from the DCP rules, and ∼ denotes one of the relational operators =, ≤, or ≥. In DCP this problem must be convex, which imposes the following conditions on the curvature of the expressions:

• for a minimization problem, o must be convex; for a maximization problem, o must be concave;
• when the relational operator is =, l_i and r_i must both be affine;
• when the relational operator is ≤, l_i must be convex and r_i must be concave;
• when the relational operator is ≥, l_i must be concave and r_i must be convex.

Functions that are affine (i.e., both convex and concave) can match either curvature requirement; for example, we can minimize or maximize an affine expression.
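The composition rule is mechanical enough to implement directly. The sketch below is a pure-Python toy (the names and simplifications are ours, and range/sign information is ignored, unlike in real DCP implementations such as CVXPY); it infers the curvature of a composition f(g_1(x), ..., g_p(x)) from the curvature of the atom f, its per-argument monotonicity, and the curvature of each inner expression:

```python
# Toy curvature inference for a composition f(g_1(x), ..., g_p(x)),
# following the DCP composition rule stated above.  Illustrative only:
# real implementations also track ranges and signs of expressions.

AFFINE, CONVEX, CONCAVE, UNKNOWN = "affine", "convex", "concave", "unknown"

def compose_curvature(f_curv, f_monot, arg_curvs):
    """f_curv: curvature of the atom f (CONVEX or CONCAVE).
    f_monot: per-argument monotonicity of the extended-value extension
             of f ("nondecr", "nonincr", or "none").
    arg_curvs: curvature of each inner expression g_i."""
    for m, c in zip(f_monot, arg_curvs):
        ok = (c == AFFINE
              or (f_curv == CONVEX and ((c == CONVEX and m == "nondecr")
                                        or (c == CONCAVE and m == "nonincr")))
              or (f_curv == CONCAVE and ((c == CONCAVE and m == "nondecr")
                                         or (c == CONVEX and m == "nonincr"))))
        if not ok:
            return UNKNOWN   # the rule cannot certify the curvature
    return f_curv

# exp is convex and nondecreasing, so exp(convex) is certified convex:
print(compose_curvature(CONVEX, ["nondecr"], [CONVEX]))    # convex
# sqrt is concave and nondecreasing, so sqrt(affine) is concave:
print(compose_curvature(CONCAVE, ["nondecr"], [AFFINE]))   # concave
# sqrt(square(x)) = |x| is convex, yet the rule cannot certify it:
print(compose_curvature(CONCAVE, ["nondecr"], [CONVEX]))   # unknown
```

The last example shows that the rule is sufficient but not necessary: a composition can be convex without being certifiable.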
A disciplined convex program can be transformed into an equivalent cone program by replacing each function with its graph implementation. The convex optimization modeling systems YALMIP [Lof04], CVX [CVX12], CVXPY [DB16], and Convex.jl [UMZ+14] use DCP to verify problem convexity and automatically convert convex programs into cone programs, which can then be solved using generic solvers.

1.3 Disciplined convex-concave programming

We refer to a problem as a disciplined convex-concave program if it has the form (2), with o, l_i, and r_i all having known DCP-verified curvature, but without requiring the DCP curvature conditions for the objective and constraints to hold. Such problems include DCP problems as a special case, but also many other nonconvex problems. In a DCCP problem we can, for example, maximize a convex function, subject to nonaffine equality constraints and nonconvex inequality constraints between convex and concave expressions.

The general DC program (1) and the DCCP standard form (2) are equivalent. To express (1) as (2), we write it as

    minimize    f_0(x) − t
    subject to  t = g_0(x)
                f_i(x) ≤ g_i(x),  i = 1, ..., m,

where x is the original optimization variable and t is a new optimization variable. The objective here is convex, the equality constraint is nonconvex, and the inequality constraints are all nonconvex (except in some special cases when f_i or g_i is affine). It is straightforward to express the DCCP problem (2) in the form (1), by identifying the functions o, l_i, and r_i with ±f_i or ±g_i depending on their curvatures.

DCCP problems are an ideal standard form for DC programming because the linearized problem in algorithm 1.1 is a DCP problem whenever the original problem is DCCP. The linearized problem can thus be automatically converted into a cone program and solved using generic solvers.
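As a concrete, if tiny, illustration of algorithm 1.1 on a DCCP problem, consider minimize x², subject to x² ≥ 1, whose local solutions are x = ±1. The pure-Python sketch below is our own (no cone solver is used; the convex subproblem is one-dimensional, so it is minimized by ternary search over an interval): the concave side of the constraint is linearized and the slack is penalized with an increasing weight τ.

```python
# Penalty CCP (algorithm 1.1) on:  minimize x^2  subject to x^2 >= 1.
# Writing the constraint as 1 - g(x) <= 0 with g(x) = x^2 convex, the
# linearization at x_k gives 1 - ghat(z) = c - a*z with a = 2*x_k and
# c = 1 + x_k^2.  Subproblem: minimize z^2 + tau * max(0, c - a*z).

def ternary_min(h, lo, hi, iters=200):
    """Minimize a convex function h over [lo, hi] by ternary search."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if h(m1) < h(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def penalty_ccp(x0, tau=1.0, tau_max=1e4, mu=2.0, iters=30):
    x = x0
    for _ in range(iters):
        a, c = 2.0 * x, 1.0 + x * x
        sub = lambda z, a=a, c=c, t=tau: z * z + t * max(0.0, c - a * z)
        x = ternary_min(sub, 0.0, 3.0)     # restrict to [0, 3] for simplicity
        tau = min(mu * tau, tau_max)       # increase the penalty weight
    return x

print(penalty_ccp(0.5))   # converges to 1.0 (a local solution)
print(penalty_ccp(2.0))   # also converges to 1.0
```

Starting from a positive point the iterates approach x = 1; early iterations may violate the constraint (the slack is positive), and the growing penalty τ_k drives the violation to zero, in the spirit of [LB15].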
2 Domain and subdifferentiability

In this section we delve deeper into an issue that is 'assumed away' in the standard treatments and discussions of DC programming, specifically, how to handle the case when the functions g_i do not have full domain. (The functions f_i can have non-full domains, but this is handled automatically by the conversion into a cone program.)

An example. Suppose the domain of g_i is D_i, for i = 0, ..., m. If D_i ≠ R^n, simply defining the linearization ĝ_i(x; z) as the first-order Taylor expansion of g_i at the point z can lead to failure. The following simple problem gives an example:

    minimize    √x
    subject to  x ≥ −1,

where x ∈ R is the optimization variable. The objective has domain R_+, and the solution is evidently x = 0. The linearized problem in the first iteration of CCP is

    minimize    √x_0 + (1/(2√x_0))(x − x_0)
    subject to  x ≥ −1,

which has solution x_1 = −1. The DCCP algorithm will fail in the first step of the next iteration, since the original objective function is not defined at x_1 = −1. If we add the domain constraint directly into the linearized problem, we obtain x_1 = 0, but the first step of the next iteration also fails here, in a different way: while x_1 is in the domain of the objective function, the objective is not differentiable (or superdifferentiable) at x_1, so the linearization does not exist. This phenomenon of non-subdifferentiability (or non-superdifferentiability) can only occur at a point on the boundary of the domain.

2.1 Linearization with domain

Suppose that the intersection of the domains of all g_i in problem (1) is D = ∩_{i=0}^m D_i. The correct way to handle the domain is to define the linearization of g_i at the point z to be

    ĝ_i(x; z) = g_i(z) + ∇g_i(z)^T (x − z) − I_i(x),        (3)

where the indicator function is

    I_i(x) = 0 if x ∈ D_i,  ∞ if x ∉ D_i,

so any feasible point for the linearized problem is in the domain D. Since g_i is convex, D_i is a convex set and I_i is a convex function.
Therefore the 'linearization' (3) is a concave function; it follows that if we replace the standard linearization in algorithm 1.1 with the domain-restricted linearization (3), the linearized problem is still convex.

2.2 Domain in DCCP

Recall that we defined DCCP problems to ensure that the linearized problem in algorithm 1.1 is a DCP problem. It is not obvious that if we replace the standard linearization with equation (3) the linearized problem is still a DCP problem. In this section we prove that the linearized DCCP problem still satisfies the rules of DCP, or equivalently that each I_i(x) has a known graph implementation or satisfies the DCP composition rule.

If g_i is an atomic function, then we assume that

    D_i = ∩_{j=1}^p {x | A_j x + b_j ∈ K_j},

for some cone constraints with cones K_1, ..., K_p. The assumption is reasonable since g_i itself can be represented as partial optimization over a cone program. The graph implementation of I_i(x) is simply

    minimize    0
    subject to  A_j x + b_j ∈ K_j,  j = 1, ..., p.

The other possibility is that g_i is a composition of atomic functions. Since the original problem is DCCP, we may assume that g_i(x) = f(h_1(x), ..., h_p(x)) for some convex atomic function f : R^p → R and DCP-compliant h_1, ..., h_p : R^n → R such that f(h_1(x), ..., h_p(x)) satisfies the DCP composition rule. Then we have

    I_i(x) = I_f(h_1(x), ..., h_p(x)) + Σ_{j=1}^p I_{h_j}(x),

where I_f is the indicator function for the domain of f, and I_{h_1}, ..., I_{h_p} are defined similarly. Since f is convex, I_f is convex. Moreover, I_f(h_1(x), ..., h_p(x)) satisfies the DCP composition rule. To see why, observe that for i = 1, ..., p, if h_i is convex then by assumption the extended-value extension f̃ is nondecreasing in argument i on the range of (h_1(x), ..., h_p(x)). It follows that I_f is nondecreasing in argument i on the range of (h_1(x), ..., h_p(x)).
Similarly, if h_i is concave then I_f is nonincreasing in argument i on the range of (h_1(x), ..., h_p(x)). An inductive argument shows that I_{h_1}, ..., I_{h_p} are convex and satisfy the DCP rules. We conclude that I_i satisfies the DCP composition rule.

2.3 Subdifferentiability on the boundary

When D ≠ R^n, a solution x̂_k to the linearized problem at iteration k can be on the boundary of the closure of D. It is possible (as our simple example above shows) that the convex function g_i is not subdifferentiable at x̂_k, which means the linearization does not exist and the algorithm fails. This pathology can and does occur in practical problems. In order to handle this, at each iteration, when the subgradient ∇g_i(x_k) of any function g_i does not exist, we simply take a damped step,

    x_k = α x̂_k + (1 − α) x_{k−1},

where 0 < α < 1. If x_0 is in the interior of the domain, then x_k will be in the interior for all k ≥ 0, and ∇g_i(x_k) is guaranteed to exist. The algorithm can (and does, for our simple example) converge to a point on the boundary of the domain, but each iterate is in the interior of the domain, which is enough to guarantee that the linearization exists.

2.4 Initialization

As a heuristic method, the result of algorithm 1.1 generally depends on the initialization, and the initial values of the variables should be in the interior of the domain. In many applications there is a natural way to carry out this initialization; here we discuss a generic method that attempts to do it. Note that in general the problem of finding x_0 ∈ D can be very hard, so we do not expect to have a generic method that always works.

One simple and effective method is to generate random points z_j for j = 1, ..., k_ini, with entries drawn from i.i.d. standard Gaussian distributions. We then project these points onto D, i.e., solve the problems

    minimize    ‖x − z_j‖_2
    subject to  x ∈ D,

for j = 1, ..., k_ini, denoting the solutions as x_j^ini.
These points are on the boundary of D when z_j ∉ D. We then take

    x_0 = (1/k_ini) Σ_{j=1}^{k_ini} x_j^ini.

Forming the average is a heuristic for finding x_0 in the interior of D; it is still possible that x_0 is on the boundary, in which case it is an unacceptable starting point. As a generic practical method, however, this approach seems to work very well.

3 Implementation

The proposed methods described above have been implemented as the Python package DCCP, which extends the package CVXPY. New methods were added to CVXPY to return the domain of a DCP expression (as a list of constraints), and gradients (or subgradients or supergradients) were added to the atoms. The linearization, damping, and initialization are handled by the package DCCP.

Users can form any DCCP problem of the form (2), with each expression composed of functions in the CVXPY library. When the solve(method = 'dccp') method is called on a problem object, DCCP first verifies that the problem satisfies the DCCP rules. The package then splits each non-affine equality constraint l_i = r_i into l_i ≤ r_i and l_i ≥ r_i. The curvature of the objective and of the left and righthand sides of each constraint is checked and, if needed, linearized. In the linearization the function value and gradient are CVXPY parameters, which are constants whose value can change without reconstructing the problem. For each constraint in which the left or righthand side is linearized, a slack variable is introduced and added to the objective. For any expression that is linearized, the domain of the original expression is added to the constraints. Algorithm 1.1 is next applied to the convexified problem. If a valid initial value of a variable is given by the user, it is used; otherwise the generic method described above is used. In each iteration the parameters in the linearizations (which are function and gradient values) are updated based on the current value of the variables. If a gradient (or super- or subgradient) w.r.t.
any variable does not exist, damping is applied to all the variables. The convexified problem at each iteration is solved using CVXPY.

Some useful functions and attributes in the DCCP package are listed below.

• Function is_dccp(problem) returns a boolean indicating whether an optimization problem is DCCP.
• Attribute expression.gradient returns a dictionary of the gradients of a DCP expression w.r.t. its variables at the points specified by variable.value. (This attribute is also in the core CVXPY package.)
• Function linearize(expression) returns the linearization (3) of a DCP expression.
• Attribute expression.domain returns a list of constraints describing the domain of a DCP expression. (This attribute is also in the core CVXPY package.)
• Function convexify(constraint) returns the transformed constraint (without slack variables) satisfying DCP of a DCCP constraint.
• Method problem.solve(method = 'dccp') carries out the proposed penalty CCP algorithm, and returns the value of the transformed cost function, the value of the weight τ_k, and the maximum value of the slack variables at each iteration k. An optional parameter sets the number of times to run CCP, using the randomized initialization.

4 Examples

In this section we describe some simple examples, show how they can be expressed using DCCP, and give the results. In each case we run the default solve method, with no tuning or adjustment of algorithm parameters.

4.1 Circle packing

The aim is to arrange n circles in R^2 with given radii r_i for i = 1, ..., n, so that they do not overlap and are contained in the smallest possible square [Spe13]. The optimization problem can be formulated as

    minimize    max_{i=1,...,n} (‖c_i‖_∞ + r_i)
    subject to  ‖c_i − c_j‖_2 ≥ r_i + r_j,  1 ≤ i < j ≤ n.

4.2 Boolean least squares

A binary signal s ∈ {−1, 1}^n is transmitted through a communication channel, and received as y = As + v, where v ∼ N(0, σ²I) is noise and A ∈ R^{m×n} is the channel matrix.
The maximum likelihood estimate of s given y is a solution of

    minimize    ||y - Ax||_2
    subject to  x_i^2 = 1,  i = 1, ..., n,

where x is the optimization variable [For72]. (It is called a boolean least squares problem when the objective is squared.) The corresponding code for this problem is given below.

x = Variable(n)
prob = Problem(Minimize(norm(y-A*x,2)), [square(x) == 1])
result = prob.solve(method = 'dccp')

Note that the square function in the constraint is elementwise. We consider some numerical examples with m = n = 100, with A_ij ~ N(0, 1) i.i.d., and s_i i.i.d., equal to 1 or -1 with probability 1/2 each. The signal to noise ratio level is n/sigma^2. In each of the 10 independent instances, A and s are generated, and n/sigma^2 takes 8 values from 1 to 17. For each value of n/sigma^2, v is generated. The bit error rates averaged over the 10 instances are shown in figure 2. Also shown are the same results obtained when the boolean least squares problem is solved globally (at considerably more effort) using MOSEK [ApS15]. We can see that the results, judged in terms of bit error rate, are very similar.

Path planning

The goal is to find the shortest path connecting points a and b in R^d that avoids m circles, centered at p_j with radius r_j, j = 1, ..., m [Lat91]. After discretizing the arc-length parametrized path into points x_0, ..., x_n, the problem is posed as

    minimize    L
    subject to  x_0 = a,  x_n = b
                ||x_i - x_{i-1}||_2 <= L/n,  i = 1, ..., n
                ||x_i - p_j||_2 >= r_j,  i = 1, ..., n,  j = 1, ..., m,

where L and x_i are variables, and a, b, p_j, and r_j are given. The code is given below.

Control with collision avoidance

We have n linear dynamic systems, given by

    x_{t+1}^i = A^i x_t^i + B^i u_t^i,  y_t^i = C^i x_t^i,  i = 1, ..., n,

where t = 0, 1, ... denotes (discrete) time, x_t^i are the states, and y_t^i are the outputs. At each time t, for t = 0, ..., T, the n outputs y_t^i are required to keep a distance of at least d_min from each other [MWCD99].
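The coupling between the systems enters only through the pairwise distance constraints. A small helper (our own illustration, not part of DCCP) makes the feasibility check explicit: a candidate set of output trajectories satisfies the collision-avoidance constraint exactly when the minimum pairwise distance over all times is at least d_min.

```python
import math

def min_pairwise_distance(outputs):
    # outputs[t][i] is the position of system i at time t (a tuple of floats).
    # The collision-avoidance constraint ||y_t^i - y_t^j||_2 >= d_min holds
    # for all t and all i < j iff the value returned here is >= d_min.
    best = float("inf")
    for snapshot in outputs:
        n = len(snapshot)
        for i in range(n):
            for j in range(i + 1, n):
                best = min(best, math.dist(snapshot[i], snapshot[j]))
    return best

# Two systems over two time steps; d_min = 0.6 matches the numerical
# instance discussed in the text.
traj = [[(0.0, 0.0), (1.0, 0.0)], [(0.2, 0.1), (0.9, 0.3)]]
assert min_pairwise_distance(traj) >= 0.6
```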
The initial states x_0^i and ending states x_T^i are given by x_init^i and x_end^i, and the inputs are limited by ||u_t^i||_inf <= f_max. We will minimize a sum of the l_1 norms of the inputs, an approximation of fuel use. (Of course we can have any convex state and input constraints, and any convex objective.) This gives the problem

    minimize    sum_{i=1}^n sum_{t=0}^{T-1} ||u_t^i||_1
    subject to  x_0^i = x_init^i,  x_T^i = x_end^i,  i = 1, ..., n
                x_{t+1}^i = A^i x_t^i + B^i u_t^i,  t = 0, ..., T-1,  i = 1, ..., n
                ||y_t^i - y_t^j||_2 >= d_min,  t = 0, ..., T,  1 <= i < j <= n
                y_t^i = C^i x_t^i,  ||u_t^i||_inf <= f_max,  t = 0, ..., T-1,  i = 1, ..., n,

where x_t^i, y_t^i, and u_t^i are variables. The code can be written as follows. We consider an instance with n = 2, with outputs (positions) y_t^i in R^2, d_min = 0.6, f_max = 0.5, and T = 100. The linear dynamic system matrices are

Sparse recovery using the l_{1/2} 'norm'

The aim is to recover a sparse nonnegative signal x_0 in R^n from a measurement vector y = Ax_0, where A in R^{m x n} (with m < n) is a known sensing matrix [CW08]. A common heuristic based on convex optimization is to minimize the l_1 norm of x (which reduces here to the sum of entries of x) subject to y = Ax (and here, x >= 0). It has been proposed to minimize the sum of the squareroots of the entries of x, which since x >= 0 is the same as minimizing the squareroot of the l_{1/2} 'norm' (which is not convex, and therefore not a norm), to obtain better recovery. The optimization problem is

    minimize    sum_{i=1}^n sqrt(x_i)
    subject to  y = Ax,

where x is the variable. (The constraint x >= 0 is implicit, since this is the objective domain.) This is a nonconvex problem, directly in DCCP form. The corresponding code is as follows.
x = Variable(n,1)
x.value = np.ones((n,1))
prob = Problem(Minimize(sum_entries(sqrt(x))), [A*x == y])
result = prob.solve(method = 'dccp')

In a numerical simulation, we take n = 100, A_ij ~ N(0, 1), the positions of the nonzero entries in x_0 drawn from a uniform distribution, and the nonzero values equal to the absolute values of N(0, 100) random variables. To count the probability of recovery, 100 independent instances are tested, and a recovery is successful if the relative error ||x - x_0||_2 / ||x_0||_2 is less than 0.01. In each instance, the cardinality takes 6 values from 30 to 50, according to which x_0 is generated, and A is generated for each m taking one of the 6 values from 50 to 80. The results in figure 5 verify that nonconvex recovery is more effective than convex recovery.

Phase retrieval

Phase retrieval is the problem of recovering a signal x_0 in C^n from the magnitudes of the complex inner products x_0^* a_k, for k = 1, ..., m, where a_k in C^n are the given measurement vectors [CESV13]. The recovery problem can be expressed as

    find        x
    subject to  |x^* a_k| = y_k,  k = 1, ..., m,

where x in C^n is the optimization variable, and a_k and y_k in R_+ are given. The lefthand sides of the constraints are convex quadratic functions of the real and imaginary parts of the arguments, which are in turn linear functions of the variable x. The following code segment specifies the problem. CVXPY (and therefore DCCP) does not yet support complex variables and constants, so we expand complex numbers into real and imaginary parts. We consider an instance with n = 128 and m = 3n. The real and imaginary parts of each entry of x_0 and a_k are i.i.d. N(0, 1). The result in figure 6 shows that the phase is recovered (up to a global constant).

Magnitude filter design

A filter is characterized by its impulse response {h_k}, k = 1, ..., n. Its frequency response H : [0, pi] -> C is defined as H(omega) = sum_{k=1}^n h_k e^{-i omega k}, where i = sqrt(-1).
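The magnitude |H(omega)| that appears in the design specifications can be evaluated directly from this definition; a minimal pure-Python helper (our own, for illustration):

```python
import cmath

def freq_response_mag(h, omega):
    # |H(omega)| for H(omega) = sum_{k=1}^n h_k e^{-i omega k}, matching
    # the definition of the frequency response given above.
    return abs(sum(hk * cmath.exp(-1j * omega * k)
                   for k, hk in enumerate(h, start=1)))

# A single unit tap gives |H(omega)| = 1 at every frequency,
assert abs(freq_response_mag([1.0], 0.7) - 1.0) < 1e-12
# and at omega = 0 the magnitude is |sum of the coefficients|.
assert abs(freq_response_mag([0.5, 0.25, 0.25], 0.0) - 1.0) < 1e-12
```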
In magnitude filter design, the goal is to find impulse response coefficients that meet certain specifications on the magnitude of the frequency response [WBV99]. We will consider a typical lowpass filter design problem, which can be expressed as

    minimize    U_stop
    subject to  L_pass <= |H(pi*l/N)| <= U_pass,  l = 0, ..., l_pass - 1
                |H(pi*l/N)| <= U_pass,  l = l_pass, ..., l_stop - 1
                |H(pi*l/N)| <= U_stop,  l = l_stop, ..., N,

where h in R^n and U_stop in R are the optimization variables. The passband magnitude limits L_pass and U_pass are given. The code can be written as follows.

Sparse singular vectors

The left singular vectors associated with the smallest and largest singular values of a matrix A (globally) minimize and maximize ||Ax||_2 subject to ||x||_2 = 1. Here we seek sparse vectors, with ||x||_2 = 1, which make ||Ax||_2 large or small [WTH09]. To induce sparsity in x, we limit the l_1 norm of x. (We could also limit a nonconvex sparsifier, as above in sparse recovery.) This leads to the problems

    minimize/maximize  ||Ax||_2
    subject to         ||x||_2 = 1,  ||x||_1 <= mu,

where x in R^n is the variable and mu >= 0 controls the sparsification, to find x that is sparse, satisfies ||x||_2 = 1, and makes ||Ax||_2 small or large. We call such a vector, with some abuse of notation, a sparse singular vector. Since ||x||_2 = 1, we know 1 <= ||x||_1 <= sqrt(n), so the range of mu can be set as [1, sqrt(n)]. The code (for minimization) is the following.

x = Variable(n)
prob = Problem(Minimize(norm(A*x)), [norm(x) == 1, norm(x,1) <= mu])
prob.solve(method = 'dccp')

We consider an instance for minimization with a random matrix A in R^{100 x 100} with i.i.d. entries A_ij ~ N(0, 1), with (positive) smallest singular value sigma_min. The parameter mu is swept from 1 to 10 in increments of 0.2, and for each value of mu the result of solving the problem above is shown as a red dot in figure 8. The leftmost point in the figure corresponds to ||x||_1 <= 1, which gives cardinality 1.
(In this instance it achieves the globally optimal value, which is the smallest of the norms of the columns of A.)

Gaussian covariance matrix estimation

Suppose y_i in R^n, i = 1, ..., N, are points drawn i.i.d. from N(0, Sigma). Our goal is to estimate the parameter Sigma given these samples. The maximum likelihood problem of estimating Sigma is convex in the inverse of Sigma, but not in Sigma [BGdN06]. If there are no other constraints on Sigma, the maximum likelihood estimate is Sigma_hat = (1/N) sum_{i=1}^N y_i y_i^T, the empirical covariance matrix. We consider here the case where the signs of the off-diagonal entries in Sigma are known; that is, we know which entries of Sigma are negative, which are zero, and which are positive. (So we know which components of y are uncorrelated, and which are negatively and positively correlated.) The maximum likelihood problem is then

    maximize    -log det(Sigma) - (1/N) sum_{i=1}^N y_i^T Sigma^{-1} y_i
    subject to  Sigma_{Omega+} >= 0,  Sigma_{Omega-} <= 0,  Sigma_{Omega0} = 0,

where Sigma is the variable, and the index sets Omega+, Omega-, and Omega0 are given. The objective is a difference of convex functions, so we transform the problem into the following DCCP problem with an additional variable t:

    maximize    -log det(Sigma) - t
    subject to  (1/N) sum_{i=1}^N y_i^T Sigma^{-1} y_i <= t
                Sigma_{Omega+} >= 0,  Sigma_{Omega-} <= 0,  Sigma_{Omega0} = 0.

The code is as follows. An example with n = 20 and N = 30 is in figure 9. Not surprisingly, knowledge of the signs of the entries of Sigma allows us to obtain a much better estimate of the true covariance matrix.

Figure 1: Circle packing.

where the variables are the centers of the circles c_i in R^2, i = 1, ..., n, and r_i, i = 1, ..., n, are given data.
If l is the value of the objective function, the circles are contained in the square [-l, l] x [-l, l]. This problem can be specified in DCCP (and solved, in the last line) as follows.

c = Variable(n,2)
constr = []
for i in range(n-1):
    for j in range(i+1,n):
        constr += [norm(c[i,:]-c[j,:]) >= r[i]+r[j]]
prob = Problem(Minimize(max_entries(row_norm(c,'inf')+r)), constr)
prob.solve(method = 'dccp')

The result obtained for an instance of the problem, with n = 14 circles, is shown in figure 1. The fraction of the square covered by circles is 0.73.

Figure 2: Boolean least squares.

Figure 3: Path planning.

The path planning code is:

constr = [x[:,0] == a, x[:,n] == b]
for i in range(1,n+1):
    constr += [norm(x[:,i]-x[:,i-1],2) <= L/n]
    for j in range(m):
        constr += [norm(x[:,i]-center[:,j],2) >= r[j]]
prob = Problem(Minimize(cost), constr)
result = prob.solve(method = 'dccp')

An example with d = 2 and n = 50 is shown in figure 3.

Figure 4: Optimal control with collision avoidance. Left: Output trajectory without collision avoidance. Middle: Output trajectory with collision avoidance. Right: Distance between outputs versus time.

The results are in figure 4, where the black arrows in the first two figures show initial and final states (position and velocity), and the black dashed line in the third figure shows d_min.

Figure 5: Sparse recovery. Left: l_1 norm. Right: Sqrt of l_{1/2} 'norm'.

Figure 6: Phase retrieval.

An instance of low pass filter design, with n = 10 and N = 100, is shown in figure 7.

Figure 7: Low pass filter design. The frequency response magnitude upper and lower limits are shown.

Figure 8: Sparse singular vectors.

The covariance estimation code fragment (with sign index sets pos, neg, and zero) reads:

trace_val = trace(sum([matrix_frac(y[:,i], Sigma)/N for i in range(N)]))
prob = Problem(Maximize(cost), [trace_val <= t, Sigma[pos] >= 0,
                                Sigma[neg] <= 0, Sigma[zero] == 0])
prob.solve(method = 'dccp')

Figure 9: Gaussian covariance matrix estimation.
The phase retrieval code is:

x = Variable(2,n)
z = []
constr = []
c = np.matrix([[0,1],[-1,0]])
for k in range(m):
    z.append(Variable(2))
    z[-1].value = np.random.rand(2,1)
    constr += [norm(z[-1]) == y[k]]
    constr += [z[-1] == x*Ar[k,:] + c*x*Ai[k,:]]
prob = Problem(Minimize(0), constr)
result = prob.solve(method = 'dccp')

The filter design code is:

omega = np.linspace(0,np.pi,N)
h = Variable(n)
U_stop = Variable()
constr = []
for l in range(len(omega)):
    if l < l_pass:
        constr += [norm(expo[l]*h,2) >= L_pass]
    if l < l_stop:
        constr += [norm(expo[l]*h,2) <= U_pass]
    else:
        constr += [norm(expo[l]*h,2) <= U_stop]
prob = Problem(Minimize(U_stop), constr)
result = prob.solve(method = 'dccp')

Acknowledgments

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747, by the DARPA X-DATA and SIMPLEX programs, and by the CSC State Scholarship Fund.

References

[Agi66] N. Agin. Optimum seeking with branch and bound. Management Science, 13:176-185, 1966.

[ALNT08] L. T. H. An, H. M. Le, V. V. Nguyen, and P. D. Tao. A DC programming approach for feature selection in support vector machines learning. Advances in Data Analysis and Classification, 2(3):259-278, 2008.

[An15] L. T. H. An. DC programming and DCA: local and global approaches - theory, algorithms and applications. http://lita.sciences.univ-metz.fr/~lethi/DCA.html, 2015.

[ApS15] MOSEK ApS. The MOSEK Python optimizer API manual. Version 7.1 (Revision 49), 2015.

[BGdN06] O. Banerjee, L. E. Ghaoui, A. d'Aspremont, and G. Natsoulis. Convex optimization techniques for fitting sparse Gaussian graphical models. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 89-96, 2006.

[BV04] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, 2004.

[CESV13] E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1):199-225, 2013.

[CVX12] CVX Research, Inc. CVX: Matlab software for disciplined convex programming, version 2.0. http://cvxr.com/cvx, August 2012.

[CW08] E. J. Candès and M. B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21-30, 2008.

[DB16] S. Diamond and S. Boyd. CVXPY: A Python-embedded modeling language for convex optimization. To appear, Journal of Machine Learning Research, 2016.

[For72] G. Forney. Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference. IEEE Transactions on Information Theory, 18(3):363-378, 1972.

[GB08] M. Grant and S. Boyd. Graph implementation for nonsmooth convex programs. In V. Blondel, S. Boyd, and H. Kimura, editors, Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences. Springer-Verlag Limited, 2008. http://stanford.edu/~boyd/graph_dcp.html.

[GBY06] M. Grant, S. Boyd, and Y. Ye. Disciplined convex programming. In L. Liberti and N. Maculan, editors, Global Optimization: From Theory to Implementation, Nonconvex Optimization and its Applications, pages 155-210. Springer, 2006.

[Har59] P. Hartman. On functions representable as a difference of convex functions. Pacific Journal of Mathematics, 9(3):707-713, 1959.

[HPT95] R. Horst, P. M. Pardalos, and N. V. Thoai. Introduction to Global Optimization. Kluwer Academic Publishers, Dordrecht, Netherlands, 1995.

[HT99] R. Horst and N. V. Thoai. DC programming: overview. Journal of Optimization Theory and Applications, 103(1):1-43, 1999.

[Kar72] R. M. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, pages 85-104. Plenum, 1972.

[Lan04] K. Lange. Optimization. Springer Texts in Statistics. Springer, New York, 2004.

[Lat91] J. C. Latombe. Robot Motion Planning. Kluwer Academic Publishers, 1991.

[LB15] T. Lipp and S. Boyd. Variations and extension of the convex-concave procedure. Optimization and Engineering, pages 1-25, 2015.

[LHY00] K. Lange, D. R. Hunter, and I. Yang. Optimization transfer using surrogate objective functions. Journal of Computational and Graphical Statistics, 9(1):1-20, 2000.

[Lof04] J. Lofberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the IEEE International Symposium on Computer Aided Control Systems Design, pages 284-289, September 2004.

[LOX15] Y. Lou, S. Osher, and J. Xin. Computational aspects of constrained L1-L2 minimization for compressive sensing. In Advances in Intelligent Systems and Computing, volume 359, pages 169-180, 2015.

[LR87] R. J. A. Little and D. B. Rubin. Statistical Analysis with Missing Data. John Wiley & Sons, New York, 1987.

[LW66] E. L. Lawler and D. E. Wood. Branch-and-bound methods: a survey. Operations Research, 14:699-719, 1966.

[LZOX15] Y. Lou, T. Zeng, S. Osher, and J. Xin. A weighted difference of anisotropic and isotropic total variation model for image processing. SIAM Journal on Imaging Sciences, 8(3):1798-1823, 2015.

[MK07] G. McLachlan and T. Krishnan. The EM Algorithm and Extensions. John Wiley & Sons, 2007.

[MWCD99] A. Miele, T. Wang, C. S. Chao, and J. B. Dabney. Optimal control of a ship for collision avoidance maneuvers. Journal of Optimization Theory and Applications, 103(3):495-519, 1999.

[NN92] Y. Nesterov and A. Nemirovsky. Conic formulation of a convex programming problem and duality. Optimization Methods and Software, 1(2):95-115, 1992.

[NW06] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2006.

[Spe13] E. Specht. Packomania. http://www.packomania.com/, October 2013.

[THA+14] J. Thai, T. Hunter, A. K. Akametalu, C. J. Tomlin, and A. M. Bayen. Inverse covariance estimation from data with missing values using the concave-convex procedure. In 53rd IEEE Conference on Decision and Control (CDC), pages 5736-5742, 2014.

[TS86] P. D. Tao and E. B. Souad. Algorithms for solving a class of nonconvex optimization problems. Methods of subgradients. In J.-B. Hiriart-Urruty, editor, FERMAT Days 85: Mathematics for Optimization, pages 249-271. Elsevier Science Publishers B.V., 1986.

[UMZ+14] M. Udell, K. Mohan, D. Zeng, J. Hong, S. Diamond, and S. Boyd. Convex optimization in Julia. In Proceedings of the Workshop for High Performance Technical Computing in Dynamic Languages, pages 18-28, 2014.

[WBV99] S. P. Wu, S. Boyd, and L. Vandenberghe. FIR filter design via spectral factorization and convex optimization. In Applied and Computational Control, Signals, and Circuits: Volume 1, pages 215-245. Birkhäuser, Boston, 1999.

[WTH09] D. M. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 2009.

[YR03] A. L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15(4):915-936, 2003.
Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution over the Smart Grid with Switched Filters

E. Gonzalez, L. B. Kish, R. S. Balog, and P. Enjeti

Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas, United States of America; University of Adelaide, Australia

PLoS ONE 8(7): e70206, 2013. doi:10.1371/journal.pone.0070206. arXiv:1303.3262.

Abstract. We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions.
Received April 2, 2013; Accepted June 18, 2013; Published July 25, 2013.

Funding: The authors have no support or funding to report. Competing Interests: The authors have declared that no competing interests exist. * E-mail: [email protected]. These authors contributed equally to this work.

Introduction

KLJN, the Information Theoretically Secure Wire-based Key Exchange Scheme

On February 12, 2013, President Obama issued an executive order to outline policies directing companies and operators of vital infrastructure, such as power grids, to set standards for cybersecurity [1]. This step is one of the indications of an urgent need to protect intelligence, companies, infrastructure, and personal data in an efficient way. In this paper, we propose a solution that provides information theoretically (that is, unconditionally) secure key exchange over the smart grid.
This method is controlled by filters and protects against man-in-the-middle attacks. A smart grid [2,3] is an electrical power distribution network that uses information and communications technology to improve the security [4,5], reliability, efficiency, and sustainability of the production and distribution of electricity. A form of cyber-physical system, the smart grid enables greater efficiency through a higher degree of awareness and control, but it also introduces new failure modes associated with data being intercepted and compromised. Private-key-based secure communications require a shared secret key between Alice and Bob, who may communicate over remote distances. In today's secure communications, sharing such a key also utilizes electronic communications, because courier and mail services are slow. However, software-based key distribution methods offer only limited, computationally conditional security levels; thus they are not future-proof. With sufficiently enhanced computing power, the eavesdropper (Eve) can crack the key and all the communications that use that key. Therefore, unconditional security (indicating that the security holds even against infinite computational power), which is the popular wording of the term "information theoretic security" [6], requires more than a software solution: it needs the utilization of the proper laws of physics. The oldest scheme that claims information theoretic security by utilizing the laws of physics is quantum key distribution (QKD). While the security of current QKD schemes has been debated/compromised [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21], the discussions indicate that there is a potential to reach a satisfactory security level in the future, though this may require a new approach [7].
However, current QKD devices are prohibitively expensive and have other practical weaknesses: they are sensitive to vibrations, bulky, limited in range, and require a special "dark optical fiber" cable with sophisticated infrastructure. On the other hand, the smart grid offers a unique way of secure key exchange, because each household (host) in the grid is electrically connected. To utilize a wire connection for secure key exchange, a different set of the laws of physics (not the laws applied for QKD, which works with optical fibers) must be utilized. Recently a classical statistical physical alternative to QKD, the Kirchhoff-Law-Johnson-(like)-Noise (KLJN) key exchange system, has been proposed [22][23][24][25][26][27][28]; it is a wire-based scheme that is free from several weaknesses of QKD. A recent survey is given in [24]. Similarly to QKD, KLJN is also an information theoretically secure key distribution [24]; however, it is robust; it is not sensitive to vibrations; it has unlimited range [25]; it can be integrated on chips [26]; it can use existing wire infrastructure such as power lines [27]; and KLJN-based networks can also be constructed [28]. The KLJN channel is a wire [24]. At the beginning of each clock cycle (note that the 50 Hz/60 Hz AC grid provides a universal time synchronization), Alice and Bob, who have identical pairs of resistors R_L and R_H (representing the 0 and 1 bit situations), randomly select and connect one of the resistors; see Figure 1. In practical applications, voltage noise generators enhance the Johnson noise of the resistors so that all resistors in the system have the same, publicly known effective noise temperature T_eff (where T_eff >= 10^9 Kelvin). The enhanced Johnson noise voltages {U_{L,A}(t) or U_{H,A}(t); and U_{L,B}(t) or U_{H,B}(t)} of the resistors result in a channel noise voltage between the wire and the ground, and a channel noise current I_ch(t) in the wire.
Low-pass filters are used because the noise bandwidth, which we also call the KLJN band B_kljn (its value depends on the range), must be chosen so narrow that wave, reflection, and propagation/delay effects are negligible; otherwise the security is compromised [22]. Alice and Bob can measure the mean-square amplitudes <U_ch^2(t)> and/or <I_ch^2(t)> within the KLJN band in the line. From any of these values, the loop resistance can be calculated [22] by using the Johnson noise formula with the noise bandwidth B_kljn and the effective temperature T_eff:

    <U_ch^2(t)> = 4 k T_eff R_loop B_kljn,    <I_ch^2(t)> = 4 k T_eff B_kljn / R_loop.    (1)

Alice and Bob know their own resistor choice; thus, from the loop resistance, they can deduce the resistance value and the actual bit status at the other end of the wire. In the ideal situation, the cases R_L-R_H and R_H-R_L represent a secure bit exchange event because they cannot be distinguished by the measured mean-square values. Eve can do the very same measurements, but she has no knowledge about any of the resistance choices; thus she is unable to extract the key bits from the measured loop resistance. The KLJN protocol can also be applied to several other wire networks, such as electronic equipment that is not meant to be reverse engineered. However, in this study we focus on applying the KLJN protocol to the smart grid.

Utilizing the Smart Power Grid for Information Theoretic Secure Key Exchange

The disadvantage of the KLJN key exchange protocol is that it requires a wire connection. Investors are hesitant to cover the cost of new infrastructure solely for the purpose of security. On the other hand, virtually every building in the civilized world is connected to the electrical power grid. This fact is very motivating to explore the possibility of using the power grid as the infrastructure for the KLJN protocol. However, only the single loop shown in Figure 1 is unconditionally secure.
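The indistinguishability of the two secure-bit cases can be checked numerically. The sketch below (our own illustration with arbitrary parameter values; it models only the idealized single loop of Figure 1) draws Gaussian noise samples for the two generators, forms the loop current, and recovers the loop resistance from the mean-square current via Eq. (1); swapping Alice's and Bob's resistors leaves the estimate unchanged, which is what denies Eve the key bit.

```python
import random

def estimate_loop_resistance(R_alice, R_bob, T_eff=1e9, B=1e3, n=100000, seed=1):
    # Idealized KLJN loop: each generator emits Gaussian noise of variance
    # 4 k T_eff R B; the loop current is I = (U_A - U_B) / (R_A + R_B);
    # the loop resistance is then recovered from <I^2> via Eq. (1).
    k = 1.380649e-23  # Boltzmann constant
    rng = random.Random(seed)
    msq_I = 0.0
    for _ in range(n):
        uA = rng.gauss(0.0, (4 * k * T_eff * R_alice * B) ** 0.5)
        uB = rng.gauss(0.0, (4 * k * T_eff * R_bob * B) ** 0.5)
        i = (uA - uB) / (R_alice + R_bob)
        msq_I += i * i
    return 4 * k * T_eff * B / (msq_I / n)

R_L, R_H = 1e3, 1e4
r1 = estimate_loop_resistance(R_L, R_H)  # Alice low, Bob high
r2 = estimate_loop_resistance(R_H, R_L)  # Alice high, Bob low
# Both secure-bit cases yield the same measurable loop resistance.
assert abs(r1 - r2) / (R_L + R_H) < 0.02
```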
When Alice and Bob are two remote hosts in the smart grid, they should indeed experience a single-loop connection as in Figure 1. Thus, for smart grid applications, proper filters must be installed and controlled for the KLJN frequency band where the key exchange operates. Though simple examples have been outlined to show that a KLJN key exchange between two remote points in a radial power grid with filters [27,28] can be achieved, neither details about the structure of the filter units nor network protocols to connect every host on the grid with all other hosts have been given. The method described in [28] is of high speed because, if the units do simultaneous key exchange, they have a joint network key; thus the units must trust each other. In the present system the units have independent keys. The present paper aims to make the first steps in this direction by presenting a working scheme with a scaling analysis of the speed of key exchange versus network size. We limit our network to a one-dimensional linear chain network to utilize the smart power grid for KLJN secure key exchange. We show and analyze a protocol to efficiently supply every host with proper secure keys to separately communicate with all the other hosts.

Figure 1. The core of the idealized KLJN key scheme [22][23][24][25]. This simple system is secure only against passive attacks in the idealized case (mathematical limit). Security enhancements (including filters) to provide protection against invasive attacks [22][23][24][25] and against other types of vulnerabilities are not shown. In practical applications electronic noise generators emulate an enhanced Johnson noise with a publicly agreed high effective temperature.

Discussions and Results

Because the pattern of connections between KLJN units must be varied to provide the exchange of a separate secure key for each possible pair of hosts, the network of filters and their connections must be varied accordingly. The power line filter technology is
doi:10.1371/journal.pone.0070206.g001 already available [29,30] and we will show that the required results can be achieved by switching on/off proper filtering units, in a structured way in the smart grid. We will need filters to pass or reject the KLJN frequency band B kljn and/or the power frequency (50 or 60 Hz). When both B kljn and f p are passed, it is a short; and when both of them are rejected, it is a break. We will call these filters ''switched filters''. Switched Filters We call the functional units connected to the smart power grid hosts. A host is able to execute a KLJN key exchange toward left and right in a simultaneous way. That means each host has two independent KLJN units. The filter system must satisfy the following requirements: i) Hosts that currently do not execute KLJN key exchange should not interfere with those processes even if the KLJN signals pass through their connections. ii) Moreover, each host should be able to extract electrical power from the electric power system without disturbing the KLJN key exchanges. We define the size of a network as being of size N when that network has Nz1 hosts. An example of a network of size N = 7 is illustrated in Figure 2. Intermediate hosts in the network can be in two different states according to the need: a) State 1 is defined when KLJN bandwidth B kljn is not allowed into the host. b) State 2 is defined when KLJN bandwidth B kljn is allowed into the host. Hosts at the two ends can be in similar situations except that they can communicate in only a single directions, thus they are special, limited cases of the intermediate hosts to which we are focusing our considerations when discussing filters. Filter boxes at each host will distribute the KLJN signals and the power, and they are responsible for connecting the proper parties for the KLJN key exchange and to supply the hosts with power, see Figure 3. The filters boxes can be controlled either by a central server and/or an automatic algorithm. 
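The pass/reject behavior of the switched filters in the two host states (tabulated later in Tables 1-4) can be encoded as a small lookup. The sketch below is an illustrative model of our own: the filter labels A-E and the state semantics are taken from the paper's figures and tables, while the dictionary encoding and the helper function are assumptions for illustration.

```python
# Illustrative lookup of the switched-filter truth tables (cf. Tables 1-4).
# Each entry maps (state, filter) -> (B_kljn allowed, power frequency f_p allowed).
FILTER_TABLE = {
    # State 1: inactive host (no access to the KLJN band)
    (1, "A"): (True,  True),   # shorted: passes everything between left and right
    (1, "B"): (False, False),  # disconnected: break
    (1, "C"): (True,  False),  # passes the KLJN band only
    (1, "D"): (False, True),   # passes the power frequency only
    (1, "E"): (False, True),
    # State 2: active host (executing KLJN key exchange)
    (2, "A"): (False, True),
    (2, "B"): (True,  False),
    (2, "C"): (False, False),
    (2, "D"): (False, True),
    (2, "E"): (False, True),
}

def kljn_band_reaches_host(state: int) -> bool:
    """In this simplified model the host sees the KLJN band only through
    filter B, which passes B_kljn solely in State 2 (active host)."""
    return FILTER_TABLE[(state, "B")][0]
```

The lookup makes the two requirements explicit: in State 1 only the power frequency reaches the host, while in State 2 the KLJN band is routed to the host's two KLJN units.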
In the following section we discuss the protocol of this control. Each filter box has three switched filters and a corresponding output wire, see Figure 3. Properly controlled filter boxes provide non-overlapping KLJN loops between the hosts, as shown below. The KLJN loops must be non-overlapping because the KLJN protocol is fundamentally peer-to-peer: if overlapping loops were allowed, Eve might sit in between, and the scheme would require the trust of the intermediate hosts. A drawback of peer-to-peer networks is that they require direct connections; note that QKD also requires direct connections. The reason for having two KLJN units per host is to decrease the time needed to connect every host, by allowing simultaneous loops toward the left and the right without overlap. Figure 4 shows an example for N = 7. A solid black line means that both the KLJN bandwidth and the power frequency pass through (ordinary wire: the original line). The (red) dotted lines carry B_kljn only (f_p is rejected). The (blue) dashed lines indicate the opposite situation: only the power frequency passes and the KLJN bandwidth is rejected. When there is a key exchange between the first host (host 0) and the last host (host 7) over the whole network (Figure 4), the hosts in between (hosts 1 through 6) are not allowed to access the KLJN band. In this state, the filter boxes of hosts 1 through 6 must separate their respective hosts from the KLJN band while still supplying them with power. We call this working mode of the filter boxes of non-active hosts State 1. The wiring and frequency transfer of the filter box in State 1 are shown in Figure 5 and Tables 1 and 2. See Figures 6 and 7 for another example: seven key exchanges occurring simultaneously, with every host in the network active (allowed access to the KLJN band). The power filters of these hosts must separate the KLJN loops by rejecting B_kljn.
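Because the loops must be strictly non-overlapping, any scheduler needs a disjointness test between host intervals on the chain. A minimal sketch (our own helper, not code from the paper); note that two loops sharing only an endpoint host are allowed, since each host has independent left and right KLJN units:

```python
def loops_overlap(loop_a, loop_b):
    """Each loop is a pair (i, j) of host indices.
    On a linear chain, two KLJN loops overlap if their wire segments share
    a stretch of line, i.e. if the open intervals (i, j) intersect.
    Sharing a single endpoint host is allowed (left/right KLJN units)."""
    (a1, a2), (b1, b2) = sorted(loop_a), sorted(loop_b)
    return a1 < b2 and b1 < a2

def schedule_is_valid(loops):
    """True if all loops of one simultaneous KE round are pairwise non-overlapping."""
    return all(not loops_overlap(x, y)
               for k, x in enumerate(loops) for y in loops[k + 1:])
```

For example, the nearest-neighbor round [(0, 1), (1, 2), (2, 3)] is valid, while [(0, 3), (2, 5)] is not, because the two loops would share the wire segment between hosts 2 and 3.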
We call this working mode of the filter boxes of hosts executing key exchange State 2. The wiring and frequency transfer of the filter box in State 2 are shown in Figure 6 and Tables 3 and 4. In this section we have shown that the line can be packed with non-overlapping KLJN loops to execute simultaneous key exchanges between selected hosts. In the next section, we propose a network protocol that provides secure keys enabling each host to communicate securely, via the internet or other publicly accessible channels, with arbitrary other hosts. The time requirement of key exchange over the whole smart grid is also analyzed versus the network size N.

Protocol and Speed

To quickly and efficiently connect every host with all other hosts in the same one-dimensional network, we need to establish a protocol. The protocol must make every possible connection in the network, must not overlap loops, and must be quick and efficient by making as many simultaneous non-overlapping loops as possible. To determine the time and speed requirements for establishing a KLJN secure key exchange, we must first define terms. In the classical KLJN system, where only the noise existed in the wire, the low-frequency cutoff of the noise was 0 Hz and the high-frequency cutoff was B_kljn. For KLJN in a smart grid the situation is different because of the power frequency; however, at short distances (less than 10 miles) the B_kljn band can extend beyond the power frequency f_p, and the difference is negligible. The shortest characteristic time in the system is then the correlation time t_kljn of the noise (t_kljn ≈ 1/B_kljn). B_kljn is determined by the distance L between Alice and Bob so that B_kljn << c/L [21] (for example, B_kljn << 100 kHz for L = 1 kilometer).
Alice and Bob must gather statistics on the noise, which typically requires a duration of around 100 t_kljn [25] (or 0.01 seconds if we use B_kljn = 10 kHz) to reach a sufficiently high fidelity (note that faster performance is expected in advanced KLJN methods [31]). A bit exchange (BE) occurs when Alice and Bob happen to have different resistor values; this happens on average once per 200 t_kljn, or 0.02 seconds if B_kljn = 10 kHz. The secure key can have any length. For example, a key length of 100 bits requires 100 BEs, which takes on average 20000 t_kljn, approximately 2 seconds if B_kljn is 10 kHz. The total time needed to complete one secure key exchange is one KLJN secure key exchange period (KE). While the key exchange is slow, the system has the advantage that it runs continuously (not only during the handshake period, as in common secure internet protocols), so a large number of secure key bits is produced during continuous operation. For the sake of simplicity, we use the pessimistic estimate of a uniform KE duration determined by the largest distance in the network, even though keys over short distances can be exchanged at higher speed. The protocol we propose here first connects the nearest neighbors of every host; this allows the highest number of simultaneous non-overlapping loops per KE and requires only one KE to complete. The protocol then connects the second nearest neighbors; this allows the second highest number of simultaneous loops per KE. However, due to the requirement of avoiding overlapping loops, connecting all pairs of second nearest neighbors requires two KEs. The protocol then connects the third nearest neighbors, which requires 3 KEs to complete and connects the third most simultaneous loops per KE. This procedure continues as long as the neighbor distance i does not exceed half the size of the network.
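The timing quantities above follow directly from B_kljn. A back-of-the-envelope calculation (our own helper; the factors 100 and 200 and the 100-bit key length are the example values used in the text):

```python
def kljn_timing(b_kljn_hz: float, key_bits: int = 100):
    """Estimate KLJN timing from the noise bandwidth.
    t_kljn : noise correlation time, ~ 1/B_kljn
    t_be   : mean time per bit exchange, ~ 200 * t_kljn
             (a BE needs ~100 t_kljn of noise statistics and occurs for
              roughly half of the resistor choices, hence 200 on average)
    t_ke   : one key exchange period (KE) for the requested key length
    """
    t_kljn = 1.0 / b_kljn_hz
    t_be = 200.0 * t_kljn
    t_ke = key_bits * t_be
    return t_kljn, t_be, t_ke

# Example from the text: B_kljn = 10 kHz and a 100-bit key give a 2-second KE.
_, _, t_ke = kljn_timing(10e3)
```

With B_kljn = 10 kHz this reproduces the figures quoted above: 0.01 s of statistics per bit attempt, 0.02 s per BE on average, and about 2 s per 100-bit KE.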
If the neighbor distance i satisfies i > N/2 then, to avoid overlapping loops, only one connection per KE is possible. As an example, we will show in the next section that for N = 7 (see Figure 2), 16 KEs (approximately 32 seconds if B_kljn is 10 kHz) are required when the keys are 100 bits long. Using this protocol, the analytic form of the exact time required to fully arm every host with enough keys to communicate securely with everybody in the network depends on the size of the network and on whether the network size is even or odd. In the following sections we deduce the analytic relations and show examples.

2.2.1 The network size N is an odd number.

We illustrate the calculation of the time requirement with the examples shown in the following figures; a general formula for an arbitrary network of odd size is given later. In this example we have a network of size N = 7. There are 8 hosts with index i (0 ≤ i ≤ 7) and 7 intermediate connections between the first and last host. The first step in the protocol connects the nearest neighbors, see Figure 7. The second step then connects the second nearest neighbors, see Figure 8. The protocol then connects the third nearest neighbors, as shown in Figure 9. This takes 3 KEs to complete and is not as efficient as the first two steps, but it still has simultaneous loops in two of its KE steps. The protocol then connects the fourth nearest neighbors, as shown in Figure 10. This is above the midpoint for our example with N = 7 and is the slowest and least efficient step in the protocol. The midpoint is reached when the distance between Alice and Bob equals half the length of the network. This step takes 4 KEs to complete. Beyond the midpoint, simultaneous loops with disconnected hosts are no longer possible. The slowest and least efficient steps occur at the midpoint of the protocol.
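The counting described above can be encoded compactly: a distance-d step needs d KEs up to the midpoint, and beyond the midpoint only one loop fits at a time, so it needs one KE per pair, i.e. N+1-d KEs (there are N+1-d pairs at distance d among N+1 hosts). The sketch below is our own encoding of this counting, not code from the paper:

```python
def kes_per_distance(n: int, d: int) -> int:
    """KEs needed to connect all pairs of d-th nearest neighbors in a chain
    of size n (n+1 hosts): d rounds up to the midpoint, n+1-d rounds beyond,
    because beyond the midpoint only one non-overlapping loop fits at a time."""
    return min(d, n + 1 - d)

def total_kes(n: int) -> int:
    """Total KEs needed to give every pair of hosts a separate key."""
    return sum(kes_per_distance(n, d) for d in range(1, n + 1))
```

For N = 7 this reproduces the pattern 1, 2, 3, 4, 3, 2, 1 and the total of 16 KEs quoted in the text; for N = 8 it gives 1, 2, 3, 4, 4, 3, 2, 1 and 20 KEs.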
The protocol then connects the fifth nearest neighbors, as shown in Figure 11. This step takes 3 KEs to complete. It is also inefficient, since beyond the midpoint only a single loop is possible at a time, but it needs fewer KEs because there are only three such pairs. The protocol then connects the sixth nearest neighbors, as shown in Figure 12. This step takes 2 KEs because there are only two possibilities. The protocol then connects the seventh nearest neighbors, as shown in Figure 13. This takes 1 KE, since there is only one such pair of hosts. This completes the protocol for the example of size N = 7. Notice the pattern that occurs for odd N: 1 KE, 2 KEs, 3 KEs, 4 KEs, 3 KEs, 2 KEs, and 1 KE. This is essentially Gauss's counting technique up to N/2 and back. The total number of KEs needed is 1 + 2 + 3 + 4 + 3 + 2 + 1 = 16 KEs.

The speed, or time requirement, of the protocol for a network of arbitrary odd size N is ((N+1)/2)^2 KEs and can be derived as follows. Since N is odd, we can express it as

N = 2n + 1. (2)

Solving for n in terms of N gives

n = (N - 1)/2. (3)

The pattern when N is odd rises to the peak value n + 1 = (N+1)/2 at the midpoint and descends again:

1 + 2 + ... + n + (n + 1) + n + ... + 2 + 1 = ((N+1)/2)^2. (4)

Expressed in terms of N:

1 + 2 + ... + (N-1)/2 + (N+1)/2 + (N-1)/2 + ... + 2 + 1 = ((N+1)/2)^2. (5)

We know from Gauss's counting method that

1 + 2 + ... + N = N(N+1)/2. (6)

In our pattern we can use Gauss's counting method twice to find the sum, as follows.
Each of the two ascending/descending parts sums, by Gauss's method, to ((N-1)/2)((N+1)/2)/2; adding the peak term (N+1)/2 gives

((N-1)/2)((N+1)/2)/2 + (N+1)/2 + ((N-1)/2)((N+1)/2)/2 = ((N+1)/2)^2. (7)

This simplifies to

(N^2 - 1)/4 + (N+1)/2 = (N+1)^2/4 = ((N+1)/2)^2. (8)

Thus the speed of the network scales as N^2/4 for odd network size N. The pattern for even N is similar.

2.2.2 The network size N is an even number.

For the sake of easier understanding, and for readers without a communications background, we again illustrate the calculation of the time requirement with examples shown in the following figures. In this case the network size is even, N = 8. We have 9 hosts with index i (0 ≤ i ≤ 8) and 8 intermediate connections between the first and last host. The first step in the protocol connects the nearest neighbors. This step is the quickest and most efficient: it has the most simultaneous non-overlapping loops and requires only one KE to complete. Figure 14 illustrates this first step in the protocol. The second step then connects the second nearest neighbors, as shown in Figure 15. This step takes two KEs to complete and has the second most simultaneous non-overlapping loops; it is the second quickest and second most efficient step. The protocol then connects the third nearest neighbors, as shown in Figure 16. This takes 3 KEs to complete and is not as efficient as the first two steps, but it still has simultaneous loops in this example of N = 8. The protocol then connects the fourth nearest neighbors, as shown in Figure 17. This is at the midpoint for our example with N = 8 and is the slowest and least efficient step in the protocol.
The midpoint is defined as the point where the distance between Alice and Bob equals half the length of the network. This step takes 4 KEs to complete. The slowest and least efficient steps occur at the midpoint of the protocol. The protocol then connects the fifth nearest neighbors, as shown in Figure 18. This step takes 4 KEs to complete; it is not efficient since it is at the midpoint. The protocol then connects the sixth nearest neighbors, as shown in Figure 19. This step takes 3 KEs because there are only three possibilities at this distance in this example of a network of size N = 8. The protocol then connects the seventh nearest neighbors, as shown in Figure 20. This takes 2 KEs, since there are only two pairs of hosts with a separation of seven hosts. The last step in the protocol connects the first and last hosts. This step is the least efficient per loop, as it requires the entire length of the network; since there is only one pair of hosts at this length, it requires only one KE. This last step is illustrated in Figure 21. Notice the pattern that occurs for even N: 1 KE, 2 KEs, 3 KEs, 4 KEs, 4 KEs, 3 KEs, 2 KEs, and 1 KE. This is essentially Gauss's counting technique up to N/2 and back. The total number of KEs needed is 1 + 2 + 3 + 4 + 4 + 3 + 2 + 1 = 20 KEs. Connecting the entire network thus takes 20 KEs, which is approximately 40 seconds if B_kljn is 10 kHz and the key is 100 bits long. The speed, or time requirement, of the protocol for a network of even size N is N^2/4 + N/2 KEs and can be derived as follows.
With N = 8, the pattern in our example gives

N^2/4 + N/2 = 20 KEs. (9)

Since N is even, we can express it as

N = 2n. (10)

Solving for n in terms of N gives the midpoint

n = N/2. (11)

The general pattern when N is even has the following form:

1 + 2 + ... + n + n + ... + 2 + 1 = N^2/4 + N/2. (12)

Expressing n in terms of N gives

1 + 2 + ... + N/2 + N/2 + ... + 2 + 1 = N^2/4 + N/2. (13)

We know from Gauss's counting method that

1 + 2 + ... + N = N(N+1)/2. (14)

In our pattern we can use Gauss's counting method twice to find the sum: each of the two halves, 1 + 2 + ... + N/2, sums to (N/2)(N/2 + 1)/2, so

[1 + 2 + ... + N/2] + [N/2 + ... + 2 + 1] = N^2/4 + N/2, (15)

(N/2)(N/2 + 1)/2 + (N/2)(N/2 + 1)/2 = N^2/4 + N/2. (16)

This simplifies to

(N/2)(N/2 + 1) = N^2/4 + N/2. (17)

Thus the speed of the network scales as N^2/4 for even network size N.

Limitations of the study, open questions, and future work

Fully implementing the KLJN key exchange protocol over the grid will require solutions to further engineering problems. This paper presents the results of our early work, which focuses on the system concept in a one-dimensional network. Some of the limitations, open questions, and future work are discussed below.

Limitations

The main limitation of the KLJN protocol is that it is peer-to-peer. This limits the number of simultaneous KLJN key exchanges a host can have. Since overlapping loops are not allowed, the time required scales quadratically with the number of hosts. Another limit is that the KLJN bandwidth depends on the distance between Alice and Bob and shrinks over large distances. These limitations make it impractical to connect millions of hosts via a linear chain, so other topologies (and perhaps bridges or routers) will be needed to connect such chains with each other.
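The quadratic scaling stated above follows from the two closed forms derived earlier, ((N+1)/2)^2 for odd N and N^2/4 + N/2 for even N. A quick numerical check (our own sketch, not code from the paper):

```python
def total_kes_closed_form(n: int) -> int:
    """Total KEs to connect all host pairs in a chain of size n, using the
    closed forms derived in the text:
    ((n+1)/2)^2 for odd n, and n^2/4 + n/2 for even n."""
    if n % 2 == 1:
        return ((n + 1) // 2) ** 2
    return n * n // 4 + n // 2

# Both branches approach n^2/4 for large n, i.e. quadratic in the host count.
ratios = [total_kes_closed_form(n) / (n * n / 4) for n in (101, 1000, 10001)]
```

The ratios tend to 1 as the network grows, confirming that the total key exchange time is proportional to N^2/4 in both the odd and even cases.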
Practical limitations in the power system, such as tap-changing transformers and other devices, may also require bridges to couple the KLJN signal around these devices.

Open Questions

The related technical challenges require further research, though the solutions appear straightforward. For example, distribution transformers can shield most of the signals sent from one phase from the load side, but there are many ways to get the KLJN band around them. We do not investigate the problems of phase-correcting inductors and capacitors, since they are separated from the KLJN band by the power filters. Research and development will be needed for some of these problems, including how to set up the filters in each node. Accuracies are typically within a few percent; in the experimental demo, the cable resistance was 2% of the total loop resistance. In practice, the impedance of the power grid would need to be taken into account.

Future Work

Future work will, among other things, include protocols for several other power grid topologies. Setting up filters on the power grid and implementing all the filters will also be studied. Hacking attacks against the filters, and defenses against them, are also an interesting open problem.

Conclusions

We have introduced a protocol to offer unconditionally secure key exchange over the smart grid. We used a reconfigurable filter system and proposed a special protocol to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional networks such as the radial electrical power distribution grid. We carried out a scaling analysis of the speed of the protocol versus the size of the grid. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over a smart power grid of arbitrary dimensions.
Before the implementation of the protocol can take place, several practical questions must be answered, such as the impact of finite and possibly varying wire resistance, capacitance, and power load on security, and the application of relevant privacy amplification methods. Other questions include a changing network size N, hacking attacks against the filter control, and the relevant defense tools. We have also discussed the limitations of the KLJN key exchange protocol, open questions surrounding the implementation on the smart grid, and the future work required. Since this paper is a system-concept study, we leave the details to future work.

Figure 2. Example of a one-dimensional grid: a chain network. This example has a network of size N = 7.

Figure 3. Building blocks in a filter box: (a) the Left KLJN Filter for the KLJN key exchange toward the left; (b) the Right KLJN Filter for the KLJN key exchange toward the right; (c) the Power Filter to supply power to the host.

Figure 4. Example for a network of size N = 7. Each host is connected to a filter box and the filter boxes are connected to the power grid. Note how each host has three wire connections to its filter box.

Figure 5. The filter box of the inactive host (when it is not executing KLJN key exchange): State 1. Everything is passing between left and right and the host can access only the power. Filter A is passing everything (shorted). Filter B is disconnected. Filter C is passing B_kljn only. Filters D and E are passing f_p only. State 1 is when the host is not allowed to access the KLJN band; State 2 is when the host is allowed to access the KLJN band. This filter box is in State 1.

Figure 6. The filter box of the active host (when it is executing a KLJN key exchange): State 2. The power is passing between left and right but the KLJN band is not, and the left and right KLJN units are separated while doing a key exchange toward left and right. This filter box is in State 2.

Figure 7. The first step in the protocol connects the nearest neighbors. This step is the quickest and most efficient. It has the most non-overlapping simultaneous loops and requires only 1 KE to complete. Every host in this step has access to the KLJN band and is thus in State 2.

Figure 8. The second step in the protocol connects the second nearest neighbors. This step is the second quickest and the second most efficient. It has the second most non-overlapping simultaneous loops and requires 2 KEs to complete.

Figure 9. The third step in the protocol connects the third nearest neighbors. This step is not as efficient as the first two steps but still has simultaneous loops. This step requires 3 KEs to complete.

Figure 10. The fourth step in the protocol connects the fourth nearest neighbors. This step is the slowest and least efficient step in the protocol in our example of N = 7. This step requires 4 KEs to complete.

Figure 11. The fifth step in the protocol connects the fifth nearest neighbors. This step is not efficient since simultaneous non-overlapping loops with disconnected hosts cannot occur.

Figure 12. The sixth step in the protocol connects the sixth nearest neighbors. This step requires only 2 KEs since there are only two possibilities.

Figure 13. The seventh and last step. Only one key exchange is performed in this step. Hosts 1 through 6 are not allowed access to the KLJN band and are thus in State 1. This step is not efficient but requires only one KE since there is only one such pair of hosts.

Figure 14. The first step in the protocol connects the nearest neighbors. This step is the quickest and most efficient. It has the most non-overlapping simultaneous loops and requires only 1 KE to complete.

Figure 15. The second step in the protocol connects the second nearest neighbors. This step requires 2 KEs to complete.

Figure 16. The third step in the protocol connects the third nearest neighbors. This step requires 3 KEs to complete.

Figure 17. The fourth step in the protocol connects the fourth nearest neighbors. It requires 4 KEs to complete.

Figure 18. The fifth step in the protocol connects the fifth nearest neighbors. This step is not efficient since simultaneous non-overlapping loops with disconnected hosts cannot occur. It requires 4 KEs to complete.

Figure 19. The sixth step in the protocol connects the sixth nearest neighbors. This step requires only 3 KEs since it is the third-to-last step and there are only three possibilities.

Figure 20. The seventh step in this example of a network of size N = 8. This step is not efficient but requires only two KEs since there are only two such pairs of hosts.

Figure 21. The last step in our example with N = 8. This step is not efficient but requires only one KE since there is only one pair of hosts that are eight hosts apart.

Table 1. Truth table of the KLJN filters in State 1 (inactive host).

KLJN Filters              Filter A   Filter B
KLJN B_kljn allowed       Yes        No
Power frequency allowed   Yes        No

Table 2. Truth table of the Power Filter in State 1 (inactive host).

Power Filter              Filter C   Filter D   Filter E
KLJN B_kljn allowed       Yes        No         No
Power frequency allowed   No         Yes        Yes

Table 3. Truth table of the left KLJN filter when a host is in State 2 (active host).

KLJN Filter      Filter A   Filter B
B_kljn allowed   No         Yes
f_p allowed      Yes        No

Table 4. Truth table of the power filter when a host is in State 2 (active host).

Power Filter     Filter C   Filter D   Filter E
B_kljn allowed   No         No         No
f_p allowed      No         Yes        Yes

PLOS ONE | www.plosone.org | July 2013 | Volume 8 | Issue 7 | e70206

Acknowledgments

Discussions with Mladen Kezunovic and Karen Butler-Perry about smart grid security are appreciated. LK is grateful to Horace Yuen for discussions about physical secure key exchange systems. Discussions with Yessica Saez are also appreciated.

Author Contributions

Wrote the paper: EG LK RB PE. Analyzed the theoretical data: EG LK RB PE. Contributed to the theoretical analysis tools: EG LK RB PE.

References

Engleman E, Robertson J (2013) Obama to share cybersecurity priorities with congress. Bloomberg News, February 26. http://www.bloomberg.com/news/2013-02-27/obama-to-share-cybersecurity-priorities-with-congress.html. Accessed June 26, 2013.

Amin SM, Wollenberg BF (2008) Toward a smart grid. IEEE Power Energy Mag. 3: 114-122.

Kezunovic M (2011) Smart fault location for smart grids. IEEE Trans. Smart Grid 2: 11-22.
McDaniel P, McLaughlin S (2009) Security and privacy challenges in the smart grid. IEEE Security & Privacy 7: 75-77.

Kundur D, Xianyong F, Shan L, Zourntos T, Butler-Purry KL (2010) Towards a framework for cyber attack impact analysis of the electric smart grid. Proc. First IEEE International Conference on Smart Grid Communications: 244-249. doi: 10.1109/SMARTGRID.2010.5622049.

Liang Y, Poor HV, Shamai S (2008) Information theoretic security. Foundations Trends Commun. Inform. Theory 5: 355-580. doi: 10.1561/0100000036.

Yuen HP (2012) On the foundations of quantum key distribution - reply to Renner and beyond. Manuscript, http://arxiv.org/abs/1210.2804.

Gerhardt I, Liu Q, Lamas-Linares A, Skaar J, Kurtsiefer C, Makarov V (2011) Full-field implementation of a perfect eavesdropper on a quantum cryptography system. Nature Communications 2. doi: 10.1038/ncomms1348.

Lydersen L, Wiechers C, Wittmann C, Elser D, Skaar J, et al. (2010) Hacking commercial quantum cryptography systems by tailored bright illumination. Nature Photonics 4: 686-689. doi: 10.1038/nphoton.2010.214.

Gerhardt I, Liu Q, Lamas-Linares A, Skaar J, Scarani V, et al. (2011) Experimentally faking the violation of Bell's inequalities. Physical Review Letters 107. doi: 10.1103/PhysRevLett.107.170404.

Makarov V, Skaar J (2008) Faked states attack using detector efficiency mismatch on SARG04, phase-time, DPSK, and Ekert protocols. Quantum Information & Computation 8: 622-635.

Wiechers C, Lydersen L, Wittmann C, Elser D, Skaar J, et al. (2011) After-gate attack on a quantum cryptosystem. New Journal of Physics 13. doi: 10.1088/1367-2630/13/1/013043.

Lydersen L, Wiechers C, Wittmann C, Elser D, Skaar J, et al. (2010) Thermal blinding of gated detectors in quantum cryptography. Optics Express 18: 27938-27954. doi: 10.1364/oe.18.027938.

Jain N, Wittmann C, Lydersen L, Wiechers C, Elser D, et al. (2011) Device calibration impacts security of quantum key distribution. Physical Review Letters 107. doi: 10.1103/PhysRevLett.107.110501.

Lydersen L, Skaar J, Makarov V (2011) Tailored bright illumination attack on distributed-phase-reference protocols. Journal of Modern Optics 58: 680-685. doi: 10.1080/09500340.2011.565889.

Lydersen L, Akhlaghi MK, Majedi AH, Skaar J, Makarov V (2011) Controlling a superconducting nanowire single-photon detector using tailored bright illumination. New Journal of Physics 13. doi: 10.1088/1367-2630/13/11/113042.

Lydersen L, Makarov V, Skaar J (2011) Comment on ''Resilience of gated avalanche photodiodes against bright illumination attacks in quantum cryptography'' [Appl. Phys. Lett. 98, 231104 (2011)]. Applied Physics Letters 99. doi: 10.1063/1.3658806.

Sauge S, Lydersen L, Anisimov A, Skaar J, Makarov V (2011) Controlling an actively-quenched single photon detector with bright light. Optics Express 19: 23590-23600.

Lydersen L, Jain N, Wittmann C, Maroy O, Skaar J, et al. (2011) Superlinear threshold detectors in quantum cryptography. Physical Review A 84. doi: 10.1103/PhysRevA.84.032320.

Lydersen L, Wiechers C, Wittmann C, Elser D, Skaar J, et al. (2010) Avoiding the blinding attack in QKD: reply. Nature Photonics 4: 801. doi: 10.1038/nphoton.2010.278.

Makarov V (2009) Controlling passively quenched single photon detectors by bright light. New Journal of Physics 11. doi: 10.1088/1367-2630/11/6/065003.

Kish LB (2006) Totally secure classical communication utilizing Johnson(-like) noise and Kirchoff's law. Physics Letters A 352: 178-182. doi: 10.1016/j.physleta.2005.11.062.

Kish LB (2006) Protection against the man-in-the-middle attack for the Kirchhoff-loop-Johnson(-like)-noise cipher and expansion by voltage-based security. Fluctuation and Noise Letters 6: L57-L63. doi: 10.1142/s0219477506003148.

Mingesz R, Kish LB, Gingl Z, Granqvist CG, Wen H, et al. (2013) Unconditional security by the laws of classical physics. Metrology and Measurement Systems 20: 3-16. Open access: http://www.metrology.pg.gda.pl/full/2013/M&MS_2013_003.pdf.

Mingesz R, Gingl Z, Kish LB (2008) Johnson(-like)-noise-Kirchhoff-loop based secure classical communicator characteristics, for ranges of two to two thousand kilometers, via model-line. Physics Letters A 372: 978-984. doi: 10.1016/j.physleta.2007.07.086.

Kish LB, Saidi O (2008) Unconditionally secure computers, algorithms and hardware, such as memories, processors, keyboards, flash and hard drives. Fluctuation and Noise Letters 8: L95-L98. doi: 10.1142/s0219477508004362.

Kish LB, Peper F (2012) Information networks secured by the laws of physics. IEICE Transactions on Communications E95-B: 1501-1507. doi: 10.1587/transcom.E95.B.1501.

Kish LB, Mingesz R (2006) Totally secure classical networks with multipoint telecloning (teleportation) of classical bits through loops with Johnson-like noise. Fluctuation and Noise Letters 6: L447. doi: 10.1142/s0219477506003628.

Balog RS, Krein PT (2013) Coupled inductor filters: a basic filter building block. IEEE Transactions on Power Electronics 28: 537-546.

Kim S, Enjeti PN (2002) A new hybrid active power filter (APF) topology. IEEE Transactions on Power Electronics 17: 48-54.

Enhanced secure key exchange systems based on the Johnson-noise scheme.
L B Kish, Metrology & Measurement Systems XX. Kish LB (2013) Enhanced secure key exchange systems based on the Johnson- noise scheme. Metrology & Measurement Systems XX: 191-204. Available: http://www.degruyter.com/view/j/mms.2013.20.issue-2/mms-2013-0017/ mms-2013-0017.xml?format = INT.
[]
[ "Exactly solvable models and spontaneous symmetry breaking ⋆", "Exactly solvable models and spontaneous symmetry breaking ⋆" ]
[ "\nInstitute of Physics\nSlovak Academy of Sciences\nDúbravská cesta 9845 11BratislavaSlovakia\n" ]
[ "Institute of Physics\nSlovak Academy of Sciences\nDúbravská cesta 9845 11BratislavaSlovakia" ]
[ "LIGHTCONE 2011" ]
We study a few two-dimensional models with massless and massive fermions in the hamiltonian framework and in both conventional and light-front forms of field theory. The new ingredient is a modification of the canonical procedure by taking into account solutions of the operator field equations. After summarizing the main results for the derivative-coupling and the Thirring models, we briefly compare conventional and light-front versions of the Federbush model including the massive current bosonization and a Bogoliubov transformation to diagonalize the Hamiltonian. Then we sketch an extension of our hamiltonian approach to the two-dimensional Nambu -Jona-Lasinio model and the Thirring-Wess model. Finally, we discuss the Schwinger model in a covariant gauge. In particular, we point out that the solution due to Lowenstein and Swieca implies the physical vacuum in terms of a coherent state of massive scalar field and suggest a new formulation of the model's vacuum degeneracy.Keywords solvable relativistic models · operator solutions · vacuum structure · Bogoliubov transformation · spontaneous symmetry breakingIntroductionExactly solvable models may seem to be almost a closed chapter in the development of quantum field theory. Our original aim for looking at this class of simple relativistic theories was to learn about their properties in the light-front (LF) formulation and compare that picture with the standard one (by the latter we mean conventional operator formalism in terms of usual space-like (SL) field variables). Surprisingly enough, a closer look at these models revealed certain inconsistencies and contradictions in the known SL solutions. For example, the Fock vacuum was taken as the true ground state for calculations of the correlation functions in the Thirring model [1] although a derivation of the model's Hamiltonian shows that it is non-diagonal when expressed in terms of corresponding creation and annihilation operators. 
Hence the Fock vacuum cannot be its (lowest-energy) eigenstate. Another example is the (massless) Schwinger model, often invoked as a prototype for more complicated gauge theories. It turns out that the widely-accepted covariant-gauge solution [2] involves some unphysical degrees of freedom as a consequence of residual gauge freedom. A rigorous analysis[3]in the axiomatic spirit showing some spurious features of the solution [2] remained almost unnoticed and a less formal, e.g. a hamiltonian study correcting the physical picture seems to be lacking in literature.Defining property of the soluble models is that one can write down operator solutions of the field equations. Hence, one should be able to extract their physical content completely. A novel feature that ⋆
10.1007/s00601-012-0318-1
[ "https://arxiv.org/pdf/1111.6660v1.pdf" ]
119,276,918
1111.6660
98b742d6f34200944682e7343cba394399f5fb60
Exactly solvable models and spontaneous symmetry breaking ⋆

L'ubomír Martinovic (Present address: BLTP JINR, 141 980 Dubna, Russia)
Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 845 11 Bratislava, Slovakia
LIGHTCONE 2011, May 2011. Few Body Systems manuscript No. (will be inserted by the editor)

Abstract We study a few two-dimensional models with massless and massive fermions in the hamiltonian framework and in both conventional and light-front forms of field theory. The new ingredient is a modification of the canonical procedure by taking into account solutions of the operator field equations. After summarizing the main results for the derivative-coupling and the Thirring models, we briefly compare conventional and light-front versions of the Federbush model, including the massive current bosonization and a Bogoliubov transformation to diagonalize the Hamiltonian. Then we sketch an extension of our hamiltonian approach to the two-dimensional Nambu-Jona-Lasinio model and the Thirring-Wess model. Finally, we discuss the Schwinger model in a covariant gauge. In particular, we point out that the solution due to Lowenstein and Swieca implies the physical vacuum in terms of a coherent state of the massive scalar field, and we suggest a new formulation of the model's vacuum degeneracy.

Keywords solvable relativistic models · operator solutions · vacuum structure · Bogoliubov transformation · spontaneous symmetry breaking

1 Introduction

Exactly solvable models may seem to be almost a closed chapter in the development of quantum field theory. Our original aim in looking at this class of simple relativistic theories was to learn about their properties in the light-front (LF) formulation and compare that picture with the standard one (by the latter we mean the conventional operator formalism in terms of usual space-like (SL) field variables).
Surprisingly enough, a closer look at these models revealed certain inconsistencies and contradictions in the known SL solutions. For example, the Fock vacuum was taken as the true ground state for calculations of the correlation functions in the Thirring model [1], although a derivation of the model's Hamiltonian shows that it is non-diagonal when expressed in terms of the corresponding creation and annihilation operators. Hence the Fock vacuum cannot be its (lowest-energy) eigenstate. Another example is the (massless) Schwinger model, often invoked as a prototype for more complicated gauge theories. It turns out that the widely accepted covariant-gauge solution [2] involves some unphysical degrees of freedom as a consequence of residual gauge freedom. A rigorous analysis [3] in the axiomatic spirit showing some spurious features of the solution [2] remained almost unnoticed, and a less formal, e.g. hamiltonian, study correcting the physical picture seems to be lacking in the literature.

The defining property of the soluble models is that one can write down operator solutions of the field equations. Hence, one should be able to extract their physical content completely. A novel feature that has been overlooked so far is the necessity to formulate these models in terms of the true field degrees of freedom - the free fields. As we show below, the operator solutions are always composed from free fields. One should take this information into account by re-expressing the Lagrangian and consequently the Hamiltonian in terms of these variables. One can summarize the above statement by a slogan: "work with the right Hamiltonian!" Recently, we applied this strategy to the massive derivative-coupling model (DCM), the Thirring and the Federbush models. A brief summary of the achieved results is given in the following section. Then we extend our approach to the case of the chiral Gross-Neveu model, the Thirring-Wess and the Schwinger model.
Not all the details are worked out at the present stage - we formulate the main ideas and indicate the strategy to be followed. The full treatment of the models will be given separately [4]. We conclude the present paper with a brief description of the symmetry-breaking pattern fully based on the light-front dynamics.

2 Hamiltonian study of three models - DCM, Thirring and Federbush

The model with derivative coupling [5] turns out to be almost a trivial one. Its Lagrangian

$$L = \frac{i}{2}\bar\Psi\gamma^\mu\overleftrightarrow{\partial_\mu}\Psi - m\bar\Psi\Psi + \frac{1}{2}\partial_\mu\phi\,\partial^\mu\phi - \frac{1}{2}\mu^2\phi^2 - g\,\partial_\mu\phi\, J^\mu, \qquad J^\mu = \bar\Psi\gamma^\mu\Psi \qquad (1)$$

leads to the Dirac equation, which is explicitly solved in terms of a free scalar and a free fermion field:

$$\Psi(x) = \;:e^{ig\phi(x)}:\,\psi(x), \qquad i\gamma^\mu\partial_\mu\psi(x) = m\psi(x). \qquad (2)$$

The massive scalar field $\phi(x)$ obeys the free Klein-Gordon equation as a consequence of the current conservation. The conventional canonical treatment yields a surprising result: the obtained LF Hamiltonian is a free one, while the SL Hamiltonian contains an interacting piece which is non-diagonal in terms of Fock operators, and hence its true ground state (which can be obtained by a Bogoliubov transformation) differs from the Fock vacuum. So the physical pictures in the two quantization schemes contradict each other. The explanation is simple. One observes that the solution (2) means that there is no independent interacting field - it is composed from the free fields. We have to insert the solution into the Lagrangian first (analogously to inserting a constraint into a Lagrangian), then calculate the conjugate momenta and derive the Hamiltonian. In this way, a free Lagrangian and Hamiltonian are found also in the SL case. The new procedure does not alter the LF result. The correlation functions in the two schemes coincide as well. They are built from free scalar and fermion two-point functions.

The Thirring model [6], with its Lagrangian describing a self-interacting massless Fermi field,

$$L = \frac{i}{2}\bar\Psi\gamma^\mu\overleftrightarrow{\partial_\mu}\Psi - \frac{1}{2}\, g\, J_\mu J^\mu, \qquad J^\mu = \bar\Psi\gamma^\mu\Psi, \qquad (3)$$

is a more complicated theory.
The simplest solution is similar to Eq. (2), but the elementary scalar field $\phi$ is replaced by the composite field $j(x)$ defined via $J^\mu = j^\mu = \frac{1}{\sqrt{\pi}}\partial^\mu j$. The corresponding Hamiltonian $H$ is non-diagonal in the composite boson operators $c, c^\dagger$ built from fermion bilinears according to

$$j^\mu(x) = -\frac{i}{\sqrt{2\pi}}\int dk^1\, \frac{1}{\sqrt{2k^0}}\, k^\mu \left[ c(k^1)\,e^{-ik\cdot x} - c^\dagger(k^1)\,e^{ik\cdot x} \right], \qquad (4)$$

$$c(k^1) = \frac{i}{\sqrt{k^0}} \int dp^1 \left\{ \theta\!\left(p^1 k^1\right)\left[ b^\dagger(p^1)\, b(p^1+k^1) - (b\to d) \right] + \epsilon(p^1)\,\theta\!\left(p^1(p^1-k^1)\right) d(k^1-p^1)\, b(p^1) \right\}. \qquad (5)$$

A diagonalization by a Bogoliubov transformation $U H U^{-1}$, implemented by a unitary operator $U[\gamma(g)]$, where $\gamma$ is a suitably chosen function, generates the true ground state as

$$|\Omega\rangle = N \exp\left( -\kappa \int_{-\infty}^{+\infty} dp^1\, c^\dagger(p^1)\, c^\dagger(-p^1) \right) |0\rangle. \qquad (6)$$

Here $N$ is a normalization factor and $\kappa$ is a $g$-dependent function. $|\Omega\rangle$ corresponds to a coherent state of pairs of composite bosons with zero values of the total momentum, charge and axial charge. Thus no chiral symmetry breaking occurs in the model, at least for $g \le \pi$ where the diagonalization is valid.

The Federbush model [7] is the only known massive solvable model. The solvability comes from a specific current-current coupling between two species of massive fermions described by the Lagrangian

$$L = \frac{i}{2}\bar\Psi\gamma^\mu\overleftrightarrow{\partial_\mu}\Psi - m\bar\Psi\Psi + \frac{i}{2}\bar\Phi\gamma^\mu\overleftrightarrow{\partial_\mu}\Phi - \mu\bar\Phi\Phi - g\,\epsilon_{\mu\nu}\, J^\mu H^\nu. \qquad (7)$$

Here the currents are $J^\mu = \bar\Psi\gamma^\mu\Psi$, $H^\mu = \bar\Phi\gamma^\mu\Phi$. The coupled field equations

$$i\gamma^\mu\partial_\mu\Psi(x) = m\Psi(x) + g\,\epsilon_{\mu\nu}\gamma^\mu H^\nu(x)\Psi(x), \qquad i\gamma^\mu\partial_\mu\Phi(x) = \mu\Phi(x) - g\,\epsilon_{\mu\nu}\gamma^\mu J^\nu(x)\Phi(x) \qquad (8)$$

have the solution of the form (2) with two "integrated currents" $j(x)$ and $h(x)$. One again finds that the structure of the SL and LF Hamiltonians coincides only when the operator solution is implemented in the Lagrangians. However, the SL Hamiltonian is not diagonal, and a Bogoliubov transformation is needed to find the physical ground state. This requires a generalization of Klaiber's massless bosonization, yielding a complicated substitute for $c(k^1)$ of Eq. (5).
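The pairing structure of Eq. (6) can be illustrated on a toy two-mode model. The sketch below (everything here is an assumption for illustration: the couplings `w`, `lam` and the truncated Hamiltonian are hypothetical, not the actual Thirring Hamiltonian) diagonalizes $H = \omega(a^\dagger a + b^\dagger b) + \lambda(a^\dagger b^\dagger + ab)$ numerically and checks the Bogoliubov prediction that the ground state is a pair-coherent state $N\exp(-\kappa\, a^\dagger b^\dagger)|0\rangle$ with $\kappa = \tanh r$, $\tanh 2r = \lambda/\omega$, and ground energy $\sqrt{\omega^2-\lambda^2}-\omega$:

```python
import numpy as np

# Fock-space truncation per mode; lam < w keeps H bounded below.
nmax = 24
dim = nmax + 1
a = np.diag(np.sqrt(np.arange(1, dim)), 1)   # single-mode annihilation operator
num = a.T @ a                                # number operator
I = np.eye(dim)

w, lam = 1.0, 0.3                            # hypothetical couplings
H = (w * (np.kron(num, I) + np.kron(I, num))
     + lam * (np.kron(a.T, a.T) + np.kron(a, a)))

evals, evecs = np.linalg.eigh(H)
E0, gs = evals[0], evecs[:, 0]

# Bogoliubov prediction, same structure as |Omega> in Eq. (6):
r = 0.5 * np.arctanh(lam / w)
kappa = np.tanh(r)
pair = np.zeros(dim * dim)
for k in range(dim):
    pair[k * dim + k] = (-kappa) ** k        # sum_k (-kappa)^k |k,k>
pair /= np.linalg.norm(pair)

print(E0, np.sqrt(w**2 - lam**2) - w)        # numerically equal
print(abs(gs @ pair))                        # overlap close to 1
```

The negative of the Fock-vacuum energy and the near-unit overlap make concrete the statement that $|0\rangle$ is not the ground state of a Hamiltonian with $c^\dagger c^\dagger + cc$ terms.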
In sharp contrast, the LF massive bosonization is as simple as the SL massless one. The model is very suitable for a non-perturbative comparison of the two forms of relativistic dynamics. This is because 2-D massless fields cannot be treated directly in the LF formalism (only as the massless limits of massive theories - this is obvious already from the LF massive two-point functions). Exponentials of the massive composite fields are more singular than the massless ones. They have to be defined using the "triple-dot ordering" [8,9], which generalizes the normal ordering (subtractions of the VEVs order by order). We avoid this by bosonization of the massive current. The price we pay is complicated commutators at unequal times that are needed for the computation of correlation functions. In any case, the correlators in the SL and LF versions of the theory should coincide in form. A remarkable, albeit for the moment only conjectured, scenario is that this will indeed happen with complicated operator structures plus a non-trivial vacuum structure in the SL case, and with a much simpler operator part plus the Fock vacuum in the LF case.

Chiral Gross-Neveu model

The Lagrangian and field equations of the chiral Gross-Neveu model [10] are

$$L = \frac{i}{2}\bar\Psi\gamma^\mu\overleftrightarrow{\partial_\mu}\Psi - \frac{g}{2}\left[ \left(\bar\Psi\Psi\right)^2 + \left(\bar\Psi i\gamma^5\Psi\right)^2 \right], \qquad i\gamma^\mu\partial_\mu\Psi = g\left[ \left(\bar\Psi\Psi\right)\Psi - \left(\bar\Psi\gamma^5\Psi\right)\gamma^5\Psi \right]. \qquad (9)$$

This theory is a 2-D version of the Nambu-Jona-Lasinio model.
When rewritten in component form, one realizes that the above equations, up to the sign, coincide with those of the Thirring model:

$$i\left(\partial_0 - \partial_1\right)\Psi_1 = 2g\,\Psi_2^\dagger\Psi_1\Psi_2 = -2g\,\Psi_2^\dagger\Psi_2\Psi_1 = -g\left(j^0+j^1\right)\Psi_1,$$
$$i\left(\partial_0 + \partial_1\right)\Psi_2 = 2g\,\Psi_1^\dagger\Psi_2\Psi_1 = -2g\,\Psi_1^\dagger\Psi_1\Psi_2 = -g\left(j^0-j^1\right)\Psi_2. \qquad (10)$$

Inserting the solution known from the Thirring model into the Lagrangian, we find the Hamiltonian

$$H = \int_{-\infty}^{+\infty} dx^1 \left\{ -i\psi^\dagger\alpha^1\partial_1\psi + \frac{1}{2}\,g\left( j^0 j^0 - j^1 j^1 \right) + \frac{g}{2}\left[ \left(\bar\psi\psi\right)^2 - \left(\bar\psi\gamma^5\psi\right)^2 \right] \right\}. \qquad (11)$$

We can bosonize the scalar densities $\Sigma(x) = \bar\psi\psi = \psi_1^\dagger\psi_2 + \psi_2^\dagger\psi_1$ and $\Sigma_5(x) = i\bar\psi\gamma^5\psi = i\left(\psi_1^\dagger\psi_2 - \psi_2^\dagger\psi_1\right)$:

$$\Sigma(x) = \int_{-\infty}^{+\infty} \frac{dk^1}{2\pi}\left[ A(k^1,t)\,e^{ik^1x^1} + A^\dagger(k^1,t)\,e^{-ik^1x^1} \right], \qquad \Sigma_5(x) = \int_{-\infty}^{+\infty} \frac{dk^1}{2\pi}\left[ B(k^1,t)\,e^{ik^1x^1} + \mathrm{H.c.} \right]. \qquad (12)$$

With the Fock expansions for the Fermi field and after the Fourier transform, we obtain

$$A(k^1,t) = \int_{-\infty}^{+\infty} dp^1 \Big\{ \tfrac{1}{2}\left[ b^\dagger(-p^1)\,b(k^1-p^1) + d^\dagger(-p^1)\,d(k^1-p^1) \right] \theta\!\left(p^1(k^1-p^1)\right) e^{i2p^1t} - \theta(p^1k^1)\,\epsilon(p^1)\left[ d(-p^1)\,b(p^1+k^1) + b(-p^1)\,d(p^1+k^1) \right] e^{-i2p^1t} \Big\}\, e^{-ik^1t}, \qquad (13)$$

and similarly for $B(k^1,t)$. One then has to diagonalize the Hamiltonian $H = H_0 + H_1 + H_2$, where

$$H_1 = -\frac{g}{\pi}\int_{-\infty}^{+\infty} dk^1\,|k^1| \left[ c^\dagger(k^1)\,c^\dagger(-k^1) + c(k^1)\,c(-k^1) \right],$$
$$H_2 = -\frac{g}{4\pi}\int_{-\infty}^{+\infty} dk^1\,|k^1| \left[ 2A^\dagger(k^1)A(k^1) + A^\dagger(k^1)A^\dagger(-k^1) + A(k^1)A(-k^1) + (A\to B) \right]. \qquad (14)$$

Obviously $|0\rangle$ is not an eigenstate of $H$. The true vacuum has to be found by a Bogoliubov transformation. It will be different from the Thirring-model vacuum and probably non-invariant under $Q^5$. This remains to be verified. We also leave for future work the generalization of the model to $N_f$ flavours.

Thirring-Wess model

This model [11] is simpler than the Schwinger model because the nonzero bare mass of the vector field removes gauge invariance with all its subtleties.
The corresponding Lagrangian

$$L = \frac{i}{2}\bar\Psi\gamma^\mu\overleftrightarrow{\partial_\mu}\Psi - \frac{1}{4}G_{\mu\nu}G^{\mu\nu} + \mu_0^2\, B_\mu B^\mu - eJ^\mu B_\mu, \qquad G_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu \qquad (15)$$

leads to the field equations - the Dirac and Proca equations:

$$i\gamma^\mu\partial_\mu\Psi(x) = e\gamma^\mu B_\mu(x)\Psi(x), \qquad \partial_\mu G^{\mu\nu}(x) + \mu_0^2\, B^\nu(x) = eJ^\nu(x). \qquad (16)$$

The left-hand side of the second equation reduces to $\left(\partial_\nu\partial^\nu + \mu_0^2\right)B^\nu$ since $\partial_\nu B^\nu = 0$ due to the current conservation. The latter condition also permits us to write down the solution of the corresponding Dirac equation as

$$\Psi(x) = \exp\left\{ -\frac{ie}{2}\,\gamma^5 \int_{-\infty}^{+\infty} dy^1\, \epsilon(x^1-y^1)\, B^0(y^1,t) \right\} \psi(x), \qquad \gamma^\mu\partial_\mu\psi(x) = 0. \qquad (17)$$

The product of two fermion operators has to be regularized by a point-splitting. The integral in the exponent contributes naturally, and one finds ($j^\mu$ and $j_5^\mu$ are the free currents)

$$J^\mu(x) = j^\mu(x) - \frac{e}{\pi}\,B^\mu(x), \qquad J_5^\mu(x) = j_5^\mu(x) - \frac{e}{\pi}\,\epsilon^{\mu\nu}B_\nu(x). \qquad (18)$$

Inserting the above $J^\mu(x)$ into the Proca equation, one finds that the bare mass is replaced by $\mu^2 = \mu_0^2 + e^2/\pi$, and that this equation can be easily inverted since only the free fields are involved. Thus, there is no dynamically independent vector field. Following our method, we insert the solutions for $B_\mu$ and $\Psi$ into the Lagrangian and then derive the Hamiltonian. The question whether the latter will be diagonal or will have to be diagonalized, together with other properties of the model, is under study.

The Schwinger model in the Landau gauge

The masslessness of the vector field makes the Schwinger model more subtle than the previous one. The key question is how to correctly handle the gauge variables, since in the covariant gauge $\partial_\mu A^\mu = 0$ not all gauge freedom has been removed. We implement the gauge condition in the Lagrangian as [12]:

$$L = \frac{i}{2}\bar\Psi\gamma^\mu\overleftrightarrow{\partial_\mu}\Psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - eJ^\mu(x)A_\mu(x) - G(x)\,\partial_\mu A^\mu(x) + \frac{1}{2}(1-\gamma)\,G^2(x), \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu. \qquad (19)$$

The gauge-fixing terms furnish the component $A^0$ with the conjugate momentum, $\Pi_{A^0}(x) = -G(x)$.
Moreover, they guarantee restriction to an arbitrary covariant gauge in which neither the condition $\partial_\mu A^\mu(x) = 0$ nor the Maxwell equations $\partial_\mu F^{\mu\nu}(x) = eJ^\nu(x)$ hold as operator relations. The gauge-fixing field obeys $\partial_\mu\partial^\mu G(x) = 0$, so that the positive- and negative-frequency parts $G^{(\pm)}(x)$ are well defined. Our strategy is to proceed in the spirit of K. Haller's generalization [12] of the Gupta-Bleuler quantization, in which the unphysical components of the gauge field are represented as ghost degrees of freedom of zero norm, carrying vanishing momentum and energy. To ensure that we are dealing with the original 2-dimensional QED, we have to restrict the theory to the physical subspace $G^{(+)}|\mathrm{phys}\rangle = 0$. We will choose $\gamma = 1$ in the above Lagrangian. Then the gauge condition is an operator relation, while the (modified) Maxwell and Dirac equations read

$$\partial_\mu F^{\mu\nu}(x) = eJ^\nu(x) - \partial^\nu G(x), \qquad i\gamma^\mu\partial_\mu\Psi(x) = e\gamma^\mu A_\mu(x)\Psi(x). \qquad (20)$$

The solution of the latter is completely analogous to the Thirring-Wess model case, Eq. (17). Again, the vector and axial-vector currents have to be calculated via point-splitting, with an important difference: the exponential of the line integral of the gauge field must be inserted in the current definition to compensate for the violation of gauge invariance due to the point splitting. After inserting the calculated interacting current into the Maxwell equations, we have to express the Lagrangian and Hamiltonian in terms of the free fields as before. The physical picture will become transparent if we make a unitary transformation to the Coulomb-gauge representation [12]. Before performing this step, let us indicate the main ingredients of the original covariant-gauge solution [2,13] and point out a problem with it.
The starting point was the Ansatz for the gauge field and the currents ($\tilde\partial^\mu = \epsilon^{\mu\nu}\partial_\nu$)

$$A^\mu = -\frac{\sqrt{\pi}}{e}\left( \tilde\partial^\mu\Sigma + \partial^\mu\tilde\eta \right), \qquad J^\mu = \bar\Psi\gamma^\mu\Psi = -\frac{1}{\sqrt{\pi}}\,\tilde\partial^\mu\Phi, \qquad J_5^\mu = \bar\Psi\gamma^\mu\gamma^5\Psi = -\frac{1}{\sqrt{\pi}}\,\partial^\mu\Phi, \qquad (21)$$

where $\Sigma$, $\tilde\eta$ and $\Phi$ are so far unspecified scalar fields. In the $\partial_\mu A^\mu = 0$ gauge, one finds $\partial_\mu\partial^\mu\tilde\eta = 0$ and $F^{\mu\nu} = \frac{\sqrt{\pi}}{e}\,\epsilon^{\mu\nu}\,\partial_\rho\partial^\rho\Sigma$. From the anomalous divergence of the axial current

$$\partial_\mu J_5^\mu = \frac{e}{2\pi}\,\epsilon_{\mu\nu}F^{\mu\nu} \qquad (22)$$

one concludes that $\partial_\mu\partial^\mu\Phi = \partial_\mu\partial^\mu\Sigma$, or $\Phi = \Sigma + h$ with the free massless field $h$ obeying $\partial_\mu\partial^\mu h = 0$. Then the vector current is

$$J^\mu = -\frac{1}{\sqrt{\pi}}\,\tilde\partial^\mu\Sigma + L^\mu, \qquad L^\mu = -\frac{1}{\sqrt{\pi}}\,\tilde\partial^\mu h. \qquad (23)$$

From the Maxwell equations

$$\tilde\partial^\nu\left( \partial_\rho\partial^\rho + \frac{e^2}{\pi} \right)\Sigma - \frac{e^2}{\sqrt{\pi}}\,L^\nu = 0 \qquad (24)$$

one concludes that $\tilde\partial_\mu L^\mu = 0$, or

$$\left( \partial_\rho\partial^\rho + \frac{e^2}{\pi} \right)\Sigma = 0. \qquad (25)$$

One component of the gauge field, namely $\Sigma(x)$, became massive. For consistency reasons, $L^\mu$ can vanish only weakly, $\langle\psi|L^\mu(x)|\psi\rangle = 0$. With the above Ansatz for $A^\mu$, the Dirac equation becomes $i\gamma^\mu\partial_\mu\Psi = -\sqrt{\pi}\,\gamma^\mu\gamma^5\,\partial_\mu\left(\Sigma + \eta\right)\Psi$, with the solution

$$\Psi(x) = \;:e^{i\sqrt{\pi}\left[\gamma^5\Sigma(x)+\eta(x)\right]}:\,\psi(x). \qquad (26)$$

Calculation of the currents via the point-splitting yields the identification $h(x) = \eta(x) + \varphi(x)$, where $\varphi(x)$ is the "potential" (integrated current) of the free currents. From $[A^1(x^1), \pi_1(y^1)] = i\delta(x^1-y^1)$ and with $\pi_1 = F^{01} = \frac{e}{\sqrt{\pi}}\,\Sigma$, one gets the equal-time commutator $[\Sigma(x^1), \partial_0\Sigma(y^1)] = i\delta(x^1-y^1)$ of a canonical scalar field. Inserting Eq. (26) into the original Lagrangian, we derive the physical part of the Hamiltonian as

$$H = \int_{-\infty}^{+\infty} dx^1 \left[ -i\psi^\dagger\alpha^1\partial_1\psi + \frac{1}{2}\,\frac{e^2}{\pi}\,\Sigma^2 \right]. \qquad (27)$$

This Hamiltonian is non-diagonal when expressed in terms of the Fock operators of $\Sigma(x)$. A Bogoliubov transformation is necessary. A coherent-type vacuum state will be obtained as the true vacuum. An interesting aspect of the model is its vacuum degeneracy and the theta vacuum. In the work [2] the mechanism generating multiple vacua is based on "spurion" operators. These, however, have been shown to be an artifact of the incorrect treatment of residual gauge freedom [3].
What can be the true mechanism of the vacuum degeneracy in the Schwinger model? We believe that it is the presence of a gauge zero mode in the finite-volume treatment of the model [14], together with a quantum implementation of the residual invariance under large gauge transformations as described in [15]. This leads us naturally to the finite-volume reformulation of our approach outlined in the first part of this section.

Spontaneous symmetry breaking in LF theory with fermions

At the LC workshop in Valencia, Marvin Weinstein criticised the way the LF theory describes spontaneous symmetry breaking, saying that Goldstone (or hidden) symmetry is a challenge for LF theory. Where is the vacuum degeneracy? Actually, the latter can be described if one takes into account a mechanism based on the presence of dynamical fermion zero modes [16]. The O(2)-symmetric sigma model provides us with a good example. Its Hamiltonian is symmetric under axial-vector transformations

$$\Psi_+(x) \to e^{-i\beta\gamma^5}\Psi_+(x) = V(\beta)\,\Psi_+(x)\,V^{-1}(\beta), \qquad V(\beta) = \exp\{-i\beta Q^5\}, \qquad Q^5 = \int_V d^3x\, J_5^+(x). \qquad (28)$$

The operator of the axial charge will not annihilate the LF vacuum since, in addition to the normal-mode part (which annihilates it), it contains also the zero-mode term, $Q^5 = Q^5_N + Q^5_0$, where

$$Q^5_0 = \sum_{p^\perp,s} 2s\left[ b_0^\dagger(p^\perp,s)\, d_0^\dagger(-p^\perp,-s) + \mathrm{H.c.} + b_0^\dagger(p^\perp,s)\, b_0(p^\perp,s) - d_0^\dagger(p^\perp,s)\, d_0(p^\perp,s) \right]. \qquad (29)$$

The term $b_0^\dagger(p^\perp)\,d_0^\dagger(-p^\perp)$ will generate an infinite set of degenerate vacuum states. One has all the properties needed for deriving the Goldstone theorem in the usual way. We conclude with the statement that the realm of exactly solvable models still offers us certain surprises and room for improvement. And that degenerate vacua exist in the LF formalism in spite of its kinematically defined vacuum state.

References

[1] Klaiber B. (1968) The Thirring model. In: Lectures in Theoretical Physics, Vol. Xa, Boulder 1976: 141-176
[2] Lowenstein J. H. and Swieca J. A. (1971) Quantum electrodynamics in two-dimensions. Ann. Phys. 68: 172-195
[3] Morchio G. and Strocchi F. (1988) The Schwinger Model Revisited. Ann. Phys. 188: 217-238
[4] Martinovic L. and Grange P., work in preparation
[5] Schroer B. (1963) Infrateilchen in der Quantenfeldtheorie. Fort. der Physik 11: 1-31
[6] Thirring W. (1958) A soluble relativistic field theory. Ann. Phys. 3: 91-112
[7] Federbush K. (1961) A Two-Dimensional Relativistic Field Theory. Phys. Rev. 121: 1247-1249
[8] Wightman A. S. (1964) Introduction to Some Aspects of the Relativistic Dynamics of Quantized Fields. In: Cargese Lectures in Theoretical Physics, Gordon and Breach, New York 1967: 171
[9] Schroer B. and Truong T. (1977) Equivalence of Sine-Gordon and the Thirring Model and Cumulative Mass Effects. Phys. Rev. D15: 1684-1693
[10] Gross D. and Neveu A. (1974) Dynamical symmetry breaking in asymptotically free field theories. Phys. Rev. D 10: 3235-3253
[11] Thirring W. and Wess J. (1964) Solution of a field theory model in one space and one time dimensions. Ann. Phys. 27: 331-337
[12] Haller K. and Lim-Lombridas E. (1994) Quantum gauge equivalence in QED. Found. of Phys. 24: 217-247
[13] Abdalla E., Abdalla M. C. B. and Rothe K. D. (2001) Nonperturbative methods in two-dimensional quantum field theory. World Scientific, Singapore, 832 p.
[14] Hetrick J. E. and Hosotani Y. (1988) QED on a circle. Phys. Rev. D 38: 2621-2624
[15] Martinovic L. (2001) Gauge symmetry and the light-front vacuum structure. Phys. Lett. B509: 355-364
[16] Martinovic L. and Vary J. P. (2001) Fermionic zero modes and spontaneous symmetry breaking on the light front. Phys. Rev. D64: 15016-15020
[]
[ "Non-commutativity on another Minkowski space-time: Vierbein formalism and Higgs approach", "Non-commutativity on another Minkowski space-time: Vierbein formalism and Higgs approach" ]
[ "Abolfazl Jafari \nDepartment of Physics\nFaculty of Science\nShahrekord University\nP. O. Box 115ShahrekordIran\n" ]
[ "Department of Physics\nFaculty of Science\nShahrekord University\nP. O. Box 115ShahrekordIran" ]
[]
We extend the non-commutative coordinate relationship to space-times other than Minkowski and clarify how the non-commutativity depends on the geometrical structure. We also find an inverse map between Riemann's normal and global coordinates. Furthermore, we show that the corresponding coordinate non-commutativity behaves like a tensor. All results are summarized for the Schwarzschild metric and for space-time in the presence of weak gravitational waves.
null
[ "https://arxiv.org/pdf/1811.09670v1.pdf" ]
119,395,232
1811.09670
caa72db8bd259bd3e74f49b1797dbb3b07881497
Non-commutativity on another Minkowski space-time: Vierbein formalism and Higgs approach

Abolfazl Jafari
Department of Physics, Faculty of Science, Shahrekord University, P. O. Box 115, Shahrekord, Iran
10 Nov 2018 (Dated: November 27, 2018). arXiv:1811.09670v1 [physics.gen-ph]
PACS numbers: 03.67.Mn, 73.23.-b, 74.45.+c, 74.78.Na
Keywords: Non-commutative Coordinates, Vierbein Field, Pseudo-Riemann's Manifolds, Schwarzschild Universe, Weak Gravitational Waves

We extend the non-commutative coordinate relationship to space-times other than Minkowski and clarify how the non-commutativity depends on the geometrical structure. We also find an inverse map between Riemann's normal and global coordinates, and show that the corresponding coordinate non-commutativity behaves like a tensor. All results are summarized for the Schwarzschild metric and for space-time in the presence of weak gravitational waves.

INTRODUCTION

Space-time is locally flat, and Minkowski space-time is the asymptotic conception of a more general space-time. It also carries a connection structure. When a pseudo-Riemannian manifold is equipped with a connection, a system of measurements becomes available. One can always span each neighborhood of an event 'P' in space-time by local coordinates if there exists a unique geodesic joining any point of the neighborhood to 'P'. Therefore, any Riemannian manifold can be equipped with normal coordinates [1,2]. Minkowski space-time is the asymptotic form of any solution of Einstein's equation and, in this notion, describes a free-falling frame; it is always available when the measurements are made in an inertial frame. The vierbein field is a tool for generalizing a theorem from the tangential space to its main one. In general relativity, a vierbein field is a set of four orthonormal vector fields.
All tensorial quantities on a manifold are expressed by using the frame field and its dual co-frame field. The set of vierbein fields contains "d" vector fields, their number being equal to the dimension of space-time. One of them is time-like, while the rest are space-like. They live on a Lorentzian manifold interpreted as a model of space-time. It is important to recognize that frame fields are geometric quantities: they are built from the metric tensor of a general space-time, and they make sense independently of a choice of a coordinate chart, as do the notions of orthogonality and length. The vierbein formalism is therefore a covariant approach, by which the Minkowski versions of some physical quantities are generalized to a general space-time. The vierbein field $a^\alpha_\mu$ has two kinds of indices: $\mu$ indicates the coordinates of the general space-time, while $\alpha$ is only a counter. The space-time indices are moved with the metric $g_{\mu\nu}$, while the vierbein counters are raised with $\eta^{\alpha\beta}$; the number of vierbein fields is enumerated by $\alpha$. The formalism is invariant under Lorentz transformations in $\alpha$ as well as in $\mu$. Lorentz indices run from zero to "d", the dimension of space-time [1,3,4]. The vierbein field is a square root of the metric tensor [1-5]:

$$g_{\mu\nu}(x) = a^\alpha_\mu(y)\,\eta_{\alpha\beta}\,a^\beta_\nu(y). \qquad (1)$$

Clearly, in the whole of the manifold the vierbein field is given by $a^\alpha_\mu(y) = \delta^\alpha_\mu$ when gravity is absent. There is another way to find the dependence of the non-commutativity on the geometrical characteristics, named the Higgs approach. For a small neighborhood of 'P' in M, global and local coordinates are related to each other, and the local metric of this region is also obtained by employing the Higgs approach. We deal with non-commutative coordinates when we attempt to modify physics at short distances [6]. We can introduce the non-commutative framework by replacing the local coordinates $y^\mu$ with Hermitian operators $\hat y^\mu$.
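The square-root relation between the vierbein and the metric can be checked numerically for a diagonal metric. A minimal sketch, assuming the Schwarzschild line element with signature (-,+,+,+) and illustrative values of `rs`, `r`, `theta` (all three are assumptions for the demo, not taken from the text):

```python
import numpy as np

# Schwarzschild metric components at a chosen point (G = c = 1).
rs, r, theta = 1.0, 10.0, np.pi / 3
g = np.diag([-(1 - rs / r),
             1 / (1 - rs / r),
             r**2,
             (r * np.sin(theta))**2])
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# For a diagonal metric the vierbein a^alpha_mu is diagonal as well:
a = np.diag(np.sqrt(np.abs(np.diag(g))))

# Check g_{mu nu} = a^alpha_mu eta_{alpha beta} a^beta_nu, Eq. (1) above.
print(np.allclose(a.T @ eta @ a, g))         # True
```

The sign of each metric component is restored by $\eta_{\alpha\beta}$, which is why the absolute values suffice in the square root.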
These operators obey

  $[\hat y^\mu, \hat y^\nu] = i\,\tilde\theta^{\mu\nu}(\hat y)$,   (2)

which for Minkowski space-time reduces to

  $[\hat y^\mu, \hat y^\nu] = i\,\theta^{\mu\nu}$,   (3)

where $\theta^{\mu\nu}$ is a constant, real parameter [6-12]. In the limit of the tangent space $T_pM$, the results of the non-commutative version of physics depend on the choice of an event in space-time; accordingly, we determine $\tilde\theta(\hat y)$ up to first order in the Riemann curvature tensor. Since many properties of the tangent space are induced directly from the base manifold, the non-commutative relation on $T_pM$ descends from $M$; this is why we study the coordinate non-commutativity on $M$ itself. The purpose of this paper is to determine the dependence of $\tilde\theta$ on the coordinates. We also construct a second map, other than the vierbein formalism, that carries the Minkowskian coordinate non-commutativity back to the original manifold, and we compare the results of the two methods.

PRESENTATION OF THE THEORY

We construct two sets of vierbein fields from the given metrics: that of weak gravitational waves and that of the Schwarzschild universe. Consider an event $P$ of space-time, with coordinates $Y_p$, and suppose the relevant geometric quantities are slowly varying. Local and global coordinates ($y$ and $x$) span the pseudo-Riemannian manifold partially and globally, respectively. Near $Y_p$ there is a fundamental relation linking them to one another,

  $x^\mu = Y_p^\mu + y^\mu + A^\mu_{\alpha\beta}(Y_p)\, y^\alpha y^\beta + B^\mu_{\alpha\beta\gamma}(Y_p)\, y^\alpha y^\beta y^\gamma + \cdots$,   (4)

where $A^\mu_{\alpha\beta}$ and $B^\mu_{\alpha\beta\gamma}$ are built from the Christoffel coefficients and their derivatives, evaluated at the event point. The coordinates $Y_p$ refer to a global frame, and the validity of Eq. (4) is restricted to a small neighborhood in $T_pM$; the variation of the global coordinate $Y_p$ is small compared to the local coordinate $y$.
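Before passing to curved space, it may help to see the flat relation (3) in a concrete toy realization (ours, not the paper's): on functions of a single coordinate, let $\hat y^1$ act by multiplication and take $\hat y^2 = -i\theta\,\partial_{y^1}$; the commutator is then exactly $i\theta$. A short symbolic check:

```python
import sympy as sp

y, theta = sp.symbols('y theta', real=True)
f = sp.Function('f')(y)

# Hypothetical one-dimensional realization of eq. (3), for illustration only:
# y1 acts by multiplication, y2 = -i*theta*d/dy, so [y1, y2] = i*theta.
y1 = lambda g: y * g
y2 = lambda g: -sp.I * theta * sp.diff(g, y)

commutator = sp.expand(y1(y2(f)) - y2(y1(f)))
assert sp.simplify(commutator - sp.I * theta * f) == 0
```

Any constant antisymmetric $\theta^{\mu\nu}$ in higher dimensions admits the analogous realization coordinate pair by coordinate pair.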
Nevertheless, the metric expansion becomes [1, 3]

  $g_{\alpha\beta}(y) = g_{\alpha\beta}(Y_p) - \tfrac{1}{3} R_{\alpha\mu\beta\nu}(Y_p)\, y^\mu y^\nu - \tfrac{1}{3!} R_{\alpha\gamma\beta\nu;\mu}(Y_p)\, y^\nu y^\mu y^\gamma + \cdots$,   (5)

with, of course, $g_{\alpha\beta}(Y_p) \approx \eta_{\alpha\beta}$, and

  $\Gamma^\alpha_{\beta\gamma}(y) = \Gamma^\alpha_{\beta\gamma}(Y_p) + \partial_\mu \Gamma^\alpha_{\beta\gamma}(Y_p)\, y^\mu + \cdots$.   (6)

Hence, at the origin of the frame, the Riemann curvature tensor is

  $R^\alpha_{\ \beta\mu\nu}(Y_p) = \big[\partial_\nu \Gamma^\alpha_{\beta\mu} - \partial_\mu \Gamma^\alpha_{\beta\nu}\big]_{Y_p}$.   (7)

Static case of $T_pM$.—The Schwarzschild universe is a static solution of Einstein's equation [4, 13]. We take $T_pM$ to be a space falling along a geodesic of Schwarzschild space-time; the hypersurfaces of constant time are perpendicular to the geodesic, so the affine connections are time-independent [1, 3]. Based on Eq. (5), we obtain a set of vierbein fields specialized to the Schwarzschild universe. In matrix representation [14, 15],

  $a^\alpha_{\ \mu} = \begin{pmatrix} 1 - \tfrac12 R^0_{\ l0m} y^l y^m & -\tfrac16 R^0_{\ l1m} y^l y^m & -\tfrac16 R^0_{\ l2m} y^l y^m & -\tfrac16 R^0_{\ l3m} y^l y^m \\ -\tfrac12 R^1_{\ l0m} y^l y^m & 1 - \tfrac16 R^1_{\ l1m} y^l y^m & -\tfrac16 R^1_{\ l2m} y^l y^m & -\tfrac16 R^1_{\ l3m} y^l y^m \\ -\tfrac12 R^2_{\ l0m} y^l y^m & -\tfrac16 R^2_{\ l1m} y^l y^m & 1 - \tfrac16 R^2_{\ l2m} y^l y^m & -\tfrac16 R^2_{\ l3m} y^l y^m \\ -\tfrac12 R^3_{\ l0m} y^l y^m & -\tfrac16 R^3_{\ l1m} y^l y^m & -\tfrac16 R^3_{\ l2m} y^l y^m & 1 - \tfrac16 R^3_{\ l3m} y^l y^m \end{pmatrix}$,   (8)

which simplifies to

  $a^\alpha_{\ \mu} = \delta^\alpha_\mu - \frac{\xi^\alpha_{\ \mu}}{6}\, R^\alpha_{\ l\mu k}\, y^l y^k$,   (9)

where $\xi^\alpha_{\ i} = 1$ and $\xi^\alpha_{\ 0} = 3$. To carry the coordinate non-commutativity from the $T_pM$ version over to $M$, we can write

  $\tilde\theta_{\mu\nu} = a^\alpha_{\ \mu}\, \theta_{\alpha\beta}\, a^\beta_{\ \nu}$.   (10)

Now, substituting $\hat y^\mu$ for $y^\mu$ and Eq. (9) into Eq. (10), we obtain the coordinate dependence of $\tilde\theta(\hat y)$:

  $\tilde\theta_{\mu\nu} = \theta_{\mu\nu} - \frac{\xi^\alpha_{\ \nu}}{6}\, \theta_{\mu\alpha} R^\alpha_{\ l\nu k}\, \hat y^l \hat y^k - \frac{\xi^\alpha_{\ \mu}}{6}\, \theta_{\alpha\nu} R^\alpha_{\ l\mu k}\, \hat y^l \hat y^k$.   (11)

When $\theta^{0i} = 0$, the special case of Eq. (11) becomes

  $\tilde\theta_{ij} = \theta_{ij} - \tfrac16\, \theta_{ik} R^k_{\ ljm}\, \hat y^l \hat y^m - \tfrac16\, \theta_{kj} R^k_{\ lim}\, \hat y^l \hat y^m$.   (12)

Higgs mapping.—In this section, we set up a local frame on a time-independent small subset of $M$ at an event point $P$ of space-time.
Then the global coordinate $x$ of $M$ can be written in terms of the coordinate of $P$ and the local coordinate $y$, where the coordinates $y^\mu$ span a small neighborhood of $P$:

  $x^\mu = Y_p^\mu + y^\mu + A^\mu_{\alpha\beta}(p)\, y^\alpha y^\beta + B^\mu_{\alpha\beta\gamma}(p)\, y^\alpha y^\beta y^\gamma + \cdots$,   (13)

where $Y_P$ is the global coordinate of $P$. Since Eq. (5) is derived from the definition (13), $A^\mu_{\alpha\beta}(p)$ is zero. Excluding the higher-order terms, Eq. (13) becomes

  $x^\mu = Y_p^\mu + y^\mu + B^\mu_{\alpha\beta\gamma}(p)\, y^\alpha y^\beta y^\gamma$.   (14)

Here $B^\mu_{\alpha\beta\gamma}$ is a combination of Christoffel coefficients, $B^\mu_{\alpha\beta\gamma} = \lambda_\mu \Gamma^\mu_{\alpha\beta,\gamma}$, in which the $\lambda_\mu$ are fixed for the Schwarzschild universe by the Higgs approach [16]. With this choice the Christoffel coefficients are time-independent, so for the Schwarzschild universe

  $x^\mu = Y^\mu(P) + y^\mu + \lambda_\mu \Gamma^\mu_{\alpha\beta,\nu}(P)\, y^\alpha y^\beta y^\nu$.   (15)

The coefficients $\lambda_0$ and $\lambda_i$ take their values from comparing

  $dx^\nu \eta_{\nu\mu}\, dx^\mu = \{-1 - 4\lambda_0 \Gamma^0_{m0,n}\, y^m y^n\}\, dy^0 dy^0 + \{1 + 2\lambda_1 \Gamma^1_{00,1} + \cdots\}\, dy^1 dy^1 + \cdots$

with Eq. (5). Our calculations give $\lambda_{\rm temporal} = -\tfrac14$ and $\lambda_{\rm spatial} = \tfrac12$. With the condition $\theta^{0i} = 0$, we have

  $\tilde\theta^{ij} = \theta^{ij} - \tfrac16\, \theta^{kj} R^i_{\ mkn}\, y^m y^n - \tfrac16\, \theta^{ik} R^j_{\ mkn}\, y^m y^n$,   (16)

specialized to the Schwarzschild universe. One can see that $\tilde\theta^{ij}$ in Eq. (16) is an antisymmetric tensor, because it transforms by the tetrad approach.

Dynamic case of $T_pM$.—According to general relativity, the curvature of space-time is affected by massive bodies, and gravitational waves are among the solutions of Einstein's equation, arising from changes in the separation of two massive bodies. Gravitational waves are thus ripples in the curvature of space-time generated by gravitational interactions; they arrive from the depths of space, travel outward from their source at the speed of light, and come with various amplitudes and frequencies. We specifically assume a space-time filled with weak gravitational effects, taken to depend on time only.
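The antisymmetry claim for Eq. (16) can be verified symbolically: the correction terms pair off for a generic antisymmetric $\theta$ and completely arbitrary coefficients $R^i_{\ mkn}$. A sketch (random integer curvature components stand in for the Schwarzschild values, an assumption for illustration only):

```python
import itertools
import random

import sympy as sp

random.seed(0)
n = 3
y = sp.symbols('y1:4')

# generic antisymmetric theta^{ij}
th = sp.Matrix(n, n, lambda i, j: sp.Symbol('th%d%d' % (min(i, j), max(i, j)))
               * (1 if i < j else (-1 if i > j else 0)))
# arbitrary curvature components R^i_{mkn} (random integers, purely illustrative)
R = {idx: random.randint(-3, 3) for idx in itertools.product(range(n), repeat=4)}

def theta_tilde(i, j):
    # eq. (16): theta~^{ij} = theta^{ij} - (1/6)(th^{kj} R^i_{mkn} + th^{ik} R^j_{mkn}) y^m y^n
    corr = sum((th[k, j] * R[(i, m, k, p)] + th[i, k] * R[(j, m, k, p)]) * y[m] * y[p]
               for k in range(n) for m in range(n) for p in range(n))
    return th[i, j] - sp.Rational(1, 6) * corr

for i in range(n):
    for j in range(n):
        assert sp.expand(theta_tilde(i, j) + theta_tilde(j, i)) == 0
```

Since the cancellation uses only $\theta^{kj} = -\theta^{jk}$, the antisymmetry holds for any curvature, not merely the Schwarzschild one.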
Gravitational waves are described by a small fluctuation of the metric tensor [17-20]; they propagate in the $z$-direction, and the perturbation is taken to depend on time alone. We decompose the metric tensor as

  $ds^2 = \eta_{\mu\nu}\, dx^\mu dx^\nu + h_{ij}(t)\, dx^i dx^j$,   (17)

where Latin indices run from 1 to 3 while Greek indices take the values 0, 1, 2, 3. Indeed, the first-order perturbation $h_{ij}$ is a function of coordinates which we assume to be time-dependent only. The simplest form of the gravitational background is

  $h_{\mu\nu} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & h_{12} & 0 \\ 0 & h_{21} & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$,   (18)

with $D^\mu h_{\mu\nu} = 0$, where $D_\mu$ denotes the covariant derivative. Setting $h_{12} = h_{21}$ gives the familiar "$\times$" polarization [20, 21]. To describe the frame field of $T_pM$, it is necessary to use a set of localized coordinates; we note that $T_xM$ carries the first order of the Riemann curvature tensor. The nonzero components of the Christoffel connection are $\Gamma^0_{12}$, $\Gamma^1_{02}$ and $\Gamma^2_{01}$, and the only nonzero independent component of the Riemann curvature tensor is $R^1_{\ 020} = +\tfrac12 \ddot h_{12}$. It is possible to build the vierbein field from a modified version of the relation found in Refs. [14, 15]; the relation offered there is valid only for the static case of $T_pM$, so for gravitational waves we must change it. Our vierbein field satisfies the corresponding equation of motion

  $D_\mu a^\alpha_{\ \nu} = \partial_\mu a^\alpha_{\ \nu} - a^\alpha_{\ \sigma}\, \Gamma^\sigma_{\mu\nu} = 0$.

Since the nonzero Christoffel connections are $\Gamma^1_{02} = \Gamma^2_{01} = -\tfrac{\dot h}{2}$, the time-dependent vierbein components solve $a^\alpha_{\ \nu} = \int dt\, a^\alpha_{\ \sigma}\, \Gamma^\sigma_{0\nu}$, which reduces to $\dot a^2_{\ 1} = a^2_{\ 2}\, \Gamma^2_{01} = -\tfrac{\dot h}{2}\, a^2_{\ 2}$ and $\dot a^1_{\ 2} = a^1_{\ 1}\, \Gamma^1_{02} = -\tfrac{\dot h}{2}\, a^1_{\ 1}$. In the absence of gravity we may choose coordinates such that $a^\alpha_{\ \mu}(y) = \delta^\alpha_\mu$, so to first order we can set $a^\alpha_{\ \mu} = \delta^\alpha_\mu + \int \Gamma^\alpha_{\ 0\mu}\, dt$.
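The quoted connection components can be checked directly. Assuming signature $(+,-,-,-)$ with only $g_{12} = g_{21} = h(t)$ perturbing the flat metric, as in Eqs. (17)-(18), a symbolic computation gives $\Gamma^1_{02} = \Gamma^2_{01} = \Gamma^0_{12} = -\dot h/2$ to first order in $h$:

```python
import sympy as sp

t, xs, ys, zs = sp.symbols('t x y z')
h = sp.Function('h')(t)
coords = [t, xs, ys, zs]

# eqs. (17)-(18) with signature (+,-,-,-): only g_12 = g_21 = h(t) is perturbed
g = sp.Matrix([[1, 0, 0, 0],
               [0, -1, h, 0],
               [0, h, -1, 0],
               [0, 0, 0, -1]])
ginv = g.inv()

def Gamma(lam, mu, nu):
    # Christoffel symbol of the second kind, (1/2) g^{lam s}(d_mu g_{s nu} + d_nu g_{s mu} - d_s g_{mu nu})
    return sp.Rational(1, 2) * sum(
        ginv[lam, s] * (sp.diff(g[s, nu], coords[mu]) + sp.diff(g[s, mu], coords[nu])
                        - sp.diff(g[mu, nu], coords[s]))
        for s in range(4))

hdot = sp.Symbol('hdot')
for (l, m, n) in [(1, 0, 2), (2, 0, 1), (0, 1, 2)]:
    first_order = sp.simplify(Gamma(l, m, n).subs(sp.Derivative(h, t), hdot).subs(h, 0))
    assert first_order == -hdot / 2
```

The sign of $R^1_{\ 020} = +\tfrac12 \ddot h_{12}$ additionally depends on the curvature convention, so only the Christoffel values are asserted here.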
In the language of the Riemann curvature tensor, the components of the vierbein field are $a^\alpha_{\ \mu} = \delta^\alpha_\mu - \int^t\!\!\int^t dt\, dt\; R^\alpha_{\ 0\mu 0}(t)$, whose matrix form is

  $a^\alpha_{\ \mu} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -\tfrac{h}{2} & 0 \\ 0 & -\tfrac{h}{2} & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.   (19)

In this way we can replicate Eq. (10), $\tilde\theta_{\mu\nu} = a^\alpha_{\ \mu}\, \theta_{\alpha\beta}\, a^\beta_{\ \nu}$. With direct substitution, in matrix representation,

  $\begin{pmatrix} 0 & \tilde\theta_{01} & \tilde\theta_{02} & \tilde\theta_{03} \\ \tilde\theta_{10} & 0 & \tilde\theta_{12} & \tilde\theta_{13} \\ \tilde\theta_{20} & \tilde\theta_{21} & 0 & \tilde\theta_{23} \\ \tilde\theta_{30} & \tilde\theta_{31} & \tilde\theta_{32} & 0 \end{pmatrix} = \begin{pmatrix} 0 & \theta_{01} - \tfrac{h}{2}\theta_{02} & -\tfrac{h}{2}\theta_{01} + \theta_{02} & \theta_{03} \\ \theta_{10} - \tfrac{h}{2}\theta_{20} & 0 & \theta_{12} & \theta_{13} - \tfrac{h}{2}\theta_{23} \\ -\tfrac{h}{2}\theta_{10} + \theta_{20} & \theta_{21} & 0 & -\tfrac{h}{2}\theta_{13} + \theta_{23} \\ \theta_{30} & \theta_{31} - \tfrac{h}{2}\theta_{32} & -\tfrac{h}{2}\theta_{31} + \theta_{32} & 0 \end{pmatrix}$.   (20)

Therefore, in the presence of gravitational waves the coordinate dependence of the non-commutativity is achievable. Using the expansion of the metric perturbation, the result simplifies to

  $\tilde\theta_{\mu\nu} = \theta_{\mu\nu} - \theta_{\mu\beta} \int^t\!\!\int^t dt\, dt\; R^\beta_{\ 0\nu 0}(t) - \theta_{\beta\nu} \int^t\!\!\int^t dt\, dt\; R^\beta_{\ 0\mu 0}(t)$,   (21)

in which $\int^t\!\!\int^t dt\, dt\; R^\beta_{\ 0\mu 0}(t) = \tfrac12 h^\beta_{\ \mu}(t)$.

CONCLUSION

This work is carried out to first order in the Riemann curvature tensor and at the level of the tangent space. Within these limits, employing the vierbein field, we found the coordinate dependence of the generalized non-commutative relation. We constructed local fields satisfying the complement relation of Eq. (1) and thereby derived the coordinate dependence for space-times other than Minkowski, specializing the generalized non-commutativity relation to the Schwarzschild universe and to space-time in the presence of weak gravitational waves. We also showed that the coordinate non-commutativity behaves as a tensor. Based on the Higgs approach, we found a second way to obtain the dependence of the non-commutativity on the local coordinates; the results of the two methods are comparable.

ACKNOWLEDGMENTS

The author thanks Shahrekord University for supporting this work with a research grant.

[1] L. Parker, D. J. Toms, Quantum Field Theory in Curved Space-time (Cambridge University Press, London, 2009), pp. 144-151.
[2] Hans C. Ohanian, Gravitation and Space-time (W. W. Norton and Company, New York, 1976), pp. 141-150.
[3] N. D. Birrell, P. C. W. Davies, Quantum Fields in Curved Space (Cambridge University Press, London, 1982), pp. 81-88.
[4] S. Weinberg, Gravitation and Cosmology (John Wiley and Sons, Inc., New York, 1972).
[5] F. de Felice, C. J. S. Clarke, Relativity on Curved Manifolds (Cambridge University Press, New York, 1990).
[6] P. Aschieri, M. Dimitrijevic, P. Kulish, et al., Non-commutative Space-time: Symmetries in Non-commutative Geometry and Field Theory, Lecture Notes in Physics Vol. 774 (Springer, Berlin Heidelberg, 2009), pp. 1-4.
[7] A. Connes, M. Marcolli, Non-commutative Geometry, Quantum Fields and Motives (Academic Press, London, 1994), pp. 1-31.
[8] D. J. Gross, N. A. Nekrasov, JHEP 0103: 044 (2001).
[9] R. J. Szabo, Phys. Rept. 378: 207-299 (2003).
[10] A. Fischer, R. J. Szabo, JHEP 0902: 031 (2009).
[11] N. Seiberg, E. Witten, JHEP 9909: 032 (1999).
[12] A. Jafari, Eur. Phys. J. C 73: 2271 (2013).
[13] R. d'Inverno, Introducing Einstein's Relativity (Oxford University Press Inc., New York, 1993), pp. 269-285.
[14] L. Parker, Phys. Rev. Lett. 44: 1559 (1980).
[15] L. Parker, Phys. Rev. D 22: 1922 (1980).
[16] P. W. Higgs, J. Phys. A: Math. Gen. 12: 309 (1979).
[17] J. Weber, General Relativity and Gravitational Waves (Interscience Publishers Inc., New York, 1961; Dover Publications, Mineola), pp. 87-144.
[18] C. W. Misner, K. S. Thorne, J. A. Wheeler, Gravitation (Freeman Publishing Company, San Francisco, 1973), pp. 435-445.
[19] M. Maggiore, Gravitational Waves (Oxford University Press, Inc., New York, 2008), pp. 42-56.
[20] A. D. Speliotopoulos, Phys. Rev. D 51: 1701-1709 (1995).
[21] A. Saha, S. Gangopadhyay, S. Saha, Phys. Rev. D 83: 025004 (2011).
The Borel equivariant cohomology of real Grassmannians

Jeffrey D. Carlson

1 November 2016
arXiv:1611.01175 (https://arxiv.org/pdf/1611.01175v1.pdf)

Abstract. Recent work of Chen He has determined, through GKM methods, the Borel equivariant cohomology with rational coefficients of the isotropy action on a real Grassmannian and an oriented real Grassmannian. In this expository note, we propound a less involved approach, due essentially to Vitali Kapovitch, to computing equivariant cohomology rings $H^*_K(G/H)$ for $G, K, H$ connected Lie groups, and apply it to recover the equivariant cohomology of the Grassmannians. The bulk is setup and commentary; once one believes in the model, the proof itself is under a page.

The oriented and unoriented real Grassmannians can be viewed as homogeneous quotients

  $\widetilde G_m(\mathbb R^{\ell+m}) = SO(\ell+m)\big/\big(SO(\ell) \times SO(m)\big), \qquad G_m(\mathbb R^{\ell+m}) = SO(\ell+m)\big/S\big(O(\ell) \times O(m)\big)$,

which admit a left isotropy action of $K_0$. Chen He recently computed the Borel equivariant cohomology of these actions [He16, Thms. 5.2.2, 6.3.1, Cor. 5.2.1] through a sophisticated and subtle application of GKM theory. If one is willing to part with the GKM data, expressions for these rings can be found in a substantially simpler way general to the class of homogeneous spaces. This note presents such a simpler proof to propagandize these techniques. The two goals of this note, to be short and to be expository, are in tension, much to the expense of the former. While we have been able to make the proof itself quite brief, to explain why the model that we find so convenient should exist at all takes about two pages. We round the note out with an alternate proof and some historical remarks.

Acknowledgment. This note was informed by useful conversations with Chen He.
In an attempt to unify the three parity cases, we let $0 \le \alpha \le \beta \le 1$ be natural numbers and replace the $\ell, m$ of the introduction respectively with $2n+\alpha$, $2k+\beta$:

  $G := SO(2n+2k+\alpha+\beta); \qquad K_0 := SO(2n+\alpha) \times SO(2k+\beta); \qquad K := S\big(O(2n+\alpha) \times O(2k+\beta)\big)$.

Then $G/K$ is even-dimensional unless $\alpha = \beta = 1$.
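A quick symbolic check (ours, not the note's) of this parity claim: $\dim G/K_0 = \dim SO(2n{+}2k{+}\alpha{+}\beta) - \dim SO(2n{+}\alpha) - \dim SO(2k{+}\beta) = (2n{+}\alpha)(2k{+}\beta)$, which is odd exactly when both factors are odd, i.e. $\alpha = \beta = 1$:

```python
import sympy as sp

n, k = sp.symbols('n k')

def dim_so(m):
    return m * (m - 1) / 2   # dim SO(m) = m(m-1)/2

for alpha in (0, 1):
    for beta in (0, 1):
        N = 2*n + 2*k + alpha + beta
        quotient_dim = dim_so(N) - dim_so(2*n + alpha) - dim_so(2*k + beta)
        # dim G/K0 = (2n+alpha)(2k+beta): even unless alpha = beta = 1
        assert sp.expand(quotient_dim - (2*n + alpha) * (2*k + beta)) == 0
```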
The objective rings $H^*_H(G/H)$ are the cohomology of ${}_H G_H := EH \times_H G \times_H EH$, the "two-sided" homotopy quotient, for $H \in \{K, K_0\}$. The projection $[x, g, y] \mapsto (xK_0, K_0 y)$ makes ${}_{K_0}G_{K_0}$ a $G$-bundle over $BK_0 \times BK_0$. Write $p$ for the total Pontrjagin class of the $BSO(2n+\alpha)$ factor of the left $BK_0$ and $e$ for its Euler class if $\alpha = 0$, and similarly $p'$ and $e'$ for the total Pontrjagin class and potential Euler class of the $BSO(2k+\beta)$ factor of the left $BK_0$. For the right $BK_0$, symmetrically write $\pi, \varepsilon, \pi', \varepsilon'$. These then pull back to classes in $H^*_{K_0}(G/K_0)$ which we notate by the same letters. Faced with an inhomogeneous element like the total Pontrjagin class $p$, we make the abbreviation $\mathbb Q[p] := \mathbb Q[p_1, \ldots, p_n]$, and similarly we denote by $(x - y) := (x_j - y_j)_{j \ge 0}$ the ideal generated by the homogeneous components of $x - y$. We write $\Lambda P$ for the free commutative graded algebra (cga) on a positively-graded rational vector space; it is the tensor product of an exterior algebra (on the odd-degree subspace) and a symmetric algebra (on the even degrees), but we default to the notation $\mathbb Q[P]$ whenever all generators are of even degree.

Theorem 1.2 (He, 2016).
The Borel equivariant cohomology rings of the left action of the isotropy group $K_0$ on the oriented Grassmannian $G/K_0$ and of $K$ on the unoriented Grassmannian $G/K$ are given by

  $H^*_{SO(2n)\times SO(2k)}\, \widetilde G_{2k}(\mathbb R^{2n+2k}) \cong \mathbb Q[p, p', e, e', \pi, \pi', \varepsilon, \varepsilon'] \big/ (pp' - \pi\pi',\ ee' - \varepsilon\varepsilon')$,

  $H^*_{SO(2n)\times SO(2k+1)}\, \widetilde G_{2k}(\mathbb R^{2n+2k+1}) \cong \mathbb Q[p, p', e, \pi, \pi', \varepsilon] \big/ (pp' - \pi\pi')$,

  $H^*_{S(O(2n+1)\times O(2k+1))}\, G_{2k+1}(\mathbb R^{2n+2k+2}) \cong H^*_{SO(2n+1)\times SO(2k+1)}\, \widetilde G_{2k+1}(\mathbb R^{2n+2k+2}) \cong \mathbb Q[p, p', \pi, \pi'] \big/ (pp' - \pi\pi') \otimes \Lambda[\eta]$,

  $H^*_{S(O(2n)\times O(2k))}\, G_{2k}(\mathbb R^{2n+2k}) \cong \mathbb Q[p, p', ee', \pi, \pi', \varepsilon\varepsilon'] \big/ (pp' - \pi\pi',\ ee' - \varepsilon\varepsilon')$,

  $H^*_{S(O(2n)\times O(2k+1))}\, G_{2k}(\mathbb R^{2n+2k+1}) \cong \mathbb Q[p, p', \pi, \pi'] \big/ (pp' - \pi\pi')$,

where $\eta$ restricts under $G \to {}_{K_0}G_{K_0}$ to the suspension in $H^{2n+2k+1}SO(2n+2k+2)$ of the Euler class in $H^{2n+2k+2}_{SO(2n+2k+2)}$, and the relations $e^2 = p_n$, $(e')^2 = p'_k$, $\varepsilon^2 = \pi_n$, $(\varepsilon')^2 = \pi'_k$ are tacit.

Remark 1.3. One can improve the coefficients a tad without complicating the statement. The identifications with Weyl invariants of $H^*(BT^n; \mathbb Z) = \mathbb Z[t_1, \ldots, t_n]$ are given by $p_\ell \mapsto (-1)^\ell \sigma_\ell(t_1^2, \ldots, t_n^2)$, $e \mapsto t_1 \cdots t_n$, where $\sigma_\ell$ is the $\ell$-th elementary symmetric polynomial; we can equivalently write $p = \prod (1 - t_j^2)$.

Remark 1.4. To understand $H^*_K \hookrightarrow H^*_{K_0}$, we determine the $\mathbb Z/2$-action on $H^*_{K_0} \le H^*_T$ induced by the double covering $BK_0 \to BK$. For this, note the nonidentity component of $K = S\big(O(2n+\alpha) \times O(2k+\beta)\big)$ consists of pairs of matrices $(h, h')$ with $\det h = \det h' = -1$, e.g., the pair with $h$ and $h'$ block diagonal of the form $\left[\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right] \oplus [1] \oplus \cdots \oplus [1]$. Conjugating $T^{n+k} = T^n \times T^k$ by this $(h, h')$ inverts one coordinate circle per toral factor, so the induced action on $H^2_{T^{n+k}} \cong H^1(T^{n+k})$ sends the generator $t_1$ to $-t_1$ in $H^2_{T^n}$ and the generator $t_{n+1}$ to $-t_{n+1}$ in $H^2_{T^k}$, thus preserving the $p_\ell$ but sending $e \mapsto -e$.
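That last invariance claim is a one-line computation with symmetric polynomials; a sketch (with a hypothetical rank $n = 4$):

```python
import sympy as sp

n = 4
t = sp.symbols('t1:5')

# total Pontrjagin class p = prod(1 - t_j^2) and Euler class e = t_1 ... t_n
p = sp.expand(sp.prod(1 - tj**2 for tj in t))
e = sp.prod(t)

flip = {t[0]: -t[0]}                         # conjugation inverts one coordinate circle
assert sp.expand(p.subs(flip) - p) == 0      # every p_l is invariant
assert sp.expand(e.subs(flip) + e) == 0      # the Euler class changes sign
```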
Thus e " t 1¨¨¨tn and e 1 " t n`1¨¨¨tn`k are not invariant under the larger symmetry group, but ee 1 still is invariant if it was in the image of HK 0 to begin with, and the image of HK ã Ñ HK 0 ã Ñ HT is # Q " p, p 1 , e 2 , ee 1 , pe 1 q 2 ‰ if α " β " 0, Qrp, p 1 s otherwise. Models Our proof of Theorem , which we rederive briefly here. All our models will be pure Sullivan algebras. Definition 2.1. A pure Sullivan algebra pΛQ b ΛP, dq is a finitely-generated commutative differential graded algebra ( ) over the rationals, with ΛP an exterior algebra, ΛQ a symmetric algebra, and d a derivation of degree 1 such that dP ď ΛQ and dΛQ " 0. A pure Sullivan model of a space X is a map from a pure Sullivan algebra pΛQ b ΛP, dq to the algebra A PL pXq of polynomial differential forms on X. All we need to know about the latter algebra is that it is a Qquasi-isomorphic as a to the singular cochain algebra C˚pX; Qq. In what follows we will mostly refer to models in terms of their source algebras and take the maps to A PL pXq for granted. F / / F 1 E / / E 1 q B f / / B 1 and Sullivan models pΛV B 1 , dq ÝÑ pΛV B , dq for f and pΛV B 1 , dq ÝÑ pΛV B 1 b ΛV F 1 , dq for q, if H˚F 1 ÝÑ H˚F is an isomorphism, π 1 B " π 1 B 1 " π 0 E " π 0 E 1 " 0, and either H˚F or both of H˚B and H˚B 1 are of finite type, then E admits a Sullivan model pΛV E , dq " pΛV B , dq b pΛV B 1 ,dq pΛV B 1 b ΛV F 1 , dq -pΛV B b ΛV F 1 , dq. We will apply the theorem in the following situation. Note that G admits a natural "two-sided" action of G 2 given by pg, hq¨x " gxh´1. Let U ď G 2 be any closed, connected subgroup acting by the restriction of the two-sided action and consider the homotopy quotient G U ; note K 0 G K 0 , whose cohomology we aim to compute, is G U for U " K 2 0 . 
The associated Borel fibration

  $G_U = (EG \times G \times EG)/U \longrightarrow (EG \times EG)/U = BU, \qquad [e, g, e'] \mapsto [e, e']$,

admits a $G$-bundle map to the bundle $\Delta: BG = EG \times_G EG \to BG \times BG$, given on total spaces as $[e, g, e'] \mapsto [eg, e'] = [e, ge']$. The induced map $BU \to BG \times BG$ downstairs is $Bi$, induced functorially by the inclusion $i: U \hookrightarrow G^2$. Then Theorem 2.2, applied to the map

  G ————— G
  |         |
  G_U ———→ BG
  |         | Δ
  BU ——Bi—→ BG × BG,        (2.1)

will give us a model of $G_U$ once we have models for the maps $BU \to BG^2$ and $BG \to BG^2$. The first observation is that by Hopf's theorem [Hop41, Satz I], the cohomology of $G$ is a Hopf algebra, whose underlying algebra is the exterior algebra $\Lambda P$ on the space of primitive elements $P = PH^*G$. Thus $G$ admits a pure Sullivan model $(\Lambda P, 0)$ with trivial differential. For the second observation, write $\Sigma P$ for the suspension of $P$, the graded vector space defined by $(\Sigma P)^n = P^{n-1}$, and $QH^*_G = \widetilde H^*_G \big/ \widetilde H^*_G \cdot \widetilde H^*_G$ for the space of irreducibles of $H^*_G$. A set of $\mathbb Q$-algebra generators for $H^*_G$ is exactly the image of a basis for $QH^*_G$ under a linear splitting of the projection $H^*_G \twoheadrightarrow QH^*_G$. Borel's theorem [Bor53, Thm. 19.1] states that the transgression $\tau$ in the Serre spectral sequence of the universal bundle $G \to EG \to BG$ induces a natural graded linear isomorphism $\Sigma P \xrightarrow{\ \sim\ } QH^*_G$, and moreover, $H^*_G$ is the polynomial algebra on any lift of $QH^*_G$. Picking such a lift at random, we can write $\tau P = \Sigma P \cong QH^*_G \le H^*_G$. The same holds of $BU$. Thus a model of $Bi: BU \to BG \times BG$ is its own cohomology $(H^*_G \otimes H^*_G, 0) \to (H^*_U, 0)$, equipped with the zero differential. The map $\Delta: BG \to BG \times BG$ is the diagonal up to homotopy, so it induces in cohomology the cup product $H^*_G \otimes H^*_G \to H^*_G$, whose kernel is the ideal $(1 \otimes \tau z - \tau z \otimes 1 : z \in P)$ since $H^*_G$ is the symmetric algebra on the space $Q \cong QH^*_G = \tau P$. The Serre spectral sequence of the right bundle in (2.1) thus suggests we model $\Delta$ by the inclusion

  $(H^*_G \otimes H^*_G, 0) \hookrightarrow (H^*_G \otimes H^*_G \otimes H^*G, d')$,

where $d'$ vanishes on $H^*_G \otimes H^*_G$ and takes $z \in P$ to $1 \otimes \tau z - \tau z \otimes 1$ [Esc92, Prop., p. 157].
From these two models, Theorem 2.2 constructs the model $(H^*_U \otimes H^*G, d)$ for $G_U$, where $d$ is the derivation that vanishes on $H^*_U$ and takes $z \in PH^*G$ to $(Bi)^* d' z \in H^*_U$, for $d'$ the differential in the model $(H^*_G \otimes H^*_G \otimes H^*G, d')$ of $BG$. Specializing to the case $U = H_0 \times K_0$, for $H_0$ and $K_0$ closed, connected subgroups of $G$, we get a pure Sullivan algebra $(H^*_{H_0} \otimes H^*_{K_0} \otimes H^*G, d)$ modeling ${}_{K_0}G_{H_0}$, where $d$ vanishes on $H^*_{K_0} \otimes H^*_{H_0}$ and takes a primitive $z \in PH^*G$ to $(\rho^*_{H_0} \otimes \rho^*_{K_0})(1 \otimes \tau z - \tau z \otimes 1)$, for $\rho^*_{H_0}: H^*_G \to H^*_{H_0}$ and $\rho^*_{K_0}: H^*_G \to H^*_{K_0}$ the maps induced functorially by the inclusions of $H_0$ and $K_0$ in $G$. Specializing again to the case $H_0 = K_0$, we get a pure Sullivan algebra $\big((H^*_{K_0})^{\otimes 2} \otimes H^*G, d\big)$ modeling ${}_{K_0}G_{K_0}$, where $d$ vanishes on $(H^*_{K_0})^{\otimes 2}$ and takes a primitive $z \in PH^*G$ to $(\rho^* \otimes \rho^*)(1 \otimes \tau z - \tau z \otimes 1)$, for $\rho^* = \rho^*_{K_0}$. Specializing instead to the case $H_0 = 1$, we get back the Cartan algebra computing $H^*(G/K_0)$, which we will discuss more in Section 4.

Proof

Equipped with Kapovitch's model $\big((H^*_{K_0})^{\otimes 2} \otimes H^*G, d\big)$, Theorem 1.2 becomes fairly straightforward.

Proof of Theorem 1.2. We start with the oriented Grassmannians, to which the model applies directly. In the even-dimensional cases, since $\operatorname{rk} G = \operatorname{rk} K_0$, the map $\rho^*: H^*_G \to H^*_{K_0}$ is an injection and $H^*(G/K_0) \cong H^*_{K_0}/(\rho^* \widetilde H^*_G)$ by a result of Leray [Ler49a], where the tilde indicates elements of positive degree. This means the Serre spectral sequence of $G/K_0 \to BK_0 \to BG$ collapses at $E_2$, so $H^*_{K_0}$ is a free graded $H^*_G$-module [Bau68, Thm. 6.3] and in particular flat. Viewing the cdga $(H^*_G \otimes H^*G,\ z \mapsto \tau z)$ as an $H^*_G$-module resolution $H^*_G \otimes \Lambda^\bullet PH^*G \to \mathbb Q$ of $\mathbb Q$, the cohomology of $(H^*_{K_0})^{\otimes 2} \otimes H^*G$ can be seen as $\operatorname{Tor}_{H^*_G}\big(\mathbb Q, (H^*_{K_0})^{\otimes 2}\big) \cong \operatorname{Tor}_{H^*_G}(H^*_{K_0}, H^*_{K_0})$. By flatness, the higher $\operatorname{Tor}_j$ vanish, so this is just $\operatorname{Tor}_0^{H^*_G}(H^*_{K_0}, H^*_{K_0}) = H^*_{K_0} \otimes_{H^*_G} H^*_{K_0}$.
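The restriction map $\rho^*$ entering this differential acts, under the standard identification of Pontrjagin and Euler classes with symmetric polynomials in torus generators ($p = \prod (1 - t_j^2)$, $e = t_1 \cdots t_n$), simply by splitting the variables between the two factors of $K_0$; the products visibly factor. A sketch with hypothetical ranks $n = 2$, $k = 3$:

```python
import sympy as sp

n, k = 2, 3
t = sp.symbols('t1:%d' % (n + k + 1))

# classes of G = SO(2(n+k)) in maximal-torus generators
p_G = sp.expand(sp.prod(1 - tj**2 for tj in t))   # total Pontrjagin class
e_G = sp.prod(t)                                  # Euler class

# restriction to K0 = SO(2n) x SO(2k): the maximal torus splits as T^n x T^k
p_l = sp.prod(1 - tj**2 for tj in t[:n])
p_r = sp.prod(1 - tj**2 for tj in t[n:])
e_l, e_r = sp.prod(t[:n]), sp.prod(t[n:])

assert sp.expand(p_G - sp.expand(p_l * p_r)) == 0   # rho*(p~) = p p'
assert sp.expand(e_G - e_l * e_r) == 0              # rho*(e~) = e e'
```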
Since the Pontrjagin classes and Euler class generate $H^*_G$, the tensor product can be computed explicitly by modding out of $(H^*_{K_0})^{\otimes 2}$ the ideal identifying in the two copies of $H^*_{K_0}$ the images of the positive-degree components of $H^*_G$, which are generated by the images of the Pontrjagin class $\tilde p$ and Euler class $\tilde e$. But it is clear $\rho^*$ sends $\tilde p = \prod_1^{n+k} (1 - t_j^2) \mapsto \prod_1^n (1 - t_j^2) \cdot \prod_1^k (1 - t_{n+j}^2) = pp'$ and $\tilde e = \prod_1^{n+k} t_j \mapsto ee'$, so we obtain the first two expressions in the statement of the theorem by modding $pp' - \pi\pi'$ and $ee' - \varepsilon\varepsilon'$ out of $H^*_{K_0} \otimes H^*_{K_0}$. For the odd-dimensional oriented Grassmannian, note that by the previous case, the restriction of $\rho^*$ to the subring $\mathbb Q[\tilde p_1, \ldots, \tilde p_{n+k}]$ remains a flat ring extension, but $\tilde e = t_1 \cdots t_{n+k+1}$ must be sent to zero because the maximal torus of $K_0$ has rank only $n+k$. Thus if $\eta \in H^{2n+2k+1}G$ is the primitive that transgresses to $\tilde e$, the tensor factor $\Lambda[\eta]$ splits off $H^*({}_{K_0}G_{K_0})$, while the other factor $H^*_{K_0} \otimes_{H^*_G} H^*_{K_0}$ is as before. To obtain the expressions for the unoriented Grassmannian, note that ${}_{K_0}G_{K_0} \to {}_K G_K$ is a normal covering with deck group $\mathbb Z/2 \times \mathbb Z/2$. Thus [Hat02, Prop. 3G.1] we may identify $H^*({}_K G_K)$ with the $(\mathbb Z/2 \times \mathbb Z/2)$-invariants of $H^*({}_{K_0}G_{K_0})$. From Remark 1.4, the generator of the first $\mathbb Z/2$ factor reverses $e, e'$ and fixes the other generators, and symmetrically that of the second factor reverses only $\varepsilon, \varepsilon'$. This yields the expressions for $H^*({}_K G_K)$. □

Corollary 3.1. In ordinary cohomology,

  $H^* \widetilde G_{2k}(\mathbb R^{2n+2k+1}) \cong \mathbb Q[p, p', e] \big/ (pp' - 1)$,

  $H^* G_{2k+1}(\mathbb R^{2n+2k+2}) \cong H^* \widetilde G_{2k+1}(\mathbb R^{2n+2k+2}) \cong \mathbb Q[p, p'] \big/ (pp' - 1) \otimes \Lambda[\eta]$,

  $H^* G_{2k}(\mathbb R^{2n+2k}) \cong H^* G_{2k}(\mathbb R^{2n+2k+1}) \cong \mathbb Q[p, p'] \big/ (pp' - 1)$.

Proof. Since the particular actions we were interested in were equivariantly formal, the expressions above follow from Theorem 1.2 on modding out the homogeneous ideal $(\pi - 1, \pi' - 1, \varepsilon, \varepsilon')$. □

Remark 3.2.
To illustrate this technique's versatility, lest the reader imagine it be limited to real Grassmannians, we extract some representative examples out of the class of all homogeneous spaces. We claim all the following are computable with equal facility from the model $(H^*_{H_0} \otimes H^*_{K_0} \otimes H^*G, d)$ once one has expressions for the maps $\rho^*_H$ and $\rho^*_K$:

  $H^*_{Sp(k) \times Sp(n)}\big(Sp(n+k)\big/Sp(k) \times Sp(n); \mathbb Z\big) = \mathbb Z[p, p', \pi, \pi'] \big/ (pp' - \pi\pi')$;

  $H^*_{U(2k)}\big(U(2n)/Sp(n); \mathbb Z\big) = \mathbb Z[c_2, \ldots, c_{2k}, p_1, \ldots, p_n] \big/ (c_{2j} - p_j) \otimes \Lambda[z_{2k+1}, z_{2k+3}, \ldots, z_{2n-1}]$;

  $H^*_{U(n)}\big(Sp(n)/U(n); \mathbb Z\big) = \mathbb Z[c, \kappa] \big/ (c\bar c - \kappa\bar\kappa)$;

  $H^*_{G_2}\big(Spin(9)/G_2\big) = \mathbb Q[y_4, y_{12}] \otimes \Lambda[z_7, z_{15}]$;

  $H^*_{G_2}(F_4/G_2) = \mathbb Q[y_4, y_{12}] \otimes \Lambda[z_{15}, z_{23}]$;

  $H^*_{E_6}(E_7/E_6) = \mathbb Q[y_4, y_{10}, y'_{10}, y_{12}, y_{16}, y_{18}, y'_{18}, y_{24}] \otimes \Lambda[z_{19}, z_{27}, z_{35}]$;

  $H^*_{S^1}(G/S^1) = \mathbb Q[s_2, \sigma_2] \big/ (s_2^2 - \sigma_2^2) \otimes H^*G \big/ (z_3)$, when $\operatorname{im}(H^1G \to H^1S^1) = 0$ and the action is equivariantly formal;

  $H^*_{SU(3) \times SU(3)}\big(SU(6)\big/SU(3) \times SU(3)\big) = \mathbb Q[c_2, c_3, c'_2, c'_3, \kappa_2, \kappa_3, \kappa'_2, \kappa'_3] \big/ (cc' - \kappa\kappa')$;

  $H^*_{S^1}\big(SU(3)/S^1\big) = \Lambda[v_2, w_2, y_7] \big/ (vw, yw, w^3)$, where $S^1 = \{\operatorname{diag}(z, z, z^{-2})\}$.

Here $c = 1 + c_1 + \cdots + c_n$, $c'$, $\kappa$, $\kappa'$ are total Chern classes, $\bar c = 1 - c_1 + \cdots + (-1)^n c_n$, $\bar\kappa$ are the duals, $p$, etc., are symplectic Pontrjagin classes, and $c_1 = 0$ in $H^*_{SU(3)}$. In all of these examples but the last two, the spaces are formal.

Culture

In this last section we provide some related results and cultural context. First, an alternate proof of Theorem 1.2 applies the following two results:

Theorem 4.1. Let $G$ be a connected Lie group and $K_0$ a closed, connected subgroup such that the isotropy action of $K_0$ on $G/K_0$ is equivariantly formal. Then there is a natural $(H^*_{K_0} \otimes H^*_{K_0})$-algebra isomorphism

  $H^*_{K_0}(G/K_0) \cong \big(H^*_{K_0} \otimes_{H^*_G} H^*_{K_0}\big) \otimes \Lambda \hat P$,

where $\Lambda \hat P$ is isomorphic to the image of $H^*(G/K_0) \to H^*G$.

Less-generalizable proof of Theorem 1.2. The even-dimensional cases are immediate from Theorem 4.1.
The isotropy action on the odd-dimensional oriented Grassmannian is also known to be equivariantly formal (e.g., isotropy actions on (generalized) symmetric spaces are equivariantly formal by work of Goertsches [Goe12] (resp., Goertsches-Noshari [GN15])), so one needs only note the image of $H^*(G/K_0) \to H^*G$ is $\Lambda[\eta]$ (e.g., by the analysis of $\rho^*: H^*_G \to H^*_{K_0}$ from the other proof) and apply 4.2. But we emphasize that this alternate proof is specialized to the spaces in question, whereas the model applies in great generality. Beyond the fact that in both cases the action is equivariantly formal and in the even-dimensional case $G$ and $K_0$ are of equal rank, Grassmannians are formal in the sense of rational homotopy theory (as all generalized symmetric spaces must be [Ter01, §4][Stke02, Prop. 4.1]). In fact, as will be shown in a revision of a joint work with Chi-Kwong Fok [CF15], the isotropy group $K_0$ can act equivariantly formally on $G/K_0$ only if the latter is a formal space. In terms of the map $\rho^*: H^*_G \to H^*_K$ used to define the differential, Halperin showed $G/K_0$ is formal if and only if, given any minimal set of homogeneous elements $x_j \in \widetilde H^*_G$ whose images $\rho^* x_j$ generate the ideal $(\rho^* \widetilde H^*_G) \trianglelefteq H^*_{K_0}$, the ring $H^*_{K_0}$ is in fact a free module over the subring $\mathbb Q[\rho^* x_j]$ [GHV76, Thm. 11.5.IV, pp. 463-4]. In the language of Baum's thesis [Bau68, Def. 4.9], the pair $(G, K_0)$ has deficiency 0.

Remark 4.4. Kapovitch himself applied his model to the case in which $U$ acts freely and so yields an honest biquotient manifold $G/U$. If we apply this model to $U = 1 \times K_0$ for $K_0$ a closed, connected subgroup of $G$, this yields homogeneous spaces themselves, and in this instance, as noted in Section 2, the model specializes to the Cartan algebra described in the next remark. The Cartan algebra can be seen as a consequence of a more general result due to Chevalley, which, given a model $(\Lambda V_B, d)$ of the base of a principal bundle $G \to E \to B$, produces a model $(\Lambda V_B \otimes H^*G, d)$ of the total space $E$.
This is the same model produced from Theorem 2.2 when one lets $f: B \to BG$ be the classifying map and models $q: EG \to BG$ by the map $(\Lambda \Sigma P, 0) \to (\Lambda \Sigma P \otimes \Lambda P, \tau)$ resulting from Borel's theorem. In the special case of the bundle $G \to G_{K_0} \to BK_0$, by Borel's theorem again we can model $BK_0$ by $(H^*_{K_0}, 0)$, so Chevalley's theorem returns the Cartan algebra.

The major protagonists in this chapter of the story leading to this computation of $H^*(G/K_0)$ are Leray, Chevalley, Cartan, Koszul, and Borel.⁶ The main primary sources are the proceedings of the 1950 Brussels Colloque de topologie and Borel's thesis [Cen51, Bor53]. The main secondary sources are long survey papers by André and Rashevskii, and later the tomes of Greub et al. and Onishchik [And62, Ras69, GHV76, Oni94]. This material was important in the author's dissertation, which has gradually been evolving into a book on the topic, available online [Car] and awaiting critical feedback. (The material in Section 2 on homogeneous spaces is all developed in Ch. 8.)

Remark 4.6. While the fastest way to obtain the Cartan algebra is in terms of rational homotopy theory, doing so is anachronistic. Out of general historical interest and the unbounded space afforded us by the arXiv, we summarize in this remark the original derivations of the Cartan model. Cartan's original approach [Car51] does not directly use a model of $BK$ to apply Chevalley's theorem to $G \to G_K \to BK$, because $BK$ is not a manifold and $H^*(BK)$ had not yet been calculated.⁷ Instead, he starts with an acyclic R-cdga $W\mathfrak k = \Lambda \Sigma \mathfrak k^\vee \otimes \Lambda \mathfrak k^\vee$, the Weil algebra, where $\mathfrak k^\vee$ is the dual of the Lie algebra of $K$, equipped with natural actions of $\mathfrak k$ by inner multiplications $\iota_\xi$ and the Lie derivative $L_\xi$, and meant to serve as a model for $EK$.⁸ Given a principal $K$-bundle $K \to E \xrightarrow{\pi} B$, he constructs the Weil model $\big(\Lambda \Sigma \mathfrak k^\vee \otimes \Lambda \mathfrak k^\vee \otimes \Omega^\bullet(E)\big)_{\rm bas}$ as a cdga model of $H^*_K(E; \mathbb R) \cong H^*(B; \mathbb R)$; here the subscript denotes the basic subalgebra annihilated by all $\iota_\xi$ and $L_\xi$.
The idea is that this should serve as a model for the base B, and indeed Cartan shows the natural inclusion of π*Ω(B) ≅ Ω(B) in the Weil model is a quasi-isomorphism. He then shows the Weil model is quasi-isomorphic to the Cartan model (ΛΣk^∨ ⊗ Ω•(E))^K; this in turn, when our principal bundle is K → G → G/K for G another compact, connected Lie group, is quasi-isomorphic to a commutative differential graded algebra with underlying algebra ΛΣk^∨ ⊗ H*(G).^9 This is the Cartan algebra from the preceding comment, derived from very different considerations. The claimed isomorphisms rely on the existence of principal connections, viewed as linear maps k^∨ → Ω¹(E) respecting both actions of k, and on the fact that there exist K-invariant representative forms for the classes in H*(E; R).

The predecessor of the rational homotopy theory construction is due to Borel [Bor53]. It begins with a principal G-bundle G → P → B with G connected; we will eventually want this to be the bundle G → G_K → BK for K a connected subgroup of G. One then needs to find models A(P), A(B), A(BG) of P, B, and BG. There are several options:

• the algebra A_PL of polynomial differential forms;
• the de Rham algebra Ω• of smooth forms; this works for G_K since it is a countable increasing union of smooth manifolds;
• the global sections F(P) of a fine sheaf F of connected commutative differential graded R-algebras on P; such a sheaf exists so long as P is compact Hausdorff, by topologically embedding P in some sphere S^k and restricting the de Rham algebra [Bor53, Prop. 3.1]; we again have to use a sequence of finite-dimensional approximations to model G_K → BK.

Borel originally used the third model, which was the most general known at the time; the first is more general and the second less so. Once some commutative model has been selected, note the classifying map χ of the bundle G → P →^π B induces a map to the Leray spectral sequence of π, with coefficients in the constant sheaf of the ground field, from that of the universal bundle G → EG → BG. Because nonzero differentials in the spectral sequence of the universal bundle come only from transgressions τz of primitives z of H*(G), the same holds for P → B. Lifting these primitives and their transgressions to closed forms in A(P), we can cobble together a differential subalgebra A′ ≅ A(B) ⊗ H*(G) of A(P), with the already-existing differential on A(B) and dz = χ*τ̃z for τ̃z some fixed lift to A(BG) of the transgression τz ∈ QH*(BG); it is to force A′ to be a differential subalgebra that we needed commutativity on the nose. The degree filtration of A(B) induces a filtration of both A′ and A(P), so the inclusion of the one in the other induces a map of filtration spectral sequences.

6 There is a later chapter in the story of the cohomology of a homogeneous space, due to authors including Paul Baum, Peter May, Victor Gugenheim, Hans Munkholm, and Joel Wolf, which derives from the collapse of the Eilenberg–Moore spectral sequence of the bundle G/K_0 → BK_0 → BG at E_2 = Tor_{H*(BG;k)}(k, H*(BK_0; k)), under certain conditions on the coefficient ring (which we passed to Q immediately to bypass), and which is correspondingly more subtle. Note that the cohomology of the Cartan algebra can be expressed as Tor_{H_G}(Q, H_{K_0}), since the Koszul complex (H_G ⊗ H*(G), τ) yields a resolution of Q as an H_G-module.

7 There also seems to have been a desire to stay in the realm of manifolds, so that finite-dimensional truncations of BK are mentioned instead. In Chevalley's review of this work, he notes that BK does not exist, a statement that only makes sense if one demands finite-dimensionality.

8 The Weil model is in fact isomorphic as a commutative differential graded algebra to the model (ΛΣP ⊗ ΛP, τ) of EK, but not in the most obvious way.

9 For generic E, one can find a differential on the graded vector space ΛΣk^∨ ⊗ H*(E; R) whose cohomology is H_K(E; R) ≅ H*(B; R), but this isomorphism does not generally respect multiplication.
By construction, this map is an isomorphism from E_2 ≅ A′ onward, so the inclusion is a quasi-isomorphism. But the filtration spectral sequence on A(P) agrees with the Leray spectral sequence of G → P → B from E_2 on, so A′ is a model of P.^10 The existence of this model when B is a manifold is an unpublished result due to Chevalley [Kos51, p. 70][Bor53, p. 183], which Cartan also invokes in his derivation. In the special case of G → G_K → BK, because there is even a quasi-isomorphic injection (H_K, 0) ↪ A(BK) (this is a strong form of formality), one can replace A′ with A″ = H_K ⊗ H*(G), where the differential vanishes on H_K and sends z ∈ PH*(G) to χ*τ̃z for τ̃z some lift in H_G of the transgression τz. This is again the Cartan algebra.

Borel makes a generalization extracting a submodel A(B) ⊗ H*(F) of A(E), for a fiber bundle F → E → B, so long as H*(F) is an exterior algebra on generators that transgress in the Serre spectral sequence. The map (2.1) and the analysis showing PH*(G) transgresses in the Serre spectral sequence of G → BG → BG^2 then give us a model A(BU) ⊗ H*(G) of G_U as an instance of Borel's theorem. Replacing A(BU) with (H_U, 0) then returns Kapovitch's model.

Proof of Proposition 1.3. The torsion of the fiber and base of the bundle G → _{K_0}G_{K_0} → BK_0 × BK_0 is all of order two [Mil53, Thm. 3.2][Tho60, Thms. A, B, 12.1][BJ82, Thms. 1.5, 6], so the Serre spectral sequence of the bundle tells us the torsion of _{K_0}G_{K_0} must be of 2-primary order as well [Hata, Lem. 1.9]. As the covering map _{K_0}G_{K_0} → _K G_K is 4-sheeted, it induces an injection in cohomology with Z[1/2] coefficients [Hat02, Prop. 3G.1].

Remark 4.5. The expressions in Corollary 3.1 for even-dimensional oriented Grassmannians are special cases of general considerations due to Leray [Bor53, Thm. 26.1], and the odd-dimensional case seems to have first been computed by Takeuchi [Tak62, p. 320], using the Cartan algebra H*(H_{K_0} ⊗ H*(G), d) ≅ H*(G/K_0).
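As a concrete illustration of the Cartan algebra just invoked, consider the smallest nontrivial case (a standard toy computation, not taken from this paper): G = SU(2) and K_0 = T a maximal torus, so that G/T ≅ S². Here H_G = H*(BSU(2); Q) = Q[c] with deg c = 4, H_T = Q[t] with deg t = 2, and ρ*c = t² (up to sign).

```latex
% Cartan algebra for G = SU(2), K_0 = T, G/T \cong S^2 (illustrative only):
\[
  \bigl(H_T \otimes H^{*}(SU(2)),\, d\bigr)
    = \bigl(\mathbb{Q}[t] \otimes \Lambda[z],\, d\bigr),
  \qquad \deg z = 3,\quad dt = 0,\quad dz = \rho^{*}c = t^{2},
\]
\[
  H\bigl(\mathbb{Q}[t] \otimes \Lambda[z],\, d\bigr)
    \cong \mathbb{Q}[t]/(t^{2})
    \cong H^{*}(S^{2}; \mathbb{Q}) = H^{*}(G/T; \mathbb{Q}).
\]
% The equivariant analogue, via the Kumar--Vergne isomorphism of
% Theorem 4.1, reads
\[
  H_T(SU(2)/T) \cong H_T \otimes_{H_G} H_T
    = \mathbb{Q}[t_1] \otimes_{\mathbb{Q}[c]} \mathbb{Q}[t_2]
    \cong \mathbb{Q}[t_1, t_2]/(t_1^{2} - t_2^{2}).
\]
```

The last ring agrees with the two-fixed-point (GKM) description of H_T(S²): pairs (f, g) ∈ Q[t]² with f ≡ g mod t, via t_1 ↦ (t, t) and t_2 ↦ (t, −t).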
That the model computes H*(G/K_0) was first proved by Cartan [Car51, Thm. 5, p. 216][Bor53, Thm. 25.2] for reductive, connected Lie groups G and K_0, the latter closed in the former; the Poincaré polynomials for G̃_ℓ(R^m) are already included at the end of Cartan's transgression paper [Car51, p. 219].

Proposition 1.3. The same isomorphisms hold with coefficients in Z[1/2].^1

Remark 1.4. The equivariant cohomology in the original statement of the theorem is with respect to a maximal torus T of K_0; these expressions are recoverable, once we have explicit forms for the injections H_K ↪ H_{K_0} ↪ H_T, via the isomorphism [Hsi75, Prop. III.1, p. 31] H_T(G/K_0) ≅ H_T ⊗_{H_{K_0}} H_{K_0}(G/K_0) for the oriented Grassmannian, and for the unoriented the fact that _{K_0}G_K → _K G_K is a double cover identifying H*(_K G_K) with the invariant subring H*(_{K_0}G_K)^{Z/2}. The map H_{K_0} ↪ H_T is the tensor product of the maps H_{SO(2n+α)} ↪ H_{T^n} and H_{SO(2k+β)} ↪ H_{T^k}, both admitting the same description. Recall [Hatb, Thm. 3.16] that the canonical inclusions of

H*(BSO(2n); Z)/(2-torsion) ≅ Z[p_1, …, p_{n−1}, e], deg p_j = 4j, deg e = 2n,
H*(BSO(2n+1); Z)/(2-torsion) ≅ Z[p_1, …, p_{n−1}, p_n], deg p_j = 4j

Sullivan models behave well with respect to fibrations and pullbacks.^2

Theorem 2.2 ([FHT01, Prop. 15.5,8]). Given a map of Serre fibrations

Theorem 4.1 ([KV93, Prop. 68, p. 161]). Let G be a connected Lie group, K a closed subgroup of equal rank, and H another closed subgroup. Then there is a ring isomorphism H_K(G/H) ≅ H_K ⊗_{H_G} H_H.

Theorem 4.2 ([Car, Thm. 11.1…]).

Remark 4.3. Kumar–Vergne prove Theorem 4.1 with real coefficients, but it follows rationally [Car, Thm. 6.2.6] from the pullback of the diagram BH → BG ← BK. First observe that for K_0 the identity component of K, one has H_{K_0} → H*(G/K_0) surjective [Bor53, Thm. 26.1]. Then H_K → H*(G/K) can be identified [Hat02, Prop. 3G.1] with the map of π_0K-invariants, which by an averaging argument is again surjective.
From this, the equivariant Künneth theorem^5 applies to the pullback diagram, and the pullback is homotopy equivalent to _K G_H. The result can also be obtained from fixed-point theory on generalized flag varieties. Tu proves [Tu10, Thm. 11] that if G is a connected Lie group with maximal torus T and K a closed, connected subgroup containing T, then one has the expression H_T(G/K) ≅ H_T ⊗_{H_G} H_K. Recalling the natural isomorphism H_H(X) ≅ H_T(X)^{W_H}, where W_H is the Weyl group [Ler49b, Thm. 1][Hsi75, Prop. III.1, p. 31], we recover Theorem 4.1 with the restriction that the groups be connected.

It seems likely that all the torsion is in fact of order 2, so the statement will remain true if one replaces the cohomology with integral cohomology modulo torsion of order 2 on the left-hand side, and Q with Z in the ring expressions on the right.

We only need the following result for the pure Sullivan models we have defined, but it holds for all Sullivan models.

We cite the earliest version of this result, which requires G not to have exceptional factors, as it applies to our special case; later versions [Ler51, §2][Bor53, Thm. 26.1] remove this restriction (and have published proofs).

In fact, for a positively-graded module over a connected graded algebra, flatness also implies freedom [NS02, Prop. A.1.5], which can also be seen from the collapse of the Eilenberg–Moore spectral sequence and the fact Tor* = Tor^0 because H_K is free over H_G.

The reader may note we do not call this the Serre spectral sequence. The reason is that while the distinction is immaterial from the E_2 page on, the version of the Leray spectral sequence Borel invokes here uses the filtration of A by the ideals generated by π*A^{≥p}(B), while the Serre spectral sequence as usually understood now replaces B with a CW approximation with skeleta B_p and then filters by ker(A(P) → A(π^{−1}B_{p−1})).
While the former filtration is clearly contained in the latter and the induced spectral sequences agree after E_2, Borel uses the former, which makes it clearer that A′ ↪ A(P) is a quasi-isomorphism.

References

[And62] Michel André, Cohomologie des algèbres différentielles où opère une algèbre de Lie, Tôhoku Math. J. (2) 14 (1962), no. 3, 263–311.
[Bau68] Paul F. Baum, On the cohomology of homogeneous spaces, Topology 7 (1968), no. 1, 15–38.
[BJ82] Edgar H. Brown Jr., The cohomology of BSO_n and BO_n with integer coefficients, Proc. Amer. Math. Soc. (1982), 283–288.
[Bor53] Armand Borel, Sur la cohomologie des espaces fibrés principaux et des espaces homogènes de groupes de Lie compacts, Ann. of Math. (2) 57 (1953), no. 1, 115–207.
[Car] Jeffrey D. Carlson, On the equivariant cohomology of homogeneous spaces, manuscript monograph: https://www.dropbox.com/s/bczvscaxuixibac/book.pdf?dl=0.
[Car51] Henri Cartan, La transgression dans un groupe de Lie et dans un espace fibré principal, Colloque de topologie (espaces fibrés), Bruxelles 1950 (Liège/Paris), Centre belge de Recherches mathématiques, Georges Thone/Masson et compagnie, 1951, pp. 57–71.
[Cen51] Centre belge de Recherches mathématiques, Colloque de topologie (espaces fibrés), Bruxelles 1950, Liège/Paris, Georges Thone/Masson et compagnie, 1951.
[CF15] Jeffrey D. Carlson and Chi-Kwong Fok, Equivariant formality of homogeneous spaces, https://arxiv.org/abs/1511.06228, November 2015.
[Esc92] Jost-Hinrich Eschenburg, Cohomology of biquotients, Manuscripta Math. 75 (1992), no. 2, 151–166.
[FHT01] Yves Félix, Steve Halperin, and Jean-Claude Thomas, Rational homotopy theory, Grad. Texts in Math., vol. 205, Springer, 2001.
[GHV76] Werner H. Greub, Stephen Halperin, and Ray Vanstone, Connections, curvature, and cohomology, vol. III: Cohomology of principal bundles and homogeneous spaces, Academic Press, 1976.
[GN15] Oliver Goertsches and Sam Haghshenas Noshari, On the equivariant cohomology of isotropy actions on homogeneous spaces defined by Lie group automorphisms, http://arxiv.org/abs/1405.2655v2, Apr 2015.
[Goe12] Oliver Goertsches, The equivariant cohomology of isotropy actions on symmetric spaces, Doc. Math. 17 (2012), 79–94.
[Hata] Allen Hatcher, Spectral sequences in algebraic topology, math.cornell.edu/~hatcher/SSAT/SSATpage.html.
[Hatb] Allen Hatcher, Vector bundles and K-theory, math.cornell.edu/~hatcher/VBKT/VBpage.html.
[Hat02] Allen Hatcher, Algebraic topology, Cambridge Univ. Press, 2002.
[He16] Chen He, GKM theory, characteristic classes and the equivariant cohomology ring of the real Grassmannian, https://arxiv.org/abs/1609.06243, September 2016.
[Hop41] Heinz Hopf, Über die Topologie der Gruppen-Mannigfaltigkeiten und ihre Verallgemeinerungen, Ann. of Math. 42 (1941), no. 1, 22–52 (German).
[Hsi75] Wu-Yi Hsiang, Cohomology theory of topological transformation groups, Springer, 1975.
[Kap] Vitali Kapovitch, A note on rational homotopy of biquotients, http://www.math.toronto.edu/vtk/biquotient.pdf.
[Kos51] Jean-Louis Koszul, Sur un type d'algèbres différentielles en rapport avec la transgression, Colloque de topologie (espaces fibrés), Bruxelles 1950 (Liège/Paris), Centre belge de Recherches mathématiques, Georges Thone/Masson et compagnie, 1951, pp. 73–81.
[KV93] Shrawan Kumar and Michèle Vergne, Equivariant cohomology with generalized coefficients, Astérisque 215 (1993), 109–204.
[Ler49a] Jean Leray, Détermination, dans les cas non exceptionnels, de l'anneau de cohomologie de l'espace homogène quotient d'un groupe de Lie compact par un sous-groupe de même rang, C. R. Acad. Sci. Paris 228 (1949), no. 25, 1902–1904.
[Ler49b] Jean Leray, Sur l'anneau de cohomologie des espaces homogènes, C. R. Acad. Sci. Paris 229 (1949), no. 4, 281–283.
[Ler51] Jean Leray, Sur l'homologie des groupes de Lie, des espaces homogènes et des espaces fibrés principaux, Colloque de topologie (espaces fibrés), Bruxelles 1950 (Liège/Paris), Centre belge de Recherches mathématiques, Georges Thone/Masson et compagnie, 1951, pp. 101–115.
[Mil53] Clair E. Miller, The topology of rotation groups, Ann. of Math. (1953), 90–114.
[NS02] Mara D. Neusel and Larry Smith, Invariant theory of finite groups, Mathematical Surveys and Monographs, vol. 94, Amer. Math. Soc., 2002.
[Oni94] Arkadi L. Onishchik, Topology of transitive transformation groups, Johann Ambrosius Barth, 1994.
[Ras69] Petr Konstantinovich Rashevskii, The real cohomology of homogeneous spaces, Russian Mathematical Surveys 24 (1969), no. 3, 23.
[Ste02] Zofia Stępień, On formality of a class of compact homogeneous spaces, Geom. Dedicata 93 (2002), no. 1, 37–45.
[Tak62] Masaru Takeuchi, On Pontrjagin classes of compact symmetric spaces, J. Fac. Sci. Univ. Tokyo Sect. I 9 (1962), 313–328.
[Ter01] Svjetlana Terzić, Cohomology with real coefficients of generalized symmetric spaces, Fundam. Prikl. Mat. 7 (2001), no. 1, 131–157 (in Russian).
[Tho60] Emery Thomas, On the cohomology of the real Grassmann complexes and the characteristic classes of n-plane bundles, Trans. Amer. Math. Soc. 96 (1960), no. 1, 67–89.
[Tu10] Loring W. Tu, Computing characteristic numbers using fixed points, A celebration of the mathematical legacy of Raoul Bott (Peter Robert Kotiuga, ed.), C.R.M. Proceedings and Lecture Notes, vol. 50, Centre de Recherches Mathématiques Montréal, Amer. Math. Soc., 2010, pp. 185–206.
[]
[ "INFERRING THE NEUTRON STAR EQUATION OF STATE FROM BINARY INSPIRAL WAVEFORMS", "INFERRING THE NEUTRON STAR EQUATION OF STATE FROM BINARY INSPIRAL WAVEFORMS" ]
[ "Charalampos Markakis \nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nPO Box 41353201MilwaukeeWIUSA\n", "Jocelyn S Read \nMax-Planck-Institut für Gravitationsphysik\nAlbert-Einstein-Institut Golm\nGermany\n", "Masaru Shibata \nYukawa Institute for Theoretical Physics\nKyoto University\n606-8502KyotoJapan\n", "Kōji Uryū \nDepartment of Physics\nUniversity of the Ryukyus\n1 Senbaru903-0213NishiharaOkinawaJapan\n", "Jolien D E Creighton \nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nPO Box 41353201MilwaukeeWIUSA\n", "John L Friedman \nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nPO Box 41353201MilwaukeeWIUSA\n" ]
[ "Department of Physics\nUniversity of Wisconsin-Milwaukee\nPO Box 41353201MilwaukeeWIUSA", "Max-Planck-Institut für Gravitationsphysik\nAlbert-Einstein-Institut Golm\nGermany", "Yukawa Institute for Theoretical Physics\nKyoto University\n606-8502KyotoJapan", "Department of Physics\nUniversity of the Ryukyus\n1 Senbaru903-0213NishiharaOkinawaJapan", "Department of Physics\nUniversity of Wisconsin-Milwaukee\nPO Box 41353201MilwaukeeWIUSA", "Department of Physics\nUniversity of Wisconsin-Milwaukee\nPO Box 41353201MilwaukeeWIUSA" ]
[]
The properties of neutron star matter above nuclear density are not precisely known. Gravitational waves emitted from binary neutron stars during their late stages of inspiral and merger contain imprints of the neutron-star equation of state. Measuring departures from the point-particle limit of the late inspiral waveform allows one to measure properties of the equation of state via gravitational wave observations. This and a companion talk by J. S. Read reports a comparison of numerical waveforms from simulations of inspiraling neutron-star binaries, computed for equations of state with varying stiffness. We calculate the signal strength of the difference between waveforms for various commissioned and proposed interferometric gravitational wave detectors and show that observations at frequencies around 1 kHz will be able to measure a compactness parameter and constrain the possible neutron-star equations of state.
10.1142/9789814374552_0046
[ "https://arxiv.org/pdf/1008.1822v1.pdf" ]
118,422,316
1008.1822
977ee4ed78bfbe28d92ef24715226529e3e2d747
INFERRING THE NEUTRON STAR EQUATION OF STATE FROM BINARY INSPIRAL WAVEFORMS

11 Aug 2010; August 12, 2010

Charalampos Markakis, Department of Physics, University of Wisconsin-Milwaukee, PO Box 413, Milwaukee, WI 53201, USA
Jocelyn S. Read, Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, Golm, Germany
Masaru Shibata, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan
Kōji Uryū, Department of Physics, University of the Ryukyus, 1 Senbaru, Nishihara, Okinawa 903-0213, Japan
Jolien D. E. Creighton, Department of Physics, University of Wisconsin-Milwaukee, PO Box 413, Milwaukee, WI 53201, USA
John L. Friedman, Department of Physics, University of Wisconsin-Milwaukee, PO Box 413, Milwaukee, WI 53201, USA

The properties of neutron star matter above nuclear density are not precisely known. Gravitational waves emitted from binary neutron stars during their late stages of inspiral and merger contain imprints of the neutron-star equation of state. Measuring departures from the point-particle limit of the late inspiral waveform allows one to measure properties of the equation of state via gravitational wave observations. This and a companion talk by J. S. Read report a comparison of numerical waveforms from simulations of inspiraling neutron-star binaries, computed for equations of state with varying stiffness. We calculate the signal strength of the difference between waveforms for various commissioned and proposed interferometric gravitational wave detectors and show that observations at frequencies around 1 kHz will be able to measure a compactness parameter and constrain the possible neutron-star equations of state.
In simulations of the late inspiral and merger of binary neutron-star systems, one typically specifies an equation of state (EOS) for the matter, performs a numerical evolution, and extracts the gravitational waveforms produced in the inspiral. In this talk we report work on the inverse problem: if gravitational waves from an inspiraling neutron-star binary are observed, can they be used to infer the bulk properties of neutron star matter and, if so, with what accuracy? To answer this question, we performed a number of simulations [1-3], using the evolution and initial data codes of Shibata and Uryū, while systematically varying the stiffness of a parameterized EOS. This parameterized EOS was previously developed in Refs. [3,4] and is of piecewise polytropic form, p(ρ) = K_i ρ^{Γ_i}, on a set of three intervals ρ_{i−1} ≤ ρ ≤ ρ_i in rest-mass density, with the constants K_i determined by requiring continuity at each dividing density ρ_i, and the energy density determined by the first law of thermodynamics. As described in Refs. [3,4], the nonpolytropic EOS of the crust (0 ≤ ρ ≤ ρ_0) as well as the dividing densities ρ_1, ρ_2 are fixed, while the parameters {p_1 ≡ p(ρ_1), Γ_1, Γ_2, Γ_3} are generally varied. In this first set of simulations we set Γ_1 = Γ_2 = Γ_3 = 3 and change the EOS stiffness by varying only p_1, while keeping the Schwarzschild mass of each neutron star fixed at 1.35 M_⊙. The choice of EOS parameter varied in this work is motivated by the fact that neutron-star radius is closely tied to the pressure at density not far above nuclear equilibrium density [5]. Variation of the adiabatic exponents is the subject of a subsequent set of simulations. We compared the gravitational waveforms from the simulations to point-particle waveforms (see, for example, Ref. [6]) and calculated the signal strength of the difference in waveforms using the sensitivity curves of commissioned and proposed gravitational wave detectors.
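The continuity conditions that determine the constants K_i can be sketched in a few lines (a minimal illustration, not the production code of the simulations; the parameter values below are arbitrary and in arbitrary units):

```python
# Piecewise-polytrope sketch: continuity of p(rho) = K_i * rho**Gamma_i at
# the dividing densities rho1, rho2 fixes K2 and K3 once p1 = p(rho1) and
# the exponents are chosen.  All values are illustrative, not physical.

def make_piecewise_polytrope(p1, gammas, rho_divs):
    """Return p(rho) for a three-piece polytrope (arbitrary units)."""
    g1, g2, g3 = gammas
    rho1, rho2 = rho_divs
    k1 = p1 / rho1 ** g1          # fixes the first constant from p(rho1) = p1
    k2 = k1 * rho1 ** (g1 - g2)   # continuity at rho1
    k3 = k2 * rho2 ** (g2 - g3)   # continuity at rho2

    def pressure(rho):
        if rho <= rho1:
            return k1 * rho ** g1
        if rho <= rho2:
            return k2 * rho ** g2
        return k3 * rho ** g3

    return pressure
```

With Γ_1 = Γ_2 = Γ_3, as in the first set of simulations, all three pieces share one polytropic constant, and varying p_1 alone rescales the whole high-density pressure.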
We find that, as the stars approach their final plunge and merger, the gravitational phase accumulates more rapidly for larger values of p_1 or smaller values of the neutron-star compactness (the ratio of the neutron-star mass to its radius). The waveform analysis indicates that realistic EOS will result in waveforms that are distinguishable from point-particle inspiral at an effective distance (the distance to an optimally oriented and located system that would produce an equivalent waveform amplitude) of D_0 = 100 Mpc or less with gravitational wave detectors with the sensitivity of broadband Advanced LIGO. We further estimate that observations of this sensitivity will be able to constrain p_1 for a source at effective distance D with an accuracy of δp_1/p_1 ∼ 0.2 D/D_0. Related estimates of radius measurability show that such observations can determine the radius to an accuracy of δR ∼ 1 km × D/D_0. These first estimates neglect other details of internal structure which are expected to give smaller tidal effect corrections. This is the subject of work underway, which involves improving the accuracy of the estimates with variation of the adiabatic exponents, determination of surfaces in the equation of state (EOS) parameter space associated with a given departure from the waveform of point-particle inspiral, and numerical simulation of more orbits in the late inspiral. Also, the results mentioned above do not take into account multiple detectors, parameter correlation, or multiple observations. The latter possibility is briefly discussed below. In the calculations mentioned above we estimated the error σ_0 in measuring an EOS parameter (such as p_1 or, more precisely, a related parameter that labels surfaces of constant departure from point-particle inspiral) from observation of one event at a reference effective distance D_0 = 100 Mpc. Here we wish to estimate the effect of multiple observations on the measurement accuracy.
If N_i identical events, each with measurement uncertainty σ_i, occurred at the same effective distance D_i, then the overall uncertainty (standard error) of the combined measurement would be σ_i/√N_i. However, events do not occur at the same effective distance. Instead, we shall assume that events are homogeneously distributed in a sphere of effective radius D_max ≃ 300 Mpc. (A uniform probability distribution of events in space is also uniform in effective space.) We divide this sphere into I shells of effective distance D_i = D_max (i − 1/2)/I with i = 1, ..., I and assume that detections will only be counted for sources with effective distance smaller than D_max. Because uncertainty scales linearly with effective distance, we have σ_i = σ_0 D_i/D_0. Combining measurements at different distances will then result in an overall uncertainty σ given by

1/σ² = Σ_{i=1}^{I} N_i/σ_i² = (D_0²/σ_0²) Σ_{i=1}^{I} N_i/D_i²,   (1)

where N_i is the number of events in the i-th shell. Note that the N_i are random variables, so σ is also a random variable with some probability distribution. The total number of events in a given year,

N = Σ_{i=1}^{I} N_i,   (2)

is itself a random variable and is Poisson-distributed around the rate of events R ≡ ⟨N⟩ (average number of events per year). The probability distribution function P(σ|R) of σ given R, and its moments ⟨σ⟩, ⟨σ²⟩, etc., are more easily computed via Monte-Carlo simulation rather than analytically. However, an analytical estimate for the moment ⟨σ^{−2}⟩ can be obtained by assuming a constant density n of events per year per unit effective volume and converting the sums (2) and (1) to integrals:

N = ∫_0^{D_max} 4πn D² dD = (4πn/3) D_max³,   (3)

1/σ² = (D_0²/σ_0²) ∫_0^{D_max} (1/D²) 4πn D² dD = (D_0²/σ_0²) 4πn D_max.   (4)

Eliminating n from eqs.
(3) and (4) yields the simple formula

⟨σ^{−2}⟩^{−1/2} = σ_0 (D_max/D_0) (3R)^{−1/2}.   (5)

Our Monte-Carlo simulations confirm that, while ⟨σ⟩, ⟨σ²⟩^{1/2} and other moments do not in general scale proportionately to R^{−1/2} for fixed D_max, the moment ⟨σ^{−2}⟩^{−1/2} scales exactly as dictated by eq. (5). This formula indicates that, for example, three events randomly distributed in a sphere of effective radius D_max = 3D_0 give the same "average" uncertainty ⟨σ^{−2}⟩^{−1/2} = σ_0 as one event at effective distance D_0. Although the above calculation was done for measurement of a single parameter, it can be straightforwardly generalized for more parameters, by replacing the uncertainty σ² with a Fisher information matrix. As noted above, we have restricted consideration to neutron stars with a single fixed mass. Independent variation of the mass of each companion is the subject of future work.

References

[1] J. S. Read et al., Phys. Rev. D 79, p. 124033 (2009).
[2] C. Markakis et al., Journal of Physics: Conference Series 189, p. 012024 (2009).
[3] J. S. Read, PhD thesis, University of Wisconsin-Milwaukee (WI, USA, 2008).
[4] J. S. Read et al., Phys. Rev. D 79, p. 124032 (2009).
[5] J. M. Lattimer and M. Prakash, The Astrophysical Journal 550, 426 (2001).
[6] M. Boyle et al., Phys. Rev. D 76, p. 124038 (2007).
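The Monte-Carlo check of eq. (5) can be reproduced in outline as follows (a sketch under the stated assumptions: Poisson-distributed yearly counts, sources uniform in effective volume, σ proportional to effective distance; the numbers are illustrative, not detector values):

```python
# Monte-Carlo estimate of <1/sigma^2> for events Poisson in number and
# uniform in effective volume, each contributing 1/sigma_i^2 with
# sigma_i = sigma0 * D_i / D0.  Illustrative sketch only.
import math
import random

def sample_poisson(rng, lam):
    """Poisson sample by inversion (adequate for small rates)."""
    n, p, c = 0, math.exp(-lam), math.exp(-lam)
    u = rng.random()
    while u > c:
        n += 1
        p *= lam / n
        c += p
    return n

def mean_inv_sigma2(rate, d_max, d0, sigma0, trials=20000, seed=1):
    """Estimate <1/sigma^2> over yearly realizations of detected events."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        for _ in range(sample_poisson(rng, rate)):
            d = d_max * rng.random() ** (1.0 / 3.0)  # uniform in volume
            acc += (d0 / (sigma0 * d)) ** 2          # one event's 1/sigma_i^2
    return acc / trials

# Analytic prediction from eq. (5):
#   <1/sigma^2> = 3 R (d0 / d_max)^2 / sigma0^2.
```

For R = 3 and D_max = 3D_0 the prediction is ⟨σ^{−2}⟩ = σ_0^{−2}, the "three events" example quoted above; the estimator converges slowly because individual 1/σ_i² draws are heavy-tailed.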
[]
[ "Dark matter pair production at the ILC in the littlest Higgs model with T-parity", "Dark matter pair production at the ILC in the littlest Higgs model with T-parity" ]
[ "Qing-Peng Qiao \nDepartment of Physics\nHenan Normal University\n453003XinxiangChina\n\nDepartment of Physics\nHenan Institute of Education\n450046ZhengzhouChina\n", "Bin Xu \nDepartment of Mathematics and Information Sciences\nNorth China Institute of Water Conservancy and Hydroelectric Power\n450011ZhengzhouChina\n" ]
[ "Department of Physics\nHenan Normal University\n453003XinxiangChina", "Department of Physics\nHenan Institute of Education\n450046ZhengzhouChina", "Department of Mathematics and Information Sciences\nNorth China Institute of Water Conservancy and Hydroelectric Power\n450011ZhengzhouChina" ]
[]
In the Littlest Higgs model with T-parity, the heavy photon (A H ) is supposed to be a possible dark matter (DM) candidate. A direct proof of validity of the model is to produce the heavy photon at accelerator. In this paper, we study the production rate of e + e − → A H A H at the international e + e − linear collider (ILC) in the Littlest Higgs model with T-parity and show the distributions of the transverse momenta of A H . The numerical results indicate that the heavy photon production rate could reach the level of 10 −1 f b at some parameter space, so it can be a good chance to observe the heavy photon via the pair production process with the high luminosity at the ILC (500 f b −1 ) .We know that DM is composed of weakly-interacting massive particles (WIMPs), so the interactions with standard model (SM) particles are weakness, how to detect heavy photon at a collider and distinguish it from other DM candidates are simply discussed in the final sector of the paper.
10.1088/1674-1137/37/3/033103
[ "https://arxiv.org/pdf/1105.3555v3.pdf" ]
119,192,391
1105.3555
cb776a66432dd61355e9fbfbbd2b68bc0cb84732
Dark matter pair production at the ILC in the littlest Higgs model with T-parity

6 Oct 2012 / January 21, 2013

Qing-Peng Qiao, Department of Physics, Henan Normal University, 453003 Xinxiang, China; Department of Physics, Henan Institute of Education, 450046 Zhengzhou, China
Bin Xu, Department of Mathematics and Information Sciences, North China Institute of Water Conservancy and Hydroelectric Power, 450011 Zhengzhou, China

PACS numbers: 13.66.Fg, 13.66.Hk, 95.35.+d
Keywords: Littlest Higgs, T-parity, dark matter, ILC

Introduction

The SM of particle physics, including the strong and electroweak interactions, is the SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y gauge model, which has been extensively tested during the past 30 years. The great success of the SM in essentially all fields of particle physics shows that it is a valid theory.
However, on the other hand, because so many parameters in the theory are free and must be determined phenomenologically, one can regard it as only an effective theory. Besides all the questions that prompt theorists to look for a more fundamental theory, there is a famous problem, namely the hierarchy problem. The Higgs of the SM gets a mass through a "bare" mass term and through interactions with other particles, but because of its scalar nature, the mass it receives through interactions is as large as the largest mass scale in the theory. In order to obtain the relatively low Higgs mass, the bare mass must be fine-tuned to many decimal places so as to accurately cancel the very large interaction terms. Meanwhile, astrophysical observations, for example the rotation curves of galaxies [1] and gravitational lensing effects [2], show that about 23% of the energy density of the universe is composed of DM. One of the most fundamental problems in cosmology and particle physics today is the nature of DM. Many studies point out that DM should have the following characteristics: non-luminous, non-baryonic, non-relativistic, and electrically neutral [3,4,5,6,7]. Extensive astronomical evidence indicates that DM is a stable heavy particle that interacts weakly with SM particles; that is to say, such a particle is a WIMP [8]. The microscopic composition of DM remains a mystery, and the SM does not offer an appropriate candidate to account for DM; however, many theories that extend the SM contain new particles with the proper properties to play the role of DM. As mentioned above, new physics models are needed to solve the fine-tuning problem and provide candidates for DM. The best-known example that can solve the above-mentioned problems is a supersymmetry (SUSY) model with R-parity [9]. The minimal supersymmetric standard model (MSSM) with R-parity [10,11] is such a model.
Supersymmetric partners are introduced, and the contribution of each supersymmetric partner cancels the contribution of the corresponding ordinary particle, so the fine-tuning problem no longer arises. On the other hand, a discrete gauge symmetry named R-parity is included, so the lightest neutralino, a Majorana fermion (χ⁰₁), is supposed to be the lightest stable SUSY particle and to serve as the DM component; the neutralinos (χ⁰₁, χ⁰₂, χ⁰₃, χ⁰₄) are the fermionic SUSY partners of the neutral gauge and CP-even Higgs bosons, and χ⁰₁ is the lightest of them. A little Higgs model [12,13] with T-parity [14,15] provides a successful alternative to SUSY with R-parity in solving the above problems of the SM. In the initial little Higgs model (LHM), the Higgs originates as a pseudo-Nambu-Goldstone boson (PNGB) of a spontaneously broken global symmetry. The global symmetry is explicitly broken by two sets of interactions, with each set preserving an unbroken subgroup of the global symmetry. The Higgs is an exact NGB when either set of couplings is absent. The hierarchy problem is addressed by introducing new heavy particles at the TeV scale with the same spins as the corresponding SM particles. One-loop quadratic divergences of the Higgs boson mass are canceled by these new particles. Unfortunately, these initial models suffered from strict restrictions from the precision electroweak fits. An elegant resolution of the problem is to impose a Z₂ discrete symmetry named T-parity (analogous to R-parity in SUSY), requiring SM particles to be T-even and the new heavy particles to be T-odd, which forbids all tree-level corrections to the electroweak precision observables. If T-parity is conserved, the lightest T-odd particle (LTP) of an LHM is stable and does not decay, and it plays the role of the DM candidate.
The littlest Higgs model with T-parity (LHT) [16,17] is such a model; it is a modification of the original Littlest Higgs model. One of the unique signatures of the theory is the existence of a neutral, lightest heavy U(1) gauge boson, the heavy photon A_H, together with some other new particles. Searching for them experimentally would provide direct evidence for judging the validity of the theory. There are several ways to study DM; for example, one can use relativistic mean-field theory to determine the nuclear form factor for the detection of dark matter [18]. Astronomical observations and constraints provide another way. In our previous work, we studied the time-evolution of the heavy photon in the LHT [19]. Finally, there is a good opportunity to combine theoretical models with the upcoming DM direct and indirect detection experiments and with results from present and future colliders in the TeV range. Because all the new particles are very heavy, they escaped detection (if they exist) at previous accelerators. By a general analysis, their masses, just like those of the particles predicted by other models, may be in the TeV region, so one can expect to observe them at high-energy colliders. The production of a heavy photon pair at the large hadron collider (LHC), pp → A_H A_H + X, and the associated production of Q_H A_H at the LHC have been discussed in Refs. [16] and [20], respectively, where Q_H is the partner of the light quark. Because of the complex background, it might be difficult for the LHC to confirm the theory, so the ILC will be a proper tool to better understand the properties of the LHT. In this work, we analyze the production of the heavy photon pair at the ILC.

M(Z_H) ≈ M(W±_H) ≈ g f,   M(A_H) ≈ g′ f/√5 ≈ 0.16 f,   (1)

where g and g′ are the gauge couplings of the SM SU(2)_L and U(1)_Y.
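As a quick numerical illustration of Eq. (1) (a sketch, not from the paper: the values of g and g′ below are the usual weak-scale SM couplings, assumed here), the heavy-boson masses follow directly from the symmetry-breaking scale f:

```python
import math

# Leading-order heavy gauge boson masses in the LHT, eq. (1):
#   M(Z_H) ~ M(W_H) ~ g f,   M(A_H) ~ g' f / sqrt(5) ~ 0.16 f.
# g, g' are the SM SU(2)_L and U(1)_Y couplings (assumed weak-scale values).
g, g_prime = 0.652, 0.357
f = 1000.0  # symmetry-breaking scale in GeV, as used later in the paper

M_ZH = g * f                      # heavy Z / heavy W mass, ~0.65 f
M_AH = g_prime * f / math.sqrt(5) # heavy photon mass, ~0.16 f
print(M_ZH, M_AH)
```

The factor 1/√5 is what makes A_H much lighter than Z_H and W±_H, and hence the natural LTP.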
After electroweak symmetry breaking at the scale v ≪ f, the masses of the new heavy gauge bosons, as well as the SM gauge boson masses, receive corrections of order v²/f² and can be written as

M(Z_H) = M(W±_H) = g f (1 − v²/(8f²)) ≈ 0.65 f,   M(A_H) = (g′ f/√5)(1 − 5v²/(8f²)) ≈ 0.16 f.   (2)

Since the masses of the other T-odd particles are generically of order f, A_H can be assumed to be the LTP and regarded as an ideal candidate for WIMP cold DM. The mirror fermions acquire masses through a Yukawa-type interaction

κ f (Ψ̄₂ ξ Ψ′ + Ψ̄₁ Σ₀ Ω ξ† Ω Ψ′),   (3)

where Ψ₁, Ψ₂ are the fermion doublets and Ψ′ is a doublet under SU(2)₂. One fermion doublet Ψ_H = (Ψ₁ + Ψ₂)/√2 acquires a mass κf, which is a free parameter with the natural scale set by f. Specifically, the T-odd heavy partners of the SM leptons acquire masses √2 κ_l f [21], where κ_l is the flavor-independent Yukawa coupling. The T-odd fermion mass for both lepton and quark partners will be assumed to exceed 300 GeV, to prevent the colored T-odd particles from having shown up in the squark searches at the Tevatron. In the LHT model, the coupling term in the Lagrangian relevant to the heavy photon in this work is [16,17]:

A_H^μ L̄_i L_j :   i e/(10 C_W S_W) (S_W − 5 C_W (v/f)² x_h) γ^μ P_L δ_ij,   (4)

where L̄ is the heavy lepton of the LHT, L is the SM lepton, e is the electromagnetic coupling constant, x_h = (5/4) g g′/(5g² − g′²), v = 2 M_W S_W/e, f = 1000 GeV, and P_L = (1 − γ₅)/2. We identify g and g′ with the SM SU(2) and U(1)_Y gauge couplings, respectively. S_W and C_W are the sine and cosine of the Weinberg angle. With this Lagrangian, the amplitude of the process can be written directly:

M = M_a + M_b = −i [e/(10 C_W S_W) (S_W − 5 C_W (v/f)² x_h)]² v̄(P₁) ( γ^μ γ^ρ γ^ν P_{24ρ}/(P₂₄² − m_L²) + γ^ν γ^ρ γ^μ P_{23ρ}/(P₂₃² − m_L²) ) P_L u(P₂) ε*_μ(P₃) ε*_ν(P₄),   (5)

where P₂₃ = P₂ − P₃, P₂₄ = P₂ − P₄, m_L is the heavy lepton mass, and ε is the polarization vector of the heavy photon gauge boson.
The differential cross section is given by:

dσ/dt = (1/2!) (1/(64π s)) (1/|p⃗₁|²) Σ̄|M|²,   (6)

where s and t are the Mandelstam variables, and dt = 2|p⃗₁||p⃗₃| d cos θ. With the above production amplitude and differential cross section, we can directly obtain the production cross section of the process. For the numerical computation of the cross section, we take the electroweak fine-structure constant α(M_Z) = e²/4π = 1/128 [22]. There are three free parameters in the production amplitude: the heavy photon mass m_{A_H}, the mass of the heavy lepton m_L, and the center-of-mass energy √s. Concretely, in order to expose the possible dependence of the cross section on these parameters, we take two groups of values: √s = 500, 1000 GeV and m_L = 300, 500, 700 GeV, while m_{A_H} runs from 100 to 250 GeV in Fig. 2 and from 100 to 300 GeV in Fig. 3. The numerical results for the cross sections are plotted in Fig. 2 and Fig. 3. From these figures, we observe that the production rate decreases sharply as m_{A_H} increases, due to the constraint from the phase space of the final state involving the heavy photon pair. The dependence of the cross section on √s in the range of parameters that we use is apparent: when √s becomes large, the cross section increases markedly. The relation between the cross section and m_L is the other way around: when √s is fixed at 500 GeV and m_L becomes large, the cross section decreases, but when √s takes the value of 1000 GeV, the situation is more complicated; the concrete relations can be seen from the figures. In Fig. 4 and Fig. 5, we present the transverse momentum distributions of the heavy photon for √s = 500 and 1000 GeV with m_{A_H} equal to 100 GeV, respectively. From the figures, we see that the differential cross section increases with the transverse momentum until it reaches a maximum at a certain value of p_T, and then it begins to fall.
Discussions and conclusions

The advantages of observing the process e+e− → A_H A_H at the ILC are obvious. First, astrophysical probes provide a way to study the characteristics of DM; however, astrophysical observations cannot determine and examine the exact properties of a DM particle, and it largely remains for collider physics to reveal the nature of DM. Second, the LHC and the ILC may well turn out to be DM factories, and the ILC background is relatively clean, so this paper is a proper supplement to Refs. [16,20]. Finally, the heavy photon is the LTP and must be produced in pairs, so the constraint from the final-state phase space of the above process is alleviated compared with other processes. The combination of cosmology and high-energy colliders seems a good idea [23,24]; however, at the same time, we have to solve some problems. The first problem is how to distinguish the A_H pair of the LHT from other DM pair production at the ILC, for example the lightest neutralino (χ⁰₁) of SUSY with R-parity [10,11]. Different DM particles have different reaction channels and different probabilities of being detected by the ILC detectors, which offers a way to distinguish the A_H from the others. In the MSSM with R-parity, tree-level production cross sections of e+e− → χ⁰ᵢχ⁰ⱼ have been studied systematically; readers who are interested in these processes and want to know more about them can see the papers [25,26], so we do not discuss the topic in detail here. There is a more serious question we have to face: unlike the production of heavy photons in e+e− annihilation in six-dimensional SUSY QED [27], DM carries neither color nor electric charge, so the direct detection of DM at a collider is very hard. In collider experiments, DM would behave like neutrinos: it would escape the detector without depositing energy in the experimental devices, causing an obvious imbalance of momentum and energy in collider events.
DM would manifest itself as missing energy; however, DM is not the only source of missing energy. Limited calorimeter resolution, uninstrumented regions of the detector, and the additional energy of cosmic rays must all be considered thoroughly in a collider experiment. All of these factors strongly complicate the investigation of DM in collider experiments. By conservation of momentum, the sum of all momenta transverse to the beam direction must equal zero. Thus, we can ascertain the missing energy by measuring the energy deposited in each calorimeter cell of a detector. If all of the above uncertainties and the background caused by SM neutrinos have been subtracted and the vector sum of all the transverse momenta is not equal to zero, we can claim that something invisible is produced; the undetected particle(s) may be the DM candidate(s) (such as a heavy photon). There is a better way to observe heavy photon pair production: it can be observed via radiative production e+e− → A_H A_H γ. In this process, the signal is a single highly energetic photon plus missing energy carried by the heavy photons. This will be our next work [28], and we will discuss the problem in detail there. Although it is very hard to detect the DM, e+e− → A_H A_H is the leading-order process of DM production in the Littlest Higgs model with T-parity at the ILC, so we need to predict the production rate. As a conclusion, we find that the production rate of e+e− → A_H A_H could reach the level of 10^−2 fb in some small-mass parameter space, which is somewhat larger than that of the process e+e− → A_H Z_H [29], in which the heavy gauge bosons A_H and Z_H are produced with a cross section of 1.9 fb at a center-of-mass energy of 500 GeV and the large mass of the heavy Z_H boson (369 GeV) suppresses the final-state phase space. The production of A_H may be observable as missing energy at the ILC thanks to its high energy and luminosity.
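For orientation, the expected number of signal events is just the cross section times the integrated luminosity. A back-of-the-envelope sketch using the abstract's figures (10^−1 fb and 500 fb^−1), before any cuts or detection efficiencies:

```python
# Rough expected event yield: N = sigma * integrated luminosity.
# Figures taken from the abstract; real yields need cuts and efficiencies.
sigma_fb = 0.1        # production cross section, in fb
lumi_fb_inv = 500.0   # ILC integrated luminosity, in fb^-1

N = sigma_fb * lumi_fb_inv
print(N)  # 50 heavy-photon pair events
```

At the 10^−2 fb level quoted in the conclusions the same arithmetic gives only a handful of events, which is why the radiative channel e+e− → A_H A_H γ is attractive.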
However, for various reasons, the missing energy related to the A_H is very difficult to determine accurately, so identification of the DM production at the ILC is very hard. We are, so far, not equipped with the expertise to handle the complex analysis ourselves, but we will cooperate with our experimental colleagues and experts in this field to carry out a detailed analysis.

The paper is organized as follows. The numerical results for the production rate and the distributions of the transverse momenta are presented in Sec. II, where all input model parameters are explicitly listed. Our discussions on various aspects and our conclusions are given in the last section. First, let us concisely recall the relevant characteristics of the LHT. The LHT is based on a non-linear σ-model describing a global SU(5)/SO(5) symmetry breaking, which takes place at an energy scale Λ ∼ 4πf ∼ 10 TeV. The SM Higgs doublet is considered to be a subset of the Goldstone bosons associated with the breaking. The symmetry breaking also breaks the assumed embedded local gauge symmetry, the [SU(2)×U(1)]² subgroup, down to the diagonal subgroup SU(2)_L × U(1)_Y, which is identified as the ordinary SM electroweak gauge group. The additional gauge structure leads to four extra gauge bosons at the TeV scale: W±_H, Z_H and A_H. The diagonal Goldstone bosons of the matrix Π are "eaten" to become the longitudinal degrees of freedom of the heavy gauge bosons with odd T-parity at the scale f; their masses are given in Eq. (1).

Figure 1: The tree-level Feynman diagrams of the process e+e− → A_H A_H.

Figure 2: The dependence of the cross section of e+e− → A_H A_H on the heavy photon mass m_{A_H} (100∼250 GeV) for √s = 500 GeV and m_L = 300, 500, 700 GeV at the ILC.
Figure 3: The dependence of the cross section of e+e− → A_H A_H on the heavy photon mass m_{A_H} (100∼300 GeV) for √s = 1000 GeV and m_L = 300, 500, 700 GeV at the ILC.

Figure 4: The distributions of the transverse momenta of the final state (p_T) for the process e+e− → A_H A_H with √s = 500 GeV.

Figure 5: The distributions of the transverse momenta of the final state (p_T) for the process e+e− → A_H A_H with √s = 1000 GeV.

[1] Begeman K G, Broeils A H and Sanders R H. Mon. Not. Roy. Astron. Soc., 1991, 249: 523
[2] Massey R et al. Nature, 2007, 445: 286
[3] Roszkowski L. Phys. Lett. B, 1991, 262: 59
[4] Roszkowski L. arXiv:hep-ph/9903467
[5] Primack J R. arXiv:astro-ph/0112255
[6] Chen Pisin. arXiv:astro-ph/0303349
[7] Ma C P and Boylan-Kolchin M. Phys. Rev. Lett., 2004, 93: 021301
[8] Bertone G, Hooper D and Silk J. Phys. Rept., 2005, 405: 279
[9] Dreiner H K, Luhn C and Thormeier M. Phys. Rev. D, 2006, 73: 075007
[10] Goldberg H. Phys. Rev. Lett., 1983, 50: 1419
[11] Ellis J R, Hagelin J S, Nanopoulos D V et al. Nucl. Phys. B, 1984, 238: 453
[12] Cheng H C and Low I. JHEP, 2004, 0408: 061
[13] Low I. JHEP, 2004, 0410: 067
[14] Skiba W and Terning J. Phys. Rev. D, 2003, 68: 075001
[15] Cheng H C and Low I. JHEP, 2003, 0309: 051
[16] Hubisz J and Meade P. Phys. Rev. D, 2005, 71: 035016
[17] Birkedal A, Noble A, Perelstein M et al. Phys. Rev. D, 2006, 74: 035002
[18] CHEN Ya-Zheng, LUO Yan-An, LI Lei et al. Commun. Theor. Phys., 2011, 55: 1059
[19] QIAO Qing-Peng, TANG Jian and LI Xue-Qian. Commun. Theor. Phys., 2008, 50: 1211
[20] CHEN Chian-Shu, Cheung Kingman and YUAN Tzu-Chiang. Phys. Lett. B, 2007, 644: 158
[21] Cacciapaglia G, Deandrea A, Choudhury S R et al. Phys. Rev. D, 2010, 81: 075005
[22] Nakamura K et al. J. Phys. G, 2010, 37: 075021
[23] Khlopov M Y, Sakharov A S and Sudarikov A L. Grav. Cosmol., 1998, 4: S1
[24] Belotsky K M, Khlopov M Y, Sakharov A S et al. Grav. Cosmol. Suppl., 1998, 4: 70
[25] Bartl A, Fraas H and Majerotto W. Nucl. Phys. B, 1986, 278: 1
[26] Carena M S and Freitas A. Phys. Rev. D, 2006, 74: 095004
[27] Fayet P. Nucl. Phys. B, 1986, 263: 649
[28] QIAO Qing-Peng and XU Bin. arXiv:hep-ph/1111.6155
[29] Asakawa E, Asano M, Fujii K et al. Phys. Rev. D, 2009, 79: 075013
[]
[ "Perihelion advance and trajectory of charged test particles in Reissner-Nordstrom field via the higher-order geodesic deviations", "Perihelion advance and trajectory of charged test particles in Reissner-Nordstrom field via the higher-order geodesic deviations" ]
[ "M Heydari-Fard ", "S Fakhry ", "S N Hasani ", "\nDepartment of Physics\nDepartment of Physics\nThe University of Qom\nP. O. Box 37185-359QomIran\n", "\nShahid Beheshti University\n19839EvinG. C., TehranIran\n" ]
[ "Department of Physics\nDepartment of Physics\nThe University of Qom\nP. O. Box 37185-359QomIran", "Shahid Beheshti University\n19839EvinG. C., TehranIran" ]
[]
By using the higher-order geodesic deviation equations for charged particles, we apply the method described by Kerner et al. to calculate the perihelion advance and trajectory of charged test particles in the Reissner-Nordstrom space-time. The effect of charge on the perihelion advance is studied, and the results are compared with those obtained earlier via the perturbation method. The advantage of this approximation method is that it provides a way to calculate the perihelion advance and orbits of planets in the vicinity of massive and compact objects without resorting to Newtonian and post-Newtonian approximations.
10.1155/2019/1879568
[ "https://arxiv.org/pdf/1905.08642v1.pdf" ]
160,009,839
1905.08642
910d2b45e6bd22ed214ef17c077c0aad9913c825
Perihelion advance and trajectory of charged test particles in Reissner-Nordstrom field via the higher-order geodesic deviations

20 May 2019 / May 22, 2019

M. Heydari-Fard, S. Fakhry, S. N. Hasani
Department of Physics, The University of Qom, P. O. Box 37185-359, Qom, Iran
Department of Physics, Shahid Beheshti University, G. C., Evin, Tehran 19839, Iran

PACS numbers: 04.20.-q, 04.70.Bw, 04.80.Cc
Keywords: geodesic deviation, Reissner-Nordstrom metric, perihelion advance

Introduction

The problem of the motion of planets in general relativity is the subject of many studies, in which the planet is treated as a test particle moving along its geodesic [2]. Einstein made the first calculation in this regard for the planet Mercury in the Schwarzschild space-time, which resulted in the equation for the perihelion advance

∆ϕ = 6πGM/(a(1 − e²)),   (1)

where G is the gravitational constant, M is the mass of the central body, a is the length of the semi-major axis of the planet's orbit, and e is the eccentricity. Derivation of the perihelion advance by this method leads to a quasi-elliptic integral whose calculation is very difficult; it is evaluated after expanding the integrand in a power series of the small parameter GM/rc².
For the low-eccentricity trajectories of planets, one can obtain the following approximate formula for the perihelion advance

∆ϕ = 6πGM/(a(1 − e²)) ≃ (6πGM/a)(1 + e² + e⁴ + e⁶ + · · ·);

even for the case of Mercury, truncating at second order in the eccentricity changes the perihelion advance by only 0.18% from its actual value [1]. It should be noted again that Einstein's method is only valid for small values of GM/rc². (In figure 1, u^μ is the unit tangent vector to the central world-line, n^μ is the separation vector to the curve s = const, and N^μ = b^μ − Γ^μ_{λρ} n^λ n^ρ is the second-order geodesic deviation [3].) In what follows, we show that one can obtain the same results (without evaluating the complicated integrals) by considering successive approximations around a circular orbit in the equatorial plane, taken as the initial geodesic with constant angular velocity, which leads to an iterative process of solving the geodesic deviation equations of first, second and higher orders [4,5,6]. Here, instead of the parameter GM/rc², the eccentricity e plays the role of the small parameter, controlling the maximal deviation from the initial circular orbit. In this method, there is no longer any constraint on GM/rc². One can therefore determine the value of the perihelion advance for large-mass objects and write it to higher orders in GM/rc². The orbital motion of neutral test particles via the higher-order geodesic deviation equations has been studied for the Schwarzschild and Kerr metrics in [1] and [5], respectively. Also, for massive charged particles in the Reissner-Nordstrom metric, geodesic deviations have been derived up to first order [7]. In this paper, by using the higher-order geodesic deviations for charged particles [8], we obtain the orbital motion and trajectory of charged particles. We also expect that our results reduce to the corresponding ones in the Schwarzschild metric [1] when the charge is set to zero.
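The quoted 0.18% figure for Mercury is easy to verify: it is just the relative error of truncating the geometric series 1/(1 − e²) at order e². A small check (Mercury's eccentricity is an assumed input here):

```python
# Relative error of keeping only 1 + e^2 in the series
# 1/(1 - e^2) = 1 + e^2 + e^4 + ... , evaluated for Mercury.
e = 0.2056  # Mercury's orbital eccentricity (assumed value)

exact_factor = 1.0 / (1.0 - e**2)  # full 1/(1 - e^2)
truncated = 1.0 + e**2             # series kept through second order in e
rel_err = (exact_factor - truncated) / exact_factor
print(100 * rel_err)  # ~0.18 percent
```

Since the 6πGM/a prefactor is common to both expressions, this ratio is exactly the fractional error in the perihelion advance itself.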
In fact, we generalize the method used in reference [1] for neutral particles in the Schwarzschild metric to charged particles in the Reissner-Nordstrom metric. Recently, an analytical computation of the perihelion advance in general relativity via the homotopy perturbation method has been proposed in [9]. The perihelion advance of planets in general relativity and in modified theories of gravity has also been studied with different methods in [10]-[24]. The structure of the paper is as follows: in section 2, using the approximation method introduced in reference [8], we derive the higher-order geodesic deviations for charged particles. Using the first-order geodesic deviation equations, the orbital motion of charged particles is found in section 3. In section 4, we obtain the second-order geodesic deviations and derive the semi-major axis, eccentricity and trajectory using a Taylor expansion around a central geodesic. The results are discussed in section 5.

2 The higher-order geodesic deviation method

As mentioned above, the higher-order geodesic deviation equations for charged particles were first derived in [8]. In this section, we recall the geometrical set-up used in our work. The geodesic deviation equation for charged particles is [7]

D²n^μ/Ds² = R^μ_{λνκ} u^λ u^ν n^κ + (q/m) F^μ_ν Dn^ν/Ds + (q/m) ∇_λ F^μ_ν u^ν n^λ,   (3)

where D/Ds is the covariant derivative along the curve and n^μ is the separation vector between two neighboring geodesics (see figure 1). Here, u^μ is the tangent vector to the geodesic, R^μ_{λνκ} is the curvature tensor of space-time, q and m are the charge and mass of the particles (all particles have the same charge-to-mass ratio q/m), and F^μ_ν is the electromagnetic field tensor acting on the charged particles.
For neutral particles, the above equation reduces to the geodesic deviation equation [25,26]

D²n^μ/Ds² = R^μ_{λνκ} u^λ u^ν n^κ,   (4)

which is the well-known Jacobi equation of general relativity. We introduce the four-velocity u^α(s,p) = ∂x^α/∂s as the time-like tangent vector to the world-line, and n^α(s,p) = ∂x^α/∂p as the deviation four-vector. In practice it is often convenient to work with the non-manifestly covariant form. It is obtained by writing out the covariant derivatives and the Riemann curvature tensor and using the equation of motion in the left-hand side of equation (3) [7]:

d²n^μ/ds² + (2Γ^μ_{κν} u^κ − (q/m) F^μ_ν) dn^ν/ds + (u^κ u^σ ∂_ν Γ^μ_{κσ} − (q/m) u^κ ∂_ν F^μ_κ) n^ν = 0.   (5)

The geodesic deviation can be used to construct geodesics x^μ(s) near a given reference geodesic x^μ₀(s) by an iterative method, as follows. One writes the Taylor expansion of x^μ(s,p) around the central geodesic,

x^μ(s,p) = x^μ(s,p₀) + (p − p₀) ∂x^μ/∂p |_(s,p₀) + (1/2!)(p − p₀)² ∂²x^μ/∂p² |_(s,p₀) + · · ·,   (6)

our aim being to obtain an expression in terms of deviation vectors. In the above equation, the second term, ∂x^μ/∂p, is by definition the deviation vector and represents the first-order geodesic deviation. The coefficient of the third term, ∂²x^μ/∂p², however, is not a vector. We therefore define the vector b^μ as

b^μ = Dn^μ/Dp = ∂n^μ/∂p + Γ^μ_{λν} n^λ n^ν,   (7)

which turns ∂²x^μ/∂p² into an expression representing the second-order geodesic deviation. By substituting equation (7) into equation (6), one obtains an expansion organized by the order of the deviation vectors:

x^μ(s,p) = x^μ(s,p₀) + (p − p₀) n^μ + (1/2!)(p − p₀)² (b^μ − Γ^μ_{λν} n^λ n^ν) + · · ·   (8)

In the above expression, one can make some simplifications.
We denote by δⁿx^μ(s) the n-th order geodesic deviation and, taking (p − p₀) to be a small quantity ǫ, rewrite equation (8) as

x^μ(s,p) = x^μ₀(s) + δx^μ(s) + (1/2) δ²x^μ(s) + · · ·,   (9)

where δx^μ(s) = ǫ n^μ(s) is the first-order geodesic deviation and δ²x^μ(s) = ǫ² (b^μ − Γ^μ_{νλ} n^ν n^λ) is the second-order geodesic deviation. In order to obtain the second-order geodesic deviation equation, one applies the definition of the covariant derivative to equation (7) (for more details see reference [8] and the appendix therein):

D²b^μ/Ds² + R^μ_{ρλσ} u^ρ u^λ b^σ = (R^μ_{ρλσ;ν} − R^μ_{σνρ;λ}) u^λ u^σ n^ρ n^ν + 4 R^μ_{ρλσ} u^λ (Dn^ρ/Ds) n^σ + (q/m) R^μ_{σρν} F^ρ_λ n^σ u^λ n^ν + (q/m) F^μ_{ρ;λσ} n^λ u^ρ n^σ + (q/m) F^μ_ρ D²u^ρ/Dp².   (10)

Similarly to the first-order geodesic deviation equation (5), we can write equation (10) in the non-manifestly covariant form

d²b^μ/ds² + (2Γ^μ_{κν} u^κ − (q/m) F^μ_ν) db^ν/ds + (u^κ u^σ ∂_ν Γ^μ_{κσ} − (q/m) u^κ ∂_ν F^μ_κ) b^ν =
(Γ^τ_{σν} ∂_τ Γ^μ_{λρ} + 2Γ^μ_{λτ} ∂_ρ Γ^τ_{σν} − ∂_ν ∂_σ Γ^μ_{λρ}) (u^λ u^ρ n^σ n^ν − u^σ u^ν n^λ n^ρ)
+ 4 (∂_λ Γ^μ_{σρ} + Γ^ν_{σρ} Γ^μ_{λν}) (dn^σ/ds)(u^λ n^ρ − u^ρ n^λ)
+ (q/m) u^ν n^α n^β (∂_α ∂_β F^μ_ν − F^μ_ρ ∂_ν Γ^ρ_{αβ} − ∂_σ F^μ_ν Γ^σ_{αβ})
+ 2 (q/m) n^σ (dn^ν/ds) (∂_σ F^μ_ν − F^μ_β Γ^β_{σν}).   (11)

As is clear, the left-hand side of the second-order geodesic deviation equation (11) is the same as that of equation (5). As with the second order, the higher-order geodesic deviation equations all have the same left-hand side and differ only in their right-hand sides. A non-manifestly covariant form of the third-order geodesic deviation equation is given in appendix 1. The successive approximations to the exact geodesic (b) are shown in figure 1. Lines (c) and (d) represent the first-order approximation, x^μ(s,p) = x^μ(s,p₀) + (p − p₀) ∂x^μ/∂p |_{p₀}, and the second-order approximation, x^μ(s,p) = x^μ(s,p₀) + (p − p₀) ∂x^μ/∂p |_{p₀} + (1/2!)(p − p₀)² ∂²x^μ/∂p² |_{p₀}, respectively.
In the next section, we obtain the components of n^μ from the first-order geodesic deviation, equation (5), for a circular orbit of charged particles. Then, by substituting them into equation (11), we solve the second-order geodesic deviation equations for b^μ. Finally, by substituting n^μ and b^μ into equation (8), we find the relativistic trajectory of charged particles in the Reissner-Nordstrom space-time.

3 The first-order geodesic deviation

3.1 Circular orbits in the Reissner-Nordstrom metric

The Reissner-Nordstrom metric is a static exact solution of the Einstein-Maxwell equations which describes the space-time around a spherically symmetric, non-rotating charged source with mass M and charge Q (in natural units with c = G = 1):

ds² = −B(r) dt² + dr²/B(r) + r² (dθ² + sin²θ dϕ²),   (12)

where B(r) = 1 − 2M/r + Q²/r². The vector potential and the electromagnetic field solving Maxwell's equations are [7]

A = A_μ dx^μ = −(Q/(4πr)) dt,   F = dA = (Q/(4πr²)) dr ∧ dt.   (13)

Assuming M² > Q², we derive the equations of motion for test particles of mass m and charge q. We consider a circular orbit with constant radius R. Since the angular momentum of particles bound in a spherically symmetric field confines the motion to a plane, for simplicity we work in the equatorial plane θ = π/2, in which the angular momentum points in the z direction. Using the Euler-Lagrange equations, one obtains the following constants of motion:

dϕ/ds = l/r²,   (14)

dt/ds = (ε − qQ/(4πmr)) / (1 − 2M/r + Q²/r²),   (15)

where l = J/m is the angular momentum per unit mass, ϕ̇ = ω is the angular velocity, and ε is the energy per unit mass. Finally, from equations (12), (14) and (15) one obtains two constraints, namely the normalization of the four-velocity and the radial acceleration.
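As a quick numerical aside (not part of the paper's derivation), the condition M² > Q² guarantees that B(r) has two real roots, the outer and inner horizons r± = M ± √(M² − Q²):

```python
import math

# Reissner-Nordstrom horizons: B(r) = 1 - 2M/r + Q^2/r^2 vanishes at
# r_pm = M +/- sqrt(M^2 - Q^2) when M^2 > Q^2 (units G = c = 1).
M, Q = 1.0, 0.6  # example values with M^2 > Q^2

def B(r):
    return 1 - 2 * M / r + Q**2 / r**2

r_plus = M + math.sqrt(M**2 - Q**2)   # outer horizon
r_minus = M - math.sqrt(M**2 - Q**2)  # inner (Cauchy) horizon
print(B(r_plus), B(r_minus))  # both vanish
```

The circular orbits considered below all lie outside r_plus, where B(R) > 0.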
Now, due to the fact that the radius of the circular orbit is constant (r = R), two mentioned constraints vanish at all times and this creates two relations between R, l, and ε as follows ε − qQ 4πM R 2 = 1 − 2M R + Q 2 R 2 1 + l 2 R 2 ,(16) and l 2 R − M 1 + 3l 2 R 2 + Q 2 R 1 + 2l 2 R 2 2 = qQ 4πm 2 1 + l 2 R 2 1 − 2M R + Q 2 R 2 .(17) As we expect that by eliminating charge, all obtained equations reduce to the similar equations in the Schwarzschild metric. In summary, we obtain the following four-velocity vector for a circular orbit with radius R in an equatorial plane u t = dt ds = ε − qQ 4πmR 1 − 2M R + Q 2 R 2 , u r = dr ds = 0, u θ = dθ ds = 0, u ϕ = dϕ ds = ω 0 = l R 2 .(18) In the next subsection, we obtain the orbital motion by using the higher-order geodesic deviation method and compare the results with the perturbation method. First-order geodesic deviation around the circular orbits Now let us calculate the first-order geodesic deviation for the components n t , n r , n θ and n φ , by using of equation (5)           n t n r n θ n ϕ      =      0 0 0 0      ,(19) where the matrix elements are given by m 11 = d 2 ds 2 , m 12 = 2Rε M R − Q 2 R 2 − qQ 4πm 1 − Q 2 R 2 R 2 1 − 2M R + Q 2 R 2 2 d ds , m 13 = m 14 = 0, m 22 = d 2 ds 2 − l 2 R 4 1 − Q 2 R 2 + − 2M R + 6M 2 R 2 + 3Q 2 R 2 − 12M Q 2 R 3 + 5Q 4 R 4 ε − qQ 4πmR 2 R 2 1 − 2M R + Q 2 R 2 2 − q m F r t,r u t , m 21 = 2 R M R − Q 2 R 2 ε − qQ 4πmR d ds − q m F r t d ds , m 23 = 0, m 24 = − 2l R 1 − 2M R + Q 2 R 2 d ds ,m which is similar to the Schwarzschild case. So in this case we can neglect this solution (n θ = 0), because the new plane of orbit is a new one inclined, or just a change of coordinate system [5]. 
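In the neutral-particle limit q = 0 the constraints (16)-(17) can be solved in closed form for l and ε. The following sketch (with illustrative parameter values assumed) checks that the resulting four-velocity (18) is properly normalized, g_μν u^μ u^ν = −1, on the equatorial circular orbit:

```python
import math

# Hedged sketch: circular-orbit constants of motion in the Reissner-Nordstrom
# metric for a *neutral* test particle (q = 0), from eqs. (16)-(17) in that
# limit. Geometrized units G = c = 1; M, Q, R below are illustrative.

M, Q, R = 1.0, 0.3, 20.0

def B(r):
    return 1.0 - 2.0 * M / r + Q**2 / r**2

# Eq. (17) with q = 0 gives l^2 = R^2 (M R - Q^2) / (R^2 - 3 M R + 2 Q^2)
l = math.sqrt(R**2 * (M * R - Q**2) / (R**2 - 3.0 * M * R + 2.0 * Q**2))
# Eq. (16) with q = 0 gives eps^2 = B(R) (1 + l^2 / R^2)
eps = math.sqrt(B(R) * (1.0 + l**2 / R**2))

# Four-velocity components of eq. (18) for q = 0
u_t, u_phi = eps / B(R), l / R**2

# Consistency check: g_{mu nu} u^mu u^nu = -1 on the circular orbit
norm = -B(R) * u_t**2 + R**2 * u_phi**2
assert abs(norm + 1.0) < 1e-12
```

Note that ε < 1 here, as expected for a bound orbit.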
Now, by eliminating the derivatives of n t and n φ in the differential equation of n r , we obtain the following oscillating equation d 2 n r ds 2 + w 2 n r = 0, with the characteristic frequency ω 2 = ω 2 0 1 − 6M R + Q 2 M R + 3Q 2 R 2 + · · · .(22) By considering n r 0 > 0, we choose the following solution for n r n r = −n r 0 cos(ωs). Also, from the n t and n ϕ geodesic deviation equations, the solutions for n t and n ϕ are given by n t = n t 0 sin(ωs),(24)n ϕ = n ϕ 0 sin(ωs),(25) where the amplitudes depend on n r 0 n t 0 = 2 M R − Q 2 R(1 − 2M R + Q 2 R 2 ) 1 − 6M R + Q 2 M R n r 0 ,(26)n ϕ 0 = 2 R 1 − 6M R + Q 2 M R n r 0 .(27) In this way, the trajectory and the law of motion are obtained by r = R − n r 0 cos(ωs), ϕ = ω 0 s + n ϕ 0 sin(ωs),(28)t = ε − qQ 4πmR 1 − 2M R + Q 2 R 2 s + n t 0 sin(ωs),(29) where the argument phase of the cosine function is taken by s = 0 for perihelion and s = π ω for aphelion. Now, (28) can be written as r = R 1 − n r 0 R cos(ωs) .(31) By direct solution of the Euler-Lagrange equations, the trajectory of motion for particles is obtained in terms of centrifugal inertia [27] r(t) = a(1 − e 2 ) 1 + e cos(ω 0 t) ≃ a [1 − e cos(ω 0 t)] .(32) Obviously, equations (31) and (32) show that we have the same results. It means that if we bring up the eccentricity e to n r 0 R and the semi-major axis a to R, the same results are extracted, but there is also a difference that the circular frequency, ω, is lower than the circular frequency of the unperturbed circular motion, ω 0 . So, if the circular frequency decreases, the period increases. Then we obtain an expression for the periastron shift per one revolution as △ϕ = 2π ω 0 ω − 1 = 2π 3M R + 27 2 M 2 R 2 + 135 2 M 3 R 3 − Q 2 2M R − 6Q 2 R 2 + · · · .(33) It can be seen from above equation that the charge parameter, Q, decreases the perihelion advance. 
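The statement below equation (33) can be checked numerically. The following sketch evaluates the periastron shift from the frequency ratio ω₀/ω, using the truncation of equation (22) quoted above, and compares it for a weak field with the standard perturbative result 6πM/R − πQ²/(MR) (equation (34) below); it also verifies that the charge term reduces the advance:

```python
import math

# Hedged numerical check of the periastron shift, eq. (33): for small M/R the
# frequency ratio omega_0/omega reproduces the perturbative (Einstein-type)
# result Delta_phi = 6*pi*M/R - pi*Q^2/(M*R). Units G = c = 1; parameter
# values are illustrative assumptions.

def delta_phi(M, R, Q):
    # omega^2 = omega_0^2 (1 - 6M/R + Q^2/(M R) + 3 Q^2/R^2), from eq. (22)
    ratio_sq = 1.0 - 6.0 * M / R + Q**2 / (M * R) + 3.0 * Q**2 / R**2
    return 2.0 * math.pi * (1.0 / math.sqrt(ratio_sq) - 1.0)

M, R, Q = 1.0, 1.0e4, 0.1
shift = delta_phi(M, R, Q)
perturbative = 6.0 * math.pi * M / R - math.pi * Q**2 / (M * R)

# Leading-order agreement in the weak field ...
assert abs(shift - perturbative) / perturbative < 1e-3
# ... and the charge decreases the perihelion advance.
assert delta_phi(M, R, 0.0) > shift
```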
In the perturbation method (Einstein's method), the orbital motion for charged particles moving in the equatorial plane of the Reissner-Nordstrom source is given by [23] △ϕ = 6πM R − πQ 2 M R ,(34) comparing equation (33) with equation (34) shows that the presented method can be used in the vicinity of very massive and compact objects which is having a non-negligible ratio of M R . When the source is neutral and for the small values of M R , equation (33) reduces to the standard formula for Perihelion advance of planets [26]. If we also compare equation (33) to equation (2), it is clear that in the first-order deviation, we hold only the terms up to e 2 . In order to obtain the △ϕ for the higher values of the eccentricity, we must go beyond the first-order deviation equations. Therefore in the next section, we solve the second-order geodesic deviation equations in Reissner-Nordstrom space-time. The second-order geodesic deviation In this section, by using the first-order geodesic deviation equation and inserting equations (23), (24) and (25) into equation (10) and also doing a set of hard calculations, a linear equations system for the second-order geodesic deviation vector b µ is obtained b t b r b ϕ    = (n r 0 ) 2    C t + C t q C r + C r q C ϕ + C ϕ q    ,(35) where the constants C t , C t q , C r , C r q , and C ϕ , C ϕ q contain quantities depending on M , R, ω , ω 0 , q and Q C t = − 6M M R − Q 2 (2 − 7M R + 31Q 2 3R 2 − 5Q 2 3M R − 4Q 4 3M R 3 − Q 4 M 2 R 2 ) R 5 (1 − 2M R + Q 2 R 2 )(1 − 3M R + 2Q 2 R 2 ) 1 − 6M R + Q 2 M R sin(2ws),(36)C r = − 3M 6 − 27M R + 6M 2 R 2 + 158Q 2 3R 2 − 22Q 2 3M R − 14M Q 2 R 3 − 16Q 4 3R 4 − 4Q 4 M 2 R 2 2R 4 1 − 3M R + 2Q 2 R 2 1 − 6M R + Q 2 M R cos(2ωs) + 3M 2 − 5M R + 18M 2 R 2 + 6Q 2 R 2 − 10Q 2 3M R − 34M Q 2 R 3 + 4Q 4 M 2 R 2 2R 4 1 − 3M R + 2Q 2 R 2 1 − 6M R + Q 2 M R ,(37)C ϕ = − 6M ω 0 1 − 3M R + 2M 2 R 2 + 5Q 2 R 2 − 4Q 2 3M R − 8M Q 2 3R 3 − Q 4 R 4 − Q 4 M R 3 ωR 5 1 − 3M R + 2Q 2 R 2 1 − 2M R + Q 2 R 
2 sin(2ωs),(38) and C t q = qQ M R − Q 2 R 2 1 − 6M R + Q 2 R 2 3M R − 31M 2 2R 2 + 15M 3 R 3 + Q 2 R 2 + 3M Q 2 R 3 − 7M 2 Q 2 R 4 4πmM R 3 1 − 3M R + 2Q 2 R 2 1 − 2M R + Q 2 R 2 2 1 − 6M R + Q 2 M R sin(2ωs),(39)C r q = qQ 1 − 3M R + 2Q 2 R 2 7M R − 61M 2 R 2 + 169M 3 R 3 − 150M 4 R 4 + 3Q 2 R 2 + 11M Q 2 R 3 − 130M 2 Q 2 R 4 + 198M 3 Q 2 R 5 4πmM R 3 1 − 3M R + 2Q 2 R 2 1 − 6M R + Q 2 M R 1 − 2M R + Q 2 R 2 cos(2ωs) − qQ 1 − 3M R + 2Q 2 R 2 M R + 5M 2 R 2 − 45M 3 R 3 + 54M 4 R 4 − 3Q 2 R 2 + 21M Q 2 R 3 − 2M 2 Q 2 R 4 − 54M 3 Q 2 R 5 4πmM R 3 1 − 3M R + 2Q 2 R 2 1 − 6M R + Q 2 M R 1 − 2M R + Q 2 R 2 ,(40)C ϕ q = 0.(41) Here we do not have used any approximation in C i , (i = r, θ, φ) but in what follows we neglect terms of higher order of the small parameters M R , Q M and q m . Solving the matrix equation (10) for b µ is similar to the approach used in the previous section (for the first-order geodesic deviation vector n µ ) which contains the terms with characteristic frequency ω. Here we are only interested in a particular solution because of the oscillating general solution with the angular frequency ω already take into account for n µ (s). The particular solution of the above equation which is containing the oscillating terms with the angular frequency 2ω, the linear terms in the proper time s and constants. To obtain the trajectory x µ according to the equation (9), we need to calculate 1 2 δ 2 x µ . Also for x µ , the perihelion is extracted by ωs = 2kπ and the aphelion is derived by ωs = (1 + 2k)π where k ∈ Z. In appendix 2, we have put the particular solution of the above equation, b µ , the second-order geodesic deviation δ 2 x µ , and the semi-major axis a and eccentricity e, respectively. 
Finally, successive approximation brings us to trajectory by substituting s(ϕ) to ϕ(s) r R = 1 − n r 0 R cos ω ω 0 ϕ + n r 0 R 2 3 − 5M R − 30M 2 R 2 + 72M 3 R 3 + 7Q 2 R 2 − 7Q 2 M R 2 1 − 2M R + Q 2 R 2 1 − Q 2 M R 1 − 6M R + Q 2 M R 2 + 1 − 7M R + 10M 2 R 2 + 61Q 2 2R 2 − 8Q 2 3M R 2 1 − 2M R + Q 2 R 2 1 − 6M R + Q 2 M R cos 2ω ω 0 ϕ + 3 2 Qq 32πM m 1 − Q 2 M R 1 − 2M R 1 − 3M R 3 2 1 − 6M R + Q 2 M R 2 + 19 2 Qq 32πM m 1 − Q 2 M R 1 − 2M R 1 − 3M R 3 2 1 − 6M R + Q 2 M R 2 cos 2ω ω 0 ϕ + · · ·(42) In the Schwarzschild limit, we have an elliptical orbit with [1] a = R + (n r 0 ) 2 R (2 − 9M R + 11M 2 R 2 + 6M 3 R 3 ) (1 − 2M R )(1 − 6M R ) 2 ,(43)e = n r 0 (1 − 2M R )(1 − 6M R ) 2 R(1 − 2M R )(1 − 6M R ) 2 + (n r 0 ) 2 R (2 − 9M R + 11M 2 R 2 + 6M 3 R 3 ) = n r 0 R + O (n r 0 ) 3 R 3 .(44) Also, for the Schwarzschild case the shape of the orbit is described up to second-order of ( n r 0 R ) as r(ϕ) R = 1 − n r 0 R cos ω ω 0 ϕ + n r 0 R 2 3 − 5M R − 30M 2 R 2 + 72M 3 R 3 2(1 − 2M R )(1 − 6M R ) 2 + (1 − 5M R ) 2(1 − 6M R ) cos 2ω ω 0 ϕ + · · · .(45) which is in agreement with equation (62) of reference [1]. Third-order geodesic deviation and Poincaré-Lindstedt's method In the previous section, we have calculated the trajectory of charged particles up to second-order. To find a more accurate trajectory, we need to obtain the higher-order terms of expansion (9). Using the first and second-order solutions and third-order equation (51) for δ 3 x µ , we have δ 3 t δ 3 r δ 3 ϕ    = ǫ 3    D t D r D ϕ    ,(46) where m ij are defined in equation (19) and the coefficients D i 0 , D i 0q , D i 1 , D i 1q , D i 3 , D i 3q , (i = t, r, ϕ) are functions of M , R, q, Q D t = (D t 1 + D t 1q ) cos(ωs) + (D t 3 + D t 3q ) cos(3ωs) + D t 0 + D t 0q , D r = (D r 1 + D r 1q ) cos(ωs) + (D r 3 + D r 3q ) cos(3ωs) + D r 0 + D r 0q , D ϕ = (D ϕ 1 + D ϕ 1q ) cos(ωs) + (D ϕ 3 + D ϕ 3q ) cos(3ωs) + D ϕ 0 + D ϕ 0q . 
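The removal of resonant driving terms of this kind by a Poincaré-Lindstedt frequency renormalization, used below, can be illustrated on a toy nonlinear oscillator (a stand-in for the geodesic system, not the system itself): for x'' + x + ǫx³ = 0 the renormalized frequency ω_p = 1 + (3/8)ǫa² + O(ǫ²) stays free of secular growth and matches the numerically measured period.

```python
import math

# Illustrative Poincare-Lindstedt sketch on a Duffing-type oscillator,
# x'' + x + eps*x^3 = 0 (not the geodesic equations themselves): a naive
# expansion produces secular terms, while the renormalized frequency
# w_p = 1 + (3/8)*eps*a^2 + O(eps^2) matches the true period.

eps, a = 0.1, 1.0

def accel(x):
    return -x - eps * x**3

def rk4_step(x, v, dt):
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

# Integrate and record downward zero crossings of x; their spacing is the period.
x, v, t, dt = a, 0.0, 0.0, 1.0e-3
crossings = []
while t < 15.0 and len(crossings) < 2:
    x_new, v_new = rk4_step(x, v, dt)
    if x > 0.0 >= x_new:
        crossings.append(t + dt * x / (x - x_new))  # linear interpolation
    x, v, t = x_new, v_new, t + dt

w_measured = 2.0 * math.pi / (crossings[1] - crossings[0])
w_lindstedt = 1.0 + 3.0 * eps * a**2 / 8.0

assert abs(w_measured - w_lindstedt) < 2e-3   # first-order prediction is close
assert w_measured > 1.03                      # the frequency shift is genuine
```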
As one can see, the right-hand side of equations (46) contains terms whose frequency coincides with the eigenfrequency of the differential operator on the left-hand side (resonant terms). This creates a new problem: the solution for δ^3 r contains so-called secular terms, which grow without bound in the proper time. To avoid these unbounded deviations we use Poincaré's method. In this method, ω is replaced by an infinite series in powers of the infinitesimal parameter ǫ = n^r_0/R,

ω → ω_p = ω + ǫ ω_1 + ǫ^2 ω_2 + ǫ^3 ω_3 + · · · , (47)

and the correction frequencies ω_1, ω_2, ω_3, · · · are chosen such that the Poincaré resonances vanish. The radial equation then takes the form

d^2/ds^2 (δr + (1/2) δ^2 r + (1/6) δ^3 r) + ω^2 (δr + (1/2) δ^2 r + (1/6) δ^3 r) = C^r_0 + C^r_{0q} + (C^r + C^r_q) cos(2ω_p s) + (D^r_1 + D^r_{1q}) cos(ω_p s) + (D^r_3 + D^r_{3q}) cos(3ω_p s) + D^r_0 + D^r_{0q}. (48)

Expanding both sides in powers of ǫ and demanding that the secular terms cancel yields algebraic relations for ω_1, ω_2, ω_3, · · ·. In the Schwarzschild limit one finds [5]

ω_p = √M √(1 − 6M/R) / [R^{3/2} √(1 − 3M/R)] − ǫ^2 · 3M^{3/2} (6 − 37M/R) / [4R^{5/2} √(1 − 3M/R) (1 − 6M/R)^{3/2}], (49)

where ǫ^2 = (n^r_0)^2/R^2. Resonant terms reappear at the fifth-order approximation, through terms such as cos^5(ωs) and sin^3(ωs) cos^2(ωs); they can be removed in the same way. Finally, we note that the electric charge of any celestial body is in practice very close to zero. It is therefore worth investigating the geodesic deviation and its higher orders in more realistic backgrounds, such as the Schwarzschild metric with a strong magnetic dipole field or magnetized black holes [28,29,30]. We leave this study to future investigations.

Conclusion and discussion

Many of the significant successes of general relativity have been obtained by approximation methods.
One of the most important approximation schemes in general relativity is the post-Newtonian approximation, an expansion in a small parameter given by the ratio of the velocity of matter to the speed of light. A different approximation method, based on world-line deviations, was proposed by Kerner et al. [1]. The calculation of the perihelion advance by means of the higher-order geodesic deviation method for neutral particles in various gravitational fields, such as the Schwarzschild and Kerr metrics, was first studied in [1,5]. In the present paper, using the higher-order geodesic deviation method for charged particles [8], we applied this approximation scheme to charged particles in the Reissner-Nordstrom space-time. We first started from an orbital motion close to a circular orbit with constant angular velocity, which is taken as the zeroth approximation (unperturbed circular orbital motion) with orbital frequency ω_0. In the next step, we solved the first- and second-order deviation equations, which reduce to systems of second-order linear differential equations with constant coefficients. The solutions are harmonic oscillators with a characteristic frequency: from equation (35), the first- and second-order corrections are oscillating terms with angular frequencies ω and 2ω, respectively. Finally, we obtained the new trajectory by adding the higher-order geodesic deviations (non-linear effects) to the circular one, equation (42). The advantage of this approach is that it yields the relativistic trajectories of planets without using Newtonian or post-Newtonian approximations, for arbitrary values of the quantity M/R.

Acknowledgements

We wish to express our thanks to the anonymous referees for their comments and suggestions that helped us significantly to improve this manuscript.

Appendix 1

To solve the third-order geodesic deviation equation, we invoke Poincaré's method.
For this purpose, it is better to write the third-order geodesic deviation as δ 3 x µ . The third-order geodesic deviation equation δ 3 x µ is related to the third-order geodesic deviation vector h µ δ 3 x µ = ǫ 3 h µ − 3Γ µ λρ n λ b ν + (∂ κ Γ µ λν − 2Γ µ λσ Γ σ κν )n κ n λ n ν ,(50) where h µ = Db µ Dp . We derive the third-order geodesic deviation equation as d 2 δ 3 x µ ds 2 + 2Γ µ λν u λ − q m F µ ν dδ 3 x ν ds + ∂ σ Γ µ λρ u λ u ρ − q m u ν ∂ σ F µ ν δ 3 x σ = − 6Γ µ λρ dδx λ ds dδ 2 x ρ ds − 3δx σ (∂ τ ∂ σ Γ µ λρ )u λ δ 2 x τ u ρ + 2δx τ dδx ρ ds − 6(∂ σ Γ µ λρ ) δx σ u λ dδ 2 x ρ ds + δx σ dδx λ ds dδx ρ ds + δ 2 x σ u λ dδx ρ ds − δx σ δx τ δx ν (∂ τ ∂ σ ∂ ν Γ µ λρ )u λ u ρ + q m dn ν ds (∂ σ F µ ν )δ 2 x σ + (∂ σ ∂ τ F µ ν )n σ n τ + q m n σ dδ 2 x ν ds (∂ σ F µ ν ) + (∂ σ ∂ τ F µ ν )δ 2 x τ u ν ,(51) by substituting δ 3 x µ in term of h µ into above equation, we obtain equation (72) for case q = 0 [1]. Appendix 2 The second-order geodesic deviation vector b µ is b t = (n r 0 ) 2 M ε − qQ 4πmR R 3 1 − 6M R + Q 2 M R 1 − 2M R 2 − 3(2 − 5M R + 18M 2 R 2 − 10 3 Q 2 M R ) (1 − 6M R + Q 2 M R ) s + (2 − 13M R + 79 6 Q 2 M R ) sin(2ωs) ω − 6qQws + 19qQ sin(2ωs) 32πmM ω(1 − 6M R + Q 2 M R )(1 − 2M R )(1 − 3M R ) 3 2 ,(52)b r = (n r 0 ) 2 M 2R 2 M R − Q 2 R 2 1 − 6M R + Q 2 M R 3(2 − 5M R + 18M 2 R 2 − 10 3 Q 2 M R ) (1 − 6M R + Q 2 M R ) + (2 + 5M R − 28 3 Q 2 M R ) cos(2ωs) + 3qQ + 19qQ cos(2ωs) 16πmM (1 − 6M R + Q 2 M R )(1 − 2M R )(1 − 3M R ) 3 2 ,(53)b ϕ = (n r 0 ) 2 ω 0 M R 3 ( M R − Q 2 R 2 )(1 − 6M R + Q 2 M R ) − 3(2 − 5M R + 18M 2 R 2 − 10 3 Q 2 M R ) (1 − 6M R + Q 2 M R ) s + (1 − 8M R ) sin(2ωs) 2ω − 6qQs + qQ(31 − 196M R ) sin(2ωs) 32πmM (1 − 6M R + Q 2 M R )(1 − 2M R )(1 − 3M R ) 3 2 .(54) As explained in section 2 the second-order geodesic deviation, (δ 2 x = b µ − Γ µ νλ n ν n λ ), are given by δ 2 t = (n r 0 ) 2 M ε − qQ 4πmR R 3 − 3 2 − 5M R + 18M 2 R 2 − 10Q 2 3M R 1 − 2M R 2 1 − 6M R + Q 2 M R s + 2 − 15M R + 14M 2 R 2 − 79Q 2 6M R sin(2ωs) ω 
1 − 2M R 3 1 − 6M R + Q 2 M R − 6qQω + 19qQ sin(2ωs) 32πmM ω 1 − 2M R 3 1 − 3M R 3 2 1 − 6M R + Q 2 M R 2 ,(55)δ 2 r = (n r 0 ) 2 M R 2 M R − Q 2 1 − 6M R + Q 2 M R 5 − 33M R + 90M 2 R 2 − 72M 3 R 3 + 5Q 2 R 2 − 5Q 2 M R 1 − 2M R + Q 2 R 2 1 − 6M R + Q 2 M R + −1 + 9M R + 33Q 2 2R 2 − 19M 2 R 2 − 23 2 M Q 2 R 3 − 8Q 2 3M R cos(2ωs) 1 − 2M R + Q 2 R 2 + 3qQ + 19Qq cos(2ωs) 32πmM 1 − 2M R 1 − 3M R 3 2 1 − 6M R + Q 2 M R ,(56)δ 2 ϕ = (n r 0 ) 2 M ω 0 R 3 M R − Q 2 R 2 1 − 6M R + Q 2 M R − 3 2 − 5M R + 18M 2 R 2 − 10Q 2 3M R 1 − 6M R + Q 2 M R s + 5 − 32M R sin(2ωs) 2ω − 6qQs + qQ 31 − 196M R sin(2ωs) 32πmM 1 − 2M R 1 − 3M R 3 2 1 − 6M R + Q 2 M R ,(57) also, the semi-major axis a and eccentricity e are a = R + (n r 0 ) 2 R (2 − 9M R + 11M 2 R 2 + 6M 3 R 3 + 15Q 2 R 2 − 13Q 2 3M R − 235M Q 2 4R 3 ) (1 − 2M R + Q 2 R 2 )(1 − Q 2 M R )(1 − 6M R + Q 2 M R ) 2 ,(58)e = n r 0 (1 − 2M R + Q 2 R 2 )(1 − Q 2 M R )(1 − 6M R + Q 2 M R ) 2 R(1 − 2M R + Q 2 R 2 )(1 − Q 2 M R )(1 − 6M R + Q 2 M R ) 2 + (n r 0 ) 2 R (2 − 9M R + 11M 2 R 2 + 6M 3 R 3 + 15Q 2 R 2 − 13Q 2 3M R − 235M Q 2 4R 3 ) ,(59) in which for massive central objects we have neglected all terms of order qQ 4πmM . Figure 1 : 1Deviation of two nearby geodesics in a gravitational field. Lines (a) and (b) represent the central geodesic p = 0 and the nearby geodesic p = p0, respectively; lines (c) and (d) show the corresponding first and second-order approximations to the nearby geodesic (b) m 12 m 13 m 14 m 21 m 22 m 23 m 24 m 31 m 32 m 33 m 34 m 41 m 42 m 43 m 44 31 = m 32 = m 34 = 0, m 33 = d 2 ds 2 + l 2 R 4 , m 41 = m 43 = 0, m 42 = 2l R 3 d ds , m 44 = d 2 ds 2 . As can be seen, the geodesic deviation equation of θ component represents a harmonic oscillator equation with the angular frequency of ω θ = ω 0 = l R 2 . So we consider n θ as follow n θ (s) = n θ 0 cos(ω 0 s), m 11 m 12 m 14 m 21 m 22 m 24 m 41 m 42 m . R Kerner, J W Van Holten, R Colistete, Class. Quantum Grav. 184725R. Kerner, J. W. van Holten and R. 
Colistete, Class. Quantum Grav. 18, 4725 (2001).
[2] A. Einstein, Ann. Physik 49, 769 (1916).
[3] D. Baskaran and L. P. Grishchuk, Class. Quantum Grav. 21, 4041 (2004).
[4] R. Kerner, J. Martin, S. Mignemi and J. W. van Holten, Phys. Rev. D 63, 027502 (2001).
[5] R. Colistete Jr, C. Leygnac and R. Kerner, Class. Quantum Grav. 19, 4573 (2002).
[6] J. W. van Holten, Int. J. Mod. Phys. A 17, 2764 (2000).
[7] A. Balakin, J. W. van Holten and R. Kerner, Class. Quantum Grav. 17, 5009 (2000).
[8] M. Heydari-Fard and S. N. Hasani, Int. J. Mod. Phys. D 27, 1850042 (2018).
[9] V. K. Shchigolev, Universal Journal of Computational Mathematics 3, 45 (2015).
[10] Y. P. Hu, H. Zhang, J. P. Hou and L. Z. Tang, Advances in High Energy Physics 2014, 604321 (2014).
[11] H. Arakida, Int. J. Theor. Phys. 52, 1408-1414 (2013).
[12] R. R. Cuzinatto, P. G. Pompeia, M. de Montigny, F. C. Khanna and J. M. Hoff da Silva, Eur. Phys. J. C 74, 3017 (2014).
[13] H. J. Schmidt, Phys. Rev. D 78, 023512 (2008).
[14] T. J. Lemmon and A. R. Mondragon, Am. J. Phys. 77, 890 (2009).
[15] M. K. Mak, C. S. Leung and T. Harko, Advances in High Energy Physics 2018, 7093592 (2018).
[16] V. K. Shchigolev, Universal Journal of Computational Mathematics 3, 45 (2015).
[17] V. K. Shchigolev, International Journal of Physical Research 4, 52 (2016).
[18] C. M. Will, Phys. Rev. Lett. 120, 191101 (2018).
[19] G. V. Kraniotis and S. B. Whitehouse, Class. Quant. Grav. 20, 4817 (2003).
[20] M. L. Ruggiero, Int. J. Mod. Phys. D 23, 1450049 (2014).
[21] M. L. Ruggiero, Int. J. Mod. Phys. D 23, 1450049 (2014).
[22] C. Jiang and W. Lin, Eur. Phys. J. Plus 129, 200 (2014).
[23] A. Avalos-Vargas and G. Ares de Parga, Eur. Phys. J. Plus 127, 155 (2012).
[24] A. Avalos-Vargas and G. Ares de Parga, Eur. Phys. J. Plus 126, 117 (2011).
[25] C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation (Freeman, San Francisco, CA, 1970).
[26] S. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity (John Wiley and Sons, New York, 1972).
[27] J. Kepler, Astronomia Nova (The New Astronomy) (1609).
[28] Y. K. Lim, Phys. Rev. D 91, 024048 (2015).
[29] D. B. Papadopoulos, I. Contopoulos, K. D. Kokkotas and N. Stergioulas, Journal of Mathematical Physics 78, 023512 (2008).
[30] M. Kolo, Z. Stuchlk and A. Tursunov, Class. Quantum Grav. 32, 165009 (2015).
Astro2020 Science White Paper: Joint Gravitational Wave and Electromagnetic Astronomy with LIGO and LSST in the 2020's

Thematic Areas: Planetary Systems; Star and Planet Formation; Formation and Evolution of Compact Objects; Cosmology and Fundamental Physics; Stars and Stellar Evolution; Resolved Stellar Populations and their Environments; Galaxy Evolution; Multi-Messenger Astronomy and Astrophysics
Abstract: The blossoming field of joint gravitational wave and electromagnetic (GW-EM) astronomy is one of the most promising in astronomy. The first, and only, joint GW-EM event GW170817 provided remarkable science returns that still continue to this day. Continued growth in this field requires increasing the sample size of joint GW-EM detections. In this white paper, we outline the case for using some percentage of LSST survey time for dedicated target-of-opportunity follow up of GW triggers in order to efficiently and rapidly identify optical counterparts. We show that the timeline for the LSST science survey is well matched to the planned improvements to ground based GW detectors in the next decade. LSST will become particularly crucial in the later half of the 2020s as more and more distant GW sources are detected. Lastly, we highlight some of the key science goals that can be addressed by a large sample of joint GW-EM detections. (arXiv:1904.02718v1 [astro-ph.HE], 4 Apr 2019)
Principal Author: Philip S. Cowperthwaite (Carnegie Observatories)

Co-authors: Hsin-Yu Chen (Harvard/BHI), Ben Margalit (Columbia), Raffaella Margutti (Northwestern), Morgan May (Columbia), Brian Metzger (Columbia), and Chris Pankow (Northwestern).
1 Introduction

The direct detection of gravitational waves (GW) from astrophysical sources has ushered in an exciting new era of astronomy. The GW data from sources such as the inspiral and merger of compact object binaries comprised of black holes (BH) and/or neutron stars (NS) provide unparalleled insight into the component masses and spins, binary parameters (e.g., inclination, eccentricity), and the strong gravity regime of general relativity. However, the true power of GW detections becomes apparent when combined with traditional electromagnetic (EM) observations. The discovery of an EM counterpart provides improved localization and distance information, host galaxy identification and demographics, breaks modeling degeneracies (e.g., distance vs. inclination), and allows modeling of the merger hydrodynamics.

The first such joint detection, GW170817 [7], was an exceptional event. It remains the loudest signal detected by GW instruments from the first two observing runs. Its proximity (40 Mpc) allowed not only rapid counterpart identification and a bevy of electromagnetic observations [8], but also a number of joint GW-EM studies. The GW data provided the first measurements of NS tidal deformation and subsequent equation of state constraints [10], constraints on non-linear tides in NS [12], constraints on the formation of the progenitor system [9], and new tests of general relativity [72]. Joint GW-EM studies provided an independent measurement of the Hubble constant [5], placed complementary constraints on the NS equation of state [50,16,61,25] and the fate of the post-merger object [54], and prompted reanalysis of and questions about the distribution and formation of binary neutron stars in the galaxy [56,34].
Building upon the tremendous success of GW170817 requires continued support for ongoing improvements to both the network of GW detectors and next-generation EM facilities. As the network of ground-based GW detectors continues to grow more sensitive (Section 2), wide-field optical telescopes such as the Large Synoptic Survey Telescope [LSST; 42] will be crucial for detecting distant EM counterparts. In this white paper we discuss the landscape of joint GW-EM science in the next decade, with a focus on target-of-opportunity follow-up with LSST to identify a large sample of EM counterparts (Section 3), and briefly introduce some of the major science questions such a sample can help address (Section 4).

2 Gravitational-Wave Detectors in the Next Decade

Gravitational-wave networks are expected to both expand in number and increase in sensitivity over the next 5-10 years [11]. The first two observing runs included both of the LIGO interferometers [49,3] as well as the French-Italian Virgo interferometer [13]. The third observing run will likely see the inclusion of a fourth site in Japan [KAGRA; 44], and a fifth site in India [LIGO-India; 41] is in the planning and construction stages with a 2024 operations date. By the middle of the decade, a five-site, worldwide network will be operational, with a binary neutron star (BNS) merger detection horizon of approximately 400 Mpc. Current observational estimates of merger event rates for these systems range over 100-4000 Gpc^-3 yr^-1 [46,59,71]. Taking the median of current estimates and the detection range quoted above, a year of GW network operations would yield a few tens of events. As sensitivity improves, BNS localizations are expected to shrink by at least an order of magnitude, from hundreds to tens of square degrees on the sky [4,33,57,32].
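The "few tens of events" figure can be checked with a back-of-envelope calculation. Note the assumptions here: the quoted 400 Mpc is an optimally oriented horizon, which is commonly converted to a sky- and orientation-averaged range by the standard factor of about 2.26, and a median rate of roughly 1000 Gpc^-3 yr^-1 is assumed; neither number is taken from this paper's text beyond the quoted ranges.

```python
import math

# Back-of-envelope check of the "few tens of events per year" estimate.
# Assumptions (not from the paper itself): the sky/orientation-averaged range
# is horizon / 2.26, and the median BNS rate is ~1000 Gpc^-3 yr^-1
# (within the quoted 100-4000 range).

rate = 1000.0          # BNS merger rate, Gpc^-3 yr^-1 (assumed median)
d_horizon = 0.4        # detection horizon, Gpc (400 Mpc)
d_range = d_horizon / 2.26

n_per_year = rate * (4.0 / 3.0) * math.pi * d_range**3
assert 10 < n_per_year < 100   # "a few tens of events" per year
```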
Plans for going beyond the design sensitivity of second-generation instruments include the A+ upgrade [15] for the LIGO instruments, as well as proposed third-generation instruments such as LIGO Voyager and the Einstein Telescope [60,6]. The surveyed volumes for these instruments are at the Gpc^3 scale, and will allow for hundreds to thousands of BNS detections. This sample will provide exquisitely precise measurements of the GW emission across the entire accessible bandwidth, as well as commensurate contributions to our understanding of a variety of fundamental open questions [66,73].

3 Targeted Follow-Up of GW Events with LSST

While GW170817 was detected with relative ease due to its small sky localization (∼28 deg^2) and luminosity distance (D_L ∼ 40 Mpc), future detections may not be so straightforward. This is particularly true as the sensitivity horizon of GW detectors pushes out to hundreds of Mpc, or Gpc scales in the case of NS-BH mergers. In this regime, LSST is the instrument of choice due to its unmatched combination of wide field of view (9.6 deg^2), depth, and speed (g ∼ 25 mag in 30 s). Thus, LSST will be able to efficiently observe GW localization regions [20-80 deg^2; 22,57] and rapidly identify EM counterparts. The operations timescale for LSST is well matched to this goal, with full science observations expected to begin in 2022 and continue for 10 years. The present LSST operations plan sets aside 5% of survey time for mini-surveys that require a special cadence. It is imperative that some fraction of this allocation go to target-of-opportunity observations in response to GW events. Such triggered observations are essential to ensure a large sample of joint GW-EM detections, as the primary survey cadence is insufficient for the efficient detection of rapidly evolving transients [e.g., 28,68,51].
In addition to growing the sample of joint GW-EM detections, targeted observations with LSST have additional benefits, such as the potential for detecting the optical counterpart at very early times. Such observations can provide key insights into the nature of the early-time emission [e.g., 52,58]. This is particularly interesting in cases where strong BNS signals provide an "early warning" [19,20], potentially allowing observations to begin before the final merger occurs [14]. One challenge facing these deep follow-up observations is the presence of a background of transients unrelated to the GW event, particularly as observations increase in depth. However, these effects can be mitigated thanks to the unique temporal and spectral behavior of kilonovae [26,28,29].

4 Examples of Joint GW-EM Science

The field of joint GW-EM astronomy is rich with opportunity to study the Universe in new and exciting ways previously inaccessible to studies using just traditional EM observations. Here we briefly discuss just a fraction of these exciting possibilities.

4.1 The Growth of Joint GW-EM Cosmology

The tension between local measurements of H_0 and CMB measurements by Planck is now at 3.8 σ (99.9%) [64], due to an improvement in the distance ladder using Gaia parallax and HST photometry. In light of this increasing tension, an independent measurement of H_0 is desirable as a potential resolution. Such an independent measurement is made possible by joint GW-EM science, whereby the GW signal provides a direct measurement of the luminosity distance of the source and the EM counterpart provides a unique determination of the host galaxy and thus the redshift of the source. Combining these two pieces of information, one can measure cosmological parameters, including H_0 [67,38]. The value of H_0 measured in this fashion is completely independent of other cosmological experiments, with its own unique systematics.
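The standard-siren idea just described can be made concrete with a toy Monte Carlo. All numbers below are assumptions for illustration (a flat redshift draw, a 10% fractional distance error per event), not the GW170817 analysis: each event yields H_0 ≈ cz/d_L, and averaging N events shrinks the uncertainty roughly as 1/√N.

```python
import math, random

# Toy Monte Carlo for the standard-siren H0 measurement (illustrative only;
# the event redshifts and the 10% distance scatter are assumptions):
# each event gives a GW luminosity distance and an EM redshift, so
# H0 ~ c*z/d_L per event, and N events shrink the scatter as ~1/sqrt(N).

random.seed(42)
C_KM_S = 2.99792458e5          # speed of light, km/s
H0_TRUE = 70.0                 # assumed true value, km/s/Mpc
SIGMA_D = 0.10                 # assumed fractional distance error per event

def h0_estimate(n_events):
    estimates = []
    for _ in range(n_events):
        z = random.uniform(0.01, 0.05)          # nearby, Hubble-flow regime
        d_true = C_KM_S * z / H0_TRUE           # Mpc
        d_obs = d_true * (1.0 + random.gauss(0.0, SIGMA_D))
        estimates.append(C_KM_S * z / d_obs)
    return sum(estimates) / n_events

def scatter(n_events, trials=300):
    vals = [h0_estimate(n_events) for _ in range(trials)]
    mean = sum(vals) / trials
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / trials)

# Uncertainty falls roughly as 1/sqrt(N): ~100 events approach percent level.
assert scatter(100) < 0.5 * scatter(10)
```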
The first H_0 measurement using this technique was made following the detection of GW170817 and its associated optical counterpart [5, 37]. As more joint GW-EM detections are made, the precision of our H_0 measurement will continue to increase significantly (Figure 2). It is expected that within the decade we will have a sufficient sample size to make a percent-level determination of H_0, which can potentially resolve the current tension [21]. In addition to BNS mergers, NS-BH mergers provide another potential candidate for GW-EM H_0 measurements. The precession of NS-BH binaries allows for a more accurate determination of the luminosity distance, and therefore an even more precise H_0 measurement, assuming that an EM counterpart can be identified [74]. As the sensitivity of GW detectors continues to improve, GW sources at higher redshifts may eventually provide the leverage necessary to constrain other cosmological parameters through measurements of the distance-redshift relation beyond the local volume.

Unveiling the Sites of Cosmic r-process Production

Roughly half of the elements heavier than the iron group are produced by the rapid capture of neutrons onto lighter seed nuclei [the so-called r-process; e.g., 18]. However, the astrophysical site or sites where the r-process takes place have long been debated, with core-collapse supernovae [e.g., 75] and binary NS mergers [48] being the traditionally favored contenders. The kilonova transient following GW170817 was very likely powered by the radioactive decay of freshly synthesized r-process nuclei, providing the first direct evidence for its origin. Furthermore, given the large quantity of r-process material (at least several percent of a solar mass) inferred from the light curve modeling, and the volumetric rate of BNS mergers implied by Advanced LIGO, one is already led to conclude that NS mergers are major, if not dominant, r-process sources in the Galaxy [e.g., 27, 31].
Nevertheless, given just a single event, it is not known whether the r-process yield of GW170817 was in fact "typical". Furthermore, other astrophysical events [e.g., collapsars; 70] may contribute to the r-process. A future sample of kilonovae discovered by LSST following triggers from LIGO/Virgo will provide much better statistics on the diversity of r-process production from NS mergers.

New Constraints on the Neutron Star Equation of State

Despite significant efforts, the equation of state (EOS) of dense neutron-rich matter is fundamentally unknown. This property is key to understanding the physics of matter at extreme densities and the properties of NSs. The EOS is also essential input to numerical simulations of NS mergers and core-collapse SNe. However, most properties of the EOS are inaccessible to terrestrial laboratory experiments and currently incalculable from first-principles QCD. As such, astrophysical observations of NSs remain the most promising avenue for probing the regime of dense matter. The GW signal of binary NS mergers alone can provide important information about the radii of NSs [7, 30, 62, 55, 10], a proxy for the pressure at ∼twice nuclear saturation density [47]; however, probing the EOS at even higher densities in the NS core remains difficult [though see, e.g., 17]. Joint GW-EM observations of NS mergers (particularly the kilonova and gamma-ray burst emission) offer a fresh opportunity to probe this regime by using the electromagnetic signature to infer the fate of the merger remnant, e.g., when and whether a black hole forms. This approach was applied to GW170817, leading to stringent new constraints on the maximum mass of (cold, non-rotating) NSs [50, 69, 63, 65] as well as lower limits on the NS radius [16, 61, 25]. A larger statistical sample of joint GW-EM binary NS merger observations will further strengthen these constraints and provide invaluable insight into the behavior of matter at the highest densities.
Figure 2 shows the predicted constraints (upper/lower limits) on the maximum NS mass, M_TOV, as a function of the number of well-characterized joint GW-EM binary NS merger detections, simulated for a binary NS mass distribution drawn from the Galactic population. Future LSST discoveries of GW events, accompanied by dedicated ground- or space-based EM follow-up efforts to ascertain the remnant fates, thus represent a powerful new probe for constraining the NS EOS and the only current method that can provide an upper limit on M_TOV. In addition to NS-NS mergers, the EM follow-up of mergers of BH-NS binaries can provide constraints on the NS radius and the properties of the BH complementary to those obtained through the GW signal alone. This is driven by the fact that the quantity of ejected mass (which can be measured from the kilonova emission) in a BH-NS merger depends sensitively on the NS radius and the size of the BH event horizon [e.g., 35]. The presence or absence of a luminous EM counterpart in a population of BH-NS mergers, in addition to providing constraints on the NS radius given the GW-inferred mass and spin of the BH, is sensitive to whether the BH event horizon has the properties predicted by General Relativity.

Additional Science Drivers

In this white paper, we have touched on only a small fraction of the GW-EM science possible in the next decade. In reality, the potential for new discoveries is vast. This includes as-of-yet undetected GW sources, such as Local Group supernovae [2] and neutron star-black hole mergers [1, 71], which may also begin to emerge. In both cases, GW detectors provide complementary facilities to study complex physical phenomena such as accretion disks, jet physics, and the collapse of massive stars. Additionally, binary black hole (BBH) mergers are expected to form the majority (up to hundreds per year) of GW event catalogs.
While not anticipated to be EM active, tentative links to gamma-ray emission have been made [24], and further detections will solidify or rule out these claims, as well as constrain other models. This broad range of science possibilities means that LSST should remain available and flexible enough to respond to unexpected triggers.

Outlook For The Next Decade

We have presented a wide range of possible science goals using joint GW-EM astronomy that are achievable in the next decade. The common thread between all of these goals is that they require the dedicated allocation of resources. Specifically, they all require a large sample of joint GW-EM detections that enable deep statistical studies of both GW and EM sources. As outlined in Section 3, the use of GW-triggered target-of-opportunity observations with LSST is the most efficient and effective approach to building up such a sample. We therefore reiterate our recommendation that significant effort and telescope time be dedicated to this task. This includes an appropriate fraction of LSST survey time, along with the development of rapid-response data brokers fine-tuned for GW follow-up. Ideally, LSST should also strive to interact with other telescopes [e.g., ELTs; 23, 36] to ensure that we obtain rich data sets spanning the entire EM spectrum. Data from these programs should be made promptly available to the entire community. Joint GW-EM astronomy has the potential to truly revolutionize the way in which we observe and understand the Universe, but only with the proper allocation of resources and opportunities.

Figure 2: Left Panel: Hubble constant precision as a function of the number of joint GW-EM detections.
Right Panel: Constraints on the maximum neutron star mass, M_TOV (here assumed to be M_TOV = 2.1 M_⊙), as a function of the number of joint GW-EM detections.

The colors and spectra of the kilonovae provide key information on the detailed composition of the ejecta [e.g., 45], particularly the presence or absence of heavy lanthanide elements (atomic mass number A ≳ 140), which may vary from event to event depending on the neutron richness (electron fraction) of the merger ejecta [e.g., 53]. The abundance patterns of individual r-process "pollution events" are measured in the spectra of metal-poor stars in the Galactic halo [e.g., 39] or in dwarf galaxies orbiting the Milky Way [e.g., 43]. Searching for similar diversity in the r-process signatures of kilonovae would help determine whether binary NS mergers were present and contributing to chemical enrichment in the early universe [e.g., 40].

References

Abadie, J., Abbott, B. P., Abbott, R., et al. 2010, Classical and Quantum Gravity, 27, 173001
Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016, Phys. Rev. D, 94, 102001
-. 2016, Phys. Rev. Lett., 116, 131103
-. 2016, Living Reviews in Relativity, 19, 1
-. 2017, Nature, 551, 85
-. 2017, Classical and Quantum Gravity, 34, 044001
-. 2017, Phys. Rev. Lett., 119, 161101
-. 2017, ApJ, 848, L12
-. 2017, ApJ, 850, L40
-. 2018, Phys. Rev. Lett., 121, 161101
-. 2018, Living Reviews in Relativity, 21, 3
-. 2019, Phys. Rev. Lett., 122, 061104
Accadia, T., Acernese, F., Alshourbagy, M., et al. 2012, Journal of Instrumentation, 7, 3012
Arcavi, I. 2018, ApJ, 855, L23
Barsotti, L., McCuller, L., Evans, M., et al. 2018, The A+ Design Curve, Tech. rep., LIGO
Bauswein, A., Just, O., Janka, H.-T., et al. 2017, ApJ, 850, L34
Bauswein, A., Stergioulas, N., & Janka, H.-T. 2014, Phys. Rev. D, 90, 023002
Burbidge, E. M., Burbidge, G. R., Fowler, W. A., et al. 1957, Reviews of Modern Physics, 29, 547
Cannon, K., Cariou, R., Chapman, A., et al. 2012, ApJ, 748, 136
Chan, M. L., Messenger, C., Heng, I. S., et al. 2018, Phys. Rev. D, 97, 123014
Chen, H.-Y., Fishbach, M., & Holz, D. E. 2018, Nature, 562, 545
Chen, H.-Y., Holz, D. E., Miller, J., et al. 2017, arXiv e-prints, arXiv:1709.08079
Chornock, R., Cowperthwaite, P. S., Margutti, R., et al. 2019, Astro2020 White Paper: "Multi-Messenger Astronomy with Extremely Large Telescopes"
Connaughton, V., Burns, E., Goldstein, A., et al. 2016, ApJ, 826, L6
Coughlin, M. W., Dietrich, T., Margalit, B., et al. 2018, arXiv e-prints, arXiv:1812.04803 [astro-ph.HE]
Cowperthwaite, P. S., & Berger, E. 2015, ApJ, 814, 25
Cowperthwaite, P. S., Berger, E., Villar, V. A., et al. 2017, ApJ, 848, L17
Cowperthwaite, P. S., Villar, V. A., Scolnic, D. M., et al. 2018, arXiv e-prints, arXiv:1811.03098
Cowperthwaite, P. S., Berger, E., Rest, A., et al. 2018, ApJ, 858, 18
De, S., Finstad, D., Lattimer, J. M., et al. 2018, Phys. Rev. Lett., 121, 091102
Drout, M. R., Piro, A. L., Shappee, B. J., et al. 2017, Science, 358, 1570
Fairhurst, S. 2018, Classical and Quantum Gravity, 35, 105002
Farr, B., Berry, C. P. L., Farr, W. M., et al. 2016, ApJ, 825, 116
Farrow, N., Zhu, X.-J., & Thrane, E. 2019, arXiv e-prints, arXiv:1902.03300
Foucart, F. 2012, Phys. Rev. D, 86, 124007
Graham, M. L., Milisavljevic, D., Rest, A., et al. 2019, Astro2020 White Paper: "Discovery Frontiers of Explosive Transients: An ELT & LSST Perspectives"
Guidorzi, C., Margutti, R., Brout, D., et al. 2017, ApJ, 851, L36
Holz, D. E., & Hughes, S. A. 2005, ApJ, 629, 15
Honda, S., Aoki, W., Ishimaru, Y., et al. 2006, ApJ, 643, 1180
Horowitz, C. J., Arcones, A., Côté, B., et al. 2018, arXiv e-prints, arXiv:1805.04637 [astro-ph.SR]
IndIGO Collaboration. 2011, LIGO-India: Proposal of the Consortium for Indian Initiative in Gravitational-wave Observations (IndIGO), Tech. Rep. LIGO-M1100296-v2, https://dcc.ligo.org/cgi-bin/DocDB/ShowDocument?docid=75988
Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2008, arXiv e-prints, arXiv:0805.2366
Ji, A. P., Frebel, A., Chiti, A., et al. 2016, Nature, 531, 610
Kagra Collaboration, Akutsu, T., Ando, M., et al. 2019, Nature Astronomy, 3, 35
Kasen, D., Metzger, B., Barnes, J., et al. 2017, Nature, 551, 80
Kim, C., Perera, B. B. P., & McLaughlin, M. A. 2015, MNRAS, 448, 928
Lattimer, J. M., & Prakash, M. 2001, ApJ, 550, 426
Lattimer, J. M., & Schramm, D. N. 1974, ApJ, 192, L145
LIGO Scientific Collaboration, Aasi, J., Abbott, B. P., et al. 2015, Classical and Quantum Gravity, 32, 074001
Margalit, B., & Metzger, B. D. 2017, ApJ, 850, L19
Margutti, R., Cowperthwaite, P., Doctor, Z., et al. 2018, arXiv e-prints, arXiv:1812.04051
Metzger, B. D., Bauswein, A., Goriely, S., et al. 2015, MNRAS, 446, 1115
Metzger, B. D., & Fernández, R. 2014, MNRAS, 441, 3444
Metzger, B. D., Thompson, T. A., & Quataert, E. 2018, ApJ, 856, 101
Most, E. R., Weih, L. R., Rezzolla, L., et al. 2018, Phys. Rev. Lett., 120, 261103
Pankow, C. 2018, ApJ, 866, 60
Pankow, C., Chase, E. A., Coughlin, S., et al. 2018, ApJ, 854, L25
Piro, A. L., & Kollmeier, J. A. 2018, ApJ, 855, 103
Pol, N., McLaughlin, M., & Lorimer, D. R. 2019, ApJ, 870, 71
Punturo, M., Abernathy, M., Acernese, F., et al. 2010, Classical and Quantum Gravity, 27, 194002
Radice, D., Perego, A., Zappa, F., et al. 2018, ApJ, 852, L29
Raithel, C. A., Özel, F., & Psaltis, D. 2018, ApJ, 857, L23
Rezzolla, L., Most, E. R., & Weih, L. R. 2018, ApJ, 852, L25
Riess, A. G., Casertano, S., Yuan, W., et al. 2018, ApJ, 861, 126
Ruiz, M., Shapiro, S. L., & Tsokaros, A. 2018, Phys. Rev. D, 97, 021501
Sathyaprakash, B., Abernathy, M., Acernese, F., et al. 2012, Classical and Quantum Gravity, 29, 124013
Schutz, B. F. 1986, Nature, 323, 310
Scolnic, D., Kessler, R., Brout, D., et al. 2018, ApJ, 852, L3
Shibata, M., Fujibayashi, S., Hotokezaka, K., et al. 2017, Phys. Rev. D, 96, 123012
Siegel, D. M., Barnes, J., & Metzger, B. D. 2018, arXiv e-prints, arXiv:1810.00098 [astro-ph.HE]
The LIGO Scientific Collaboration, the Virgo Collaboration, Abbott, B. P., et al. 2018, arXiv e-prints, arXiv:1811.12907
Van Den Broeck, C. 2014, Journal of Physics: Conference Series, 484, 012008
Vitale, S., & Chen, H.-Y. 2018, Phys. Rev. Lett., 121, 021303
Woosley, S. E., Wilson, J. R., Mathews, G. J., et al. 1994, ApJ, 433, 229
Genetic Algorithm based Multi-Objective Optimization of Solidification in Die Casting using Deep Neural Network as Surrogate Model

Shantanu Shahane, Narayana Aluru, Placid Ferreira, Shiv G Kapoor, Surya Pratap Vanka
Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801

arXiv:1901.02364v1 [cs.CE] 8 Jan 2019

Keywords: Deep Neural Network, Multi-Objective Optimization, Genetic Algorithm, Die Casting, Solidification

Abstract

In this paper, a novel strategy for multi-objective optimization of die casting is presented. The cooling of molten metal inside the mold is achieved by passing a coolant, typically water, through the cooling lines in the die. Depending on the cooling line locations, coolant flow rate and die geometry, nonuniform temperatures are imposed on the molten metal at the mold wall. This boundary condition, along with the initial molten metal temperature, affects the product quality, quantified in terms of micro-structure parameters and yield strength. A finite volume based numerical solver is used to correlate the inputs to the outputs. The objective of this research is to estimate the initial and wall temperatures so as to optimize the product quality. The non-dominated sorting genetic algorithm (NSGA-II), which is popular for solving multi-objective optimization problems, is used. The number of function evaluations required for NSGA-II can be of the order of millions and hence, the finite volume solver cannot be used directly for optimization.
Thus, a neural network trained using the results from the numerical solver is used as a surrogate model. Simplified versions of the actual problem are designed to verify the results of the genetic algorithm. An innovative local sensitivity based approach is used to rank the final Pareto optimal solutions and choose a single best design.

Introduction

Die casting is one of the popular manufacturing processes in the industry, in which liquid metal is injected into a permanent metal mold and solidified. Generally, die casting is used for parts made of aluminum and magnesium alloys with steel molds. Automotive and housing industrial sectors are common consumers of die casting. In such a complex process, there are several input parameters which affect the final product quality and process efficiency. With the advances in computing hardware and software in recent years, the physics of these processes can be modeled using numerical simulation techniques. Detailed flow and temperature histories, micro-structure parameters, mechanical strength, etc. can be estimated from these simulations. In today's competitive industrial world, estimating the values of input parameters for which the product quality is optimized has become really important. There has been extensive research on numerical optimization algorithms which can be coupled with the simulations in order to handle complex optimization problems. Solidification in the casting process has been studied by many researchers. Minaie et al. [1] have analyzed metal flow during die filling and solidification in a two dimensional rectangular cavity. The flow pattern during the filling stage is predicted using the volume of fluid (VOF) method, and the enthalpy equation is used to model the phase change with convection and diffusion inside the cavity. They have studied the effect of gate location on the residual flow field after filling and the solid-liquid interface during solidification.
Im et al. [2] have performed a combined filling and solidification analysis in a square cavity using an implicit filling algorithm with the modified VOF method together with the enthalpy formulation. Recently, there has been a growing interest in the numerical optimization of various engineering systems. Poloni et al. [5] applied a neural network with a multi-objective genetic algorithm and a gradient based optimizer to the design of a sailing yacht fin. The geometry of the fin was parameterized using Bezier polynomials. The lift and drag on the fin were optimized as functions of the Bezier parameters and thus, an optimal fin geometry was designed. Elsayed and Lacor [6] performed a multi-objective optimization of a gas cyclone, which is a device used as a gas-solid separator. They trained a radial basis function neural network (RBFNN) to correlate the geometric parameters like diameters and heights of the cyclone funnel to the performance efficiency and the pressure drop using data from numerical simulations. They further used the non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto front of cyclone designs. Wang et al. [7] optimized the groove profile to improve hydrodynamic lubrication performance in order to reduce the coefficient of friction and the temperature rise of the specimen. They coupled the genetic algorithm (GA) with the sequential quadratic programming (SQP) algorithm such that the GA solutions were provided as initial points to the SQP. Stavrakakis et al. [8] solved for window sizes for optimal thermal comfort and indoor air quality in naturally ventilated buildings. A computational fluid dynamics model was used to simulate the air flow in and around the buildings and to generate data for training and testing of a RBFNN, which was further used for constrained optimization using the SQP algorithm.
Wei and Joshi [9] modeled the thermal resistance of a micro-channel heat exchanger for electronic cooling using a simplified thermal resistance network model. They used a genetic algorithm to obtain optimal geometry of the heat exchanger so as to minimize the thermal resistance subject to constraints of maximum pressure drop and volumetric flow rate. Husain and Kim [10] optimized the thermal resistance and pumping power of a micro-channel heat sink as a function of geometric parameters of the channel. They used a three dimensional finite volume solver to solve the fluid flow equations and generate training data for surrogate models. They used multiple surrogate models like response surface approximations, Kriging and RBFNN. They provided the solutions obtained from the NSGA-II algorithm to SQP as initial guesses. Lohan et al. [11] performed a topology optimization to maximize the heat transfer through a heat sink with dendritic geometry. They used a space colonization algorithm to generate topological patterns with a genetic algorithm for optimization. Amanifard et al. [12] solved an optimization problem to minimize the pressure drop and maximize the Nusselt number with respect to the geometric parameters and Reynolds number for micro-channels. They used a group method of data handling type neural network as a surrogate model with the NSGA-II algorithm for optimization. Esparza et al. [13] optimized the design of a gating system used for gravity filling a casting so as to minimize the gate velocity. They used a commercial program FLOW3D to estimate the gate velocity as a function of runner depth and tail slope and the SQP method for optimization. This paper analyzes the heat transfer and solidification in die casting of a practical geometry. The energy equation coupled with the solid fraction temperature relation are solved using a finite volume numerical method. 
The product quality is assessed using grain size and yield strength, which are estimated using empirical relations. The solidification time is used to quantify the process efficiency. The molten metal and mold wall temperatures are crucial in determining the quality of die casting. The wall temperature is typically nonuniform due to the complex mold geometries and asymmetric placement of cooling lines. This nonuniformity is modeled by domain decomposition of the wall and assignment of a single temperature value to each domain. Neural networks are trained using the data generated from the simulations to correlate the initial and wall temperatures to the output parameters like solidification time, grain size and yield strength. The optimization problem formulated with these three objectives is solved using a genetic algorithm.

Numerical Model Description

The numerical model incorporates the effects of solidification and heat transfer in die casting. Since common die casting geometries have thin cross-sections, the solidification time is of the order of seconds and hence, the effect of natural convection is negligible. Thus, the momentum equations of the liquid metal are not solved in this work. The energy equation, written in terms of temperature, has unsteady, diffusion and latent heat terms:

$$\rho C_p \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho L_f \frac{\partial f_s}{\partial t} \tag{1}$$

where $T$ is temperature, $\rho$ is density, $C_p$ is specific heat, $k$ is thermal conductivity, $L_f$ is latent heat of fusion, $f_s$ is solid fraction and $t$ is time. The Gulliver-Scheil equation (2) [14] relates solid fraction to temperature for a binary alloy:

$$f_s(T) = \begin{cases} 0 & \text{if } T > T_{liq} \\ 1 & \text{if } T < T_{sol} \\ 1 - \left( \dfrac{T - T_f}{T_{liq} - T_f} \right)^{\frac{1}{k_p - 1}} & \text{otherwise} \end{cases} \tag{2}$$

where $k_p$ is the partition coefficient, $T_f$ is the freezing temperature, $T_{sol}$ is the solidus temperature and $T_{liq}$ is the liquidus temperature. The following empirical relations link the cooling rate to the SDAS and yield strength.
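The piecewise Gulliver-Scheil relation of Eq. (2) translates directly into a short function. The alloy parameters below (freezing, liquidus and solidus temperatures and the partition coefficient) are illustrative values chosen for this sketch, not the ones used in OpenCast.

```python
# Solid fraction vs. temperature from the Gulliver-Scheil relation,
# Eq. (2).  All material parameters are assumed, for illustration only.
T_F = 933.0    # freezing temperature of the pure solvent, K (assumed)
T_LIQ = 887.0  # liquidus temperature, K (assumed)
T_SOL = 830.0  # solidus temperature, K (assumed)
K_P = 0.13     # partition coefficient (assumed)

def solid_fraction(T):
    """Gulliver-Scheil solid fraction f_s(T) for a binary alloy."""
    if T > T_LIQ:
        return 0.0            # fully liquid above the liquidus
    if T < T_SOL:
        return 1.0            # fully solid below the solidus
    return 1.0 - ((T - T_F) / (T_LIQ - T_F)) ** (1.0 / (K_P - 1.0))
```

Note that f_s rises monotonically as T falls from the liquidus, and the cutoff at T_sol models the remaining liquid freezing at the eutectic; the derivative ∂f_s/∂t needed in Eq. (1) follows by the chain rule from this relation.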
$$\text{SDAS} = \lambda_2 = A_\lambda \left( \frac{\partial T}{\partial t} \right)^{B_\lambda} \quad [\text{in } \mu\text{m}] \tag{3}$$

where $A_\lambda = 44.6$ and $B_\lambda = -0.359$ are based on the model for microstructure in aluminum alloys [15].

$$\sigma_{0.2} = A_\sigma \lambda_2^{-1/2} + B_\sigma \tag{4}$$

where $\sigma_{0.2}$ is in MPa, $\lambda_2$ (SDAS) is in $\mu$m, $A_\sigma = 59.0$ and $B_\sigma = 120.3$ [16]. Grain size estimation is based on the work of Greer et al. [17]. The grain growth rate is given by:

$$\frac{dr}{dt} = \frac{\lambda_s^2 D_s}{2r} \tag{5}$$

where $r$ is the grain size, $D_s$ is the solute diffusion coefficient in the liquid and $t$ is the time. The parameter $\lambda_s$ is obtained using the invariant size approximation:

$$\lambda_s = \frac{-S}{2\sqrt{\pi}} + \sqrt{\frac{S^2}{4\pi} - S} \tag{6}$$

where $S$ is given by

$$S = \frac{2(C_s - C_0)}{C_s - C_l} \tag{7}$$

Here, $C_l = C_0 (1 - f_s)^{(k_p - 1)}$ is the solute content in the liquid, $C_s = k_p C_l$ is the solute content in the solid at the solid-liquid interface and $C_0$ is the nominal solute concentration. Hence, from the partition coefficient ($k_p$) and the estimated solid fraction ($f_s$), eqs. (5) to (7) are solved to get the final grain size. Equations (1) to (7) are solved numerically using the software OpenCast [18] with the finite volume method on a collocated grid. The variation of thermal conductivity, density and specific heat with temperature is taken into account. Practical die casting geometries are handled with unstructured grids. The tetrahedral mesh generated by GMSH [19] is subdivided into a hexahedral mesh using TETHEX [20]. The details of the numerical algorithm and verification and validation results of OpenCast are discussed in previous publications [18, 21]. The clamp geometry whose solidification was studied by Shahane et al. [18] is chosen here for optimization. Figure 1 shows the clamp geometry with a mesh having 334k hexahedral elements. It is important to assess the effects of natural convection. Hence, the clamp geometry is simulated for two cases, viz. with and without natural convection, using OpenCast.
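The empirical chain from cooling rate to strength, Eqs. (3)-(4), is easy to exercise numerically with the coefficients quoted above. For the grain growth ODE of Eq. (5), if λ_s and D_s are momentarily held constant (an approximation: in the solver they evolve with the solid fraction through Eqs. (6)-(7)), separation of variables gives the closed form r(t) = sqrt(r_0^2 + λ_s^2 D_s t), which is used below. The 10 K/s cooling rate is an assumed example value.

```python
import math

# Empirical microstructure relations, Eqs. (3)-(4), with the
# coefficients quoted in the text for aluminum alloys.
A_LAM, B_LAM = 44.6, -0.359   # SDAS model, Eq. (3)
A_SIG, B_SIG = 59.0, 120.3    # yield-strength model, Eq. (4)

def sdas_um(cooling_rate_K_per_s):
    """Secondary dendrite arm spacing lambda_2 in microns, Eq. (3)."""
    return A_LAM * cooling_rate_K_per_s ** B_LAM

def yield_strength_MPa(lambda2_um):
    """0.2% yield strength in MPa, Eq. (4): finer SDAS -> stronger part."""
    return A_SIG * lambda2_um ** -0.5 + B_SIG

def grain_radius(t, r0, lambda_s, D_s):
    """Closed-form solution of dr/dt = lambda_s^2 D_s / (2r), Eq. (5),
    under the constant-coefficient approximation."""
    return math.sqrt(r0 ** 2 + lambda_s ** 2 * D_s * t)

lam2 = sdas_um(10.0)   # example: 10 K/s cooling rate (assumed)
print(f"SDAS ~ {lam2:.1f} um, yield ~ {yield_strength_MPa(lam2):.0f} MPa")
```

This reproduces the qualitative trend exploited by the optimization: faster cooling gives smaller SDAS and hence higher yield strength.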
Heat is extracted from the cavity walls by coolant (typically water) flowing through cooling lines inside the die. The quality of the finished product depends on the rate of heat extraction, which in turn depends on the temperature at the cavity walls. Due to the complexity of the die geometry, the wall temperature varies locally. Optimal product quality can be achieved if the temperature distribution on the cavity walls and the initial fill temperature are set properly. Thus, in this work, the following optimization problem with three objectives is proposed:

\text{Minimize } \{ f_1(T_{init}, T_{wall}),\ f_2(T_{init}, T_{wall}),\ f_3(T_{init}, T_{wall}) \} \quad \text{subject to } 900 \le T_{init} \le 1100 \text{ K and } 500 \le T_{wall} \le 700 \text{ K} \quad (8)

where f_1 = solidification time, f_2 = max(grain size), and f_3 = -min(yield strength). Minimizing the solidification time increases productivity. Reducing the grain size lowers the susceptibility to cracking [22] and improves the mechanical properties of the product [23]; thus, minimization of the maximum grain size over the entire geometry is set as an optimization objective. Higher yield strength is desirable as it increases the elastic limit of the material, so the minimum yield strength over the entire geometry is to be maximized. For convenience, this maximization is converted to minimization by multiplying by minus one, which explains the third objective f_3. All the objectives are functions of the initial molten metal temperature (T_{init}) and the mold wall temperature (T_{wall}). The initial temperature is a scalar in the interval [900, 1100] K, above the liquidus temperature of the alloy. As discussed before, the mold wall temperature need not be uniform in die casting due to locally varying heat transfer to the cooling lines.
Thus, in this work, the wall surface is decomposed into multiple domains, with each domain assigned a uniform temperature boundary condition held constant throughout solidification. If the die design, with cooling line placement and coolant flow conditions, were available, a thermal analysis of the die could identify these domains. In the absence of this information, the wall is decomposed into ten domains using the KMeans clustering algorithm from Scikit Learn [24]. Figure 4a shows the domain decomposition with ten domain tags, and fig. 4b shows a random sample of the temperature boundary condition with a single temperature value assigned uniformly to each domain. Thus, the input wall temperature (T_{wall}) is a ten-dimensional vector in the interval [500, 700] K, below the solidus temperature of the alloy. Hence, this is a multi-objective optimization problem with three minimization objectives that are functions of eleven input temperatures. There are multiple strategies discussed in the literature for each of the above steps [25, 26]; a brief overview of the methods used in this work is given here. The population is initialized using the Latin hypercube sampling strategy from the python package pyDOE [27]. Elitism was found helpful as it ensures that the next generation is at least as good as the previous generation.

Multi-Objective Optimization

The simultaneous optimization of multiple objectives differs from the single objective problem. In a single objective problem, the best design, usually the global optimum (minimum or maximum), is searched for. For a multi-objective problem, there may not exist a single design that is optimal with respect to all the objectives simultaneously. This happens due to the conflicting nature of the objectives, i.e., improvement in one can cause deterioration of the others.
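The two sampling ingredients just described, clustering the wall into domains and Latin hypercube initialization, can be sketched with self-contained stand-ins: a minimal Lloyd-style k-means (in place of scikit-learn's KMeans) applied to placeholder wall-face centroids, and a small Latin hypercube sampler (in place of pyDOE) for the eleven-dimensional temperature box.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd iteration: assign points to nearest center, recenter."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def latin_hypercube(n, dim, lo, hi, seed=0):
    """n samples in [lo, hi]^dim with one sample per stratum per axis."""
    rng = np.random.default_rng(seed)
    strata = rng.permuted(np.tile(np.arange(n), (dim, 1)), axis=1).T
    u = (strata + rng.random((n, dim))) / n   # jitter within each stratum
    return lo + u * (hi - lo)
```

For example, `latin_hypercube(20, 11, lo, hi)` with `lo = [900, 500, ..., 500]` and `hi = [1100, 700, ..., 700]` draws twenty candidate (T_init, T_wall) vectors inside the bounds of eq. (8).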
Thus, there is typically a set of Pareto optimal solutions that are superior to the rest of the solutions in the design space, which are known as dominated solutions. All the Pareto optimal solutions are equally good, and none of them can be prioritized in the absence of further information. It is therefore useful to know multiple non-dominated (Pareto optimal) solutions so that a single solution can be chosen among them in light of other problem parameters. One possible way of dealing with multiple objectives is to define a single objective as a weighted sum of all the objectives; any single objective optimization algorithm can then be used, and the weight vector is varied to obtain different optimal solutions. The problem with this method is that the solution is sensitive to the weight vector, and choosing weights that yield multiple Pareto optimal solutions is difficult for a practical engineering problem. Multi-objective GAs handle all the objectives simultaneously, eliminating the need to choose a weight vector. Konak et al. [28] discuss various popular multi-objective GAs with their benefits and drawbacks. In this work, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [29], a fast and elitist version of the NSGA algorithm [30], is used. The NSGA-II steps to march from a given generation of population size N to the next generation of the same size are as follows:

1. Select parent pairs based on their previously computed rank (lower rank implies higher probability of selection)
2. Perform crossover to generate an offspring from each parent pair

Before iterating over the above steps, some pre-processing is required: a random population of size N is initiated, and steps 5-8 are applied to rank the initial generation. The parent selection, crossover, and mutation steps are identical to those of the single objective GA described in section 3.1.1.
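The dominance relation underlying the non-dominated sorting above can be sketched directly. Using the standard definition for minimization (no worse in every objective, strictly better in at least one), a pairwise filter recovers the non-dominated set:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(F):
    """Indices of non-dominated rows of an (n_designs, n_objectives) array."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```

This O(n^2) pairwise filter is only a sketch; NSGA-II's fast non-dominated sort achieves the same first front more efficiently and also ranks the remaining fronts.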
The algorithms for the remainder of the steps can be found in the paper by Deb et al. [29]. Ranking the population by front level and crowding distance enforces both elitism and diversity in the next generation.

Neural Network

The fitness evaluation step of the GA requires a way to estimate the outputs corresponding to a given set of inputs. Typically, the number of generations can be of the order of thousands with a population of several hundred per generation, so the total number of function evaluations can reach a hundred thousand or more. It is computationally prohibitive to run the full scale numerical simulation for each evaluation; instead, a surrogate model that is cheap to evaluate is trained. A separate neural network is trained for each of the three optimization objectives (eq. (8)). Hornik et al. [31] showed that, under mild assumptions on the function to be approximated, a neural network can achieve any desired level of accuracy by tuning the hyper-parameters. The building block of a neural network is known as a neuron, which takes multiple inputs and produces a single output by performing the following operations: Goodfellow et al. [33] describe neural networks in detail and give guidelines for the training procedure. For the second case, the initial temperature is held constant (T_init = 1000 K) and the boundary temperature is split into two domains instead of ten. Thus, again there are two scalar inputs: (T_wall^(1), T_wall^(2)). T_wall^(1) is assigned to domain numbers 1-5 and T_wall^(2) to domain numbers 6-10 (fig. 4a). The ranges of the wall and initial temperatures are the same as before (section 3). Such a simplified analysis gives insight into the actual problem. Moreover, since these are problems with two inputs, the optimization can be performed by a brute force parameter sweep and compared to the genetic algorithm; this helps to fine tune the parameters and assess the accuracy of the GA.
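The single objective GA of section 3.1.1, against which these parameter sweeps are compared, can be sketched as a short loop: tournament selection (pick four, keep the two fittest), uniform crossover, random mutation, and elitism. The rates here follow section 3.1.1 (0.9 crossover, 0.1 mutation), and the sphere function in the test is a stand-in fitness, not one of the paper's surrogates.

```python
import numpy as np

def ga_minimize(fitness, lo, hi, pop=30, gens=60, pc=0.9, pm=0.1, seed=0):
    """Minimal real-coded elitist GA for minimization over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    X = lo + rng.random((pop, lo.size)) * (hi - lo)
    for _ in range(gens):
        f = np.array([fitness(x) for x in X])
        elite = X[f.argmin()].copy()               # elitism: keep best as-is
        children = []
        while len(children) < pop - 1:
            i = rng.choice(pop, 4, replace=False)  # tournament of four
            p1, p2 = X[i[np.argsort(f[i])[:2]]]    # keep the two fittest
            mask = rng.random(lo.size) < 0.5       # uniform crossover mask
            child = np.where(mask, p1, p2) if rng.random() < pc else p1.copy()
            mut = rng.random(lo.size) < pm         # random gene mutation
            child[mut] = lo[mut] + rng.random(mut.sum()) * (hi - lo)[mut]
            children.append(child)
        X = np.vstack([elite] + children)
    f = np.array([fitness(x) for x in X])
    return X[f.argmin()], float(f.min())
```

On a smooth two-input problem like the sweeps above, a few dozen generations with a small population already land close to the optimum, which matches the GA settings reported later in the text.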
Figure 10 shows similar plots for the second case of constant initial temperature and split boundary temperature. Figure 10c shows the effect of the nonuniform boundary temperature: the minimum is attained at wall temperatures of 500 K and 700 K, since the local gradients and cooling rates vary due to the asymmetry in the geometry. This analysis demonstrates the utility of optimizing with respect to nonuniform mold wall temperatures. S_p is called the Pareto optimal or non-dominated set, whereas S_d is called the non-Pareto optimal or dominated set [37]. Since the designs in the Pareto optimal set are non-dominated with respect to each other, they are all equally good, and some additional information about the problem is required to make a unique choice among them; it is therefore useful to have a list of multiple Pareto optimal solutions. Another way to interpret the Pareto optimal solutions is that any improvement in one objective will worsen at least one other objective, resulting in a trade-off [38]. Figure 15 plots the Pareto fronts. As discussed before, some additional problem information is required to choose a single design out of all the Pareto optimal designs. In die casting, there is considerable stochastic variation in the wall and initial temperatures. Shahane et al. [18] performed parameter uncertainty propagation and global sensitivity analysis and found that the die casting outputs are sensitive to the input uncertainty. From a practical point of view, it is therefore sensible to choose the Pareto optimal design that is least sensitive to the inputs. In this work, such an optimal point is called a 'stable' optimum, since any stochastic variation in the inputs has minimal effect on the outputs.

Single Objective Optimization Results

A local sensitivity analysis is performed to quantify the sensitivity of the outputs to each input for all the Pareto optimal designs.
For a function f : R^n \to R^m which takes input x \in R^n and produces output f(x) \in R^m, the m \times n Jacobian matrix is defined as:

J_f[i, j] = \frac{\partial f_i}{\partial x_j} \quad \forall \ 1 \le i \le m, \ 1 \le j \le n \quad (9)

At a given point x_0, the local sensitivity of f with respect to each input can be defined as the Jacobian evaluated at that point, J_f(x_0) [39]. Here, there are eleven inputs and two outputs, so the 2 \times 11 Jacobian is estimated at all the Pareto optimal solutions by the central difference method evaluated using the neural networks. The L_1 norm of the Jacobian, given by the sum of the absolute values of all its components, is then defined as a single scalar metric of local sensitivity. Figure 16 plots the norm of the Jacobian for each design on the Pareto front. The norm varies significantly, so ranking the designs by sensitivity is useful. The design with the minimum norm is chosen and marked on the Pareto fronts in fig. 15 as the stable optimum. NSGA-II, a popular multi-objective genetic algorithm, was used for the optimization process. Since the number of function evaluations required by a GA is extremely high, a deep neural network was used as a surrogate. The training and testing of the neural network required fewer than a thousand full scale finite volume simulations. The run time per simulation using OpenCast was about 20 minutes on a single processor, i.e., around 333 compute hours for 1000 simulations. All the simulations were independent and embarrassingly parallel, so a multi-core CPU was used to speed up the process without any additional programming effort for parallelization. Computationally, this was the most expensive part of the process. Subsequent training and testing of the neural network took a few minutes. The GA itself is computationally cheap, since evaluating a neural network is a sequence of matrix products, and it completed in a few minutes.
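The local-sensitivity metric just defined, a central-difference Jacobian (eq. (9)) scored by its L1 norm, can be sketched as follows; here `f` is a generic vector-valued callable standing in for the trained surrogate networks.

```python
import numpy as np

def jacobian(f, x0, h=1e-4):
    """Central-difference estimate of the (m x n) Jacobian of f at x0."""
    x0 = np.asarray(x0, dtype=float)
    cols = []
    for j in range(x0.size):
        e = np.zeros_like(x0)
        e[j] = h
        # column j: (f(x0 + h e_j) - f(x0 - h e_j)) / (2h)
        cols.append((np.asarray(f(x0 + e)) - np.asarray(f(x0 - e))) / (2 * h))
    return np.stack(cols, axis=1)

def sensitivity(f, x0):
    """L1 norm of the Jacobian: sum of absolute values of all entries."""
    return float(np.abs(jacobian(f, x0)).sum())
```

Ranking Pareto optimal designs by `sensitivity` and keeping the smallest is exactly the 'stable optimum' selection described in the text.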
Hence, the strategy of coupling the GA and neural network with finite volume simulations is computationally beneficial. In this work, the wall is divided into ten domains; together with the initial temperature, this gives an optimization problem with eleven inputs. This showed the utility of the simultaneous optimization of all the objectives, since there was significant scope for improvement. After estimating multiple Pareto optimal solutions, a common question is how to choose a single design. The strategy of choosing the design with minimum local sensitivity to the inputs was found to be practically useful due to the stochastic variations in the input process parameters. Overall, although die casting was used as the demonstration example, this approach can be used for process optimization of other manufacturing processes such as sand casting, additive manufacturing, and welding. They studied the effect of assisting flow and opposing flow, due to different gate positions, on the residual flow, and found that the liquid metal solidifies faster in the opposing flow than in the assisting flow situation. Cleary et al. [3] used Smoothed Particle Hydrodynamics (SPH) to simulate flow and solidification in three dimensional practical geometries. They demonstrated the approach of short shots, filling and solidifying the cavity partially with insufficient metal so that validation can be performed on partial castings. Plotkowski et al. [4] simulated phase change with fluid flow in a rectangular cavity both analytically and numerically: they simplified the governing equations of the mixture model by scaling analysis, obtained an analytical solution, and compared it with a complete finite volume solution. Secondary Dendrite Arm Spacing (SDAS) is a microstructure parameter which can be used to estimate the 0.2% yield strength.
The cooling rate at each point in the domain is computed by numerically solving the energy equation and the solid fraction-temperature relation (eqs. (1) and (2)).

Figure 1: Clamp geometry. Figures 2 and 3: Clamp contours (fig. 3: yield strength, MPa) with identical process conditions for both cases.

Since the solidification time is around 2.5 seconds, the velocities due to natural convection in the liquid metal are negligible. It is evident from the contours that there is no significant effect of natural convection, and hence it is neglected in all further simulations.

Figure 4: Boundary condition representation by domain decomposition.

Figures 5 and 6 plot temperature and solid fraction at different time steps during solidification for the sample shown in fig. 4b with T_init = 986 K. The temperature and solid fraction contours are asymmetric due to the non-uniform boundary temperature; for instance, domain number 10 is held at the minimum temperature, so the region near it solidifies first. Figure 7 plots the final yield strength and grain size contours. The cooling rates and temperature gradients decrease with time as solidification progresses, so the thick core regions take longer to solidify. As the grains there have more time to grow, the grain size is higher in the core region and, correspondingly, the yield strength is lower. These trends, along with the asymmetry, are visible in these contours.

Genetic algorithms (GAs) are global search algorithms based on the mechanics of natural selection and genetics. They apply the 'survival of the fittest' concept to a set of artificial creatures characterized by strings. In the context of a GA, an encoded form of each input parameter is known as a gene. A complete set of genes which uniquely describes an individual (i.e., a feasible design) is known as a chromosome. The value of the objective function to be optimized is known as the fitness.
The population of all the individuals at a given iteration is known as a generation. The overall steps in the algorithm are as follows:

1. Initialize the first generation with a random population
2. Evaluate the fitness of the population
3. Select parent pairs based on their fitness (better fitness implies higher probability of selection)
4. Perform crossover to generate an offspring from each parent pair
5. Mutate some genes randomly in the population
6. Replace the current generation with the next generation
7. If the termination condition is satisfied, return the best individual of the current generation; else go back to step 2

Fitness evaluation is the estimation of the objective function, which can be done by the full scale computational model or by a surrogate model. The number of objective function evaluations is typically of the order of millions, so step 2 becomes the computationally most expensive step if a full scale model is used. Instead, it is common to use a surrogate model that is much cheaper to evaluate; in this work, a neural network based surrogate is used, details of which are provided in section 3.2. Tournament selection is used to choose the parent pairs for crossover: four individuals are chosen at random, and the two with better fitness are selected. Note that since the optimization is cast as a minimization problem, a lower fitness value is desired. Uniform crossover recombines the genes of the parents to generate the offspring, with a crossover probability of 0.9. Random mutation of the genes of the entire generation is performed with a mutation probability of 0.1. Thereafter, the old generation is replaced by the new generation. Note that the elitist version of the GA is used, which passes down the fittest individual of the previous generation to the next generation as it is.

3. Mutate some genes randomly in the population
4. Merge the parent and offspring populations, giving a set of size 2 × N
5.
Evaluate the fitness of the population corresponding to each objective
6. Divide the population into multiple non-dominated levels, also known as fronts
7. Compute the crowding distance for each individual along each front
8. Sort the population based on front number and crowding distance, and rank the individuals
9. Choose the best set of N individuals as the next generation (i.e., the N individuals with the lowest ranks)
10. If the termination condition is satisfied, return the best front of the current generation as an approximation of the Pareto optimal solutions; else go back to step 1

1. Linear transformation: a = \sum_{i=1}^{n} w_i x_i + b, where \{x_1, x_2, ..., x_n\} are n scalar inputs, w_i are the weights, and b is a bias term
2. Nonlinear transformation applied element-wise: y = \sigma(a), where y is a scalar output and \sigma is the activation function

Multiple neurons are stacked in a layer, and multiple layers are connected together to form a neural network. The first and last layers are known as the input and output layers, respectively. Information flows from the input to the output layer through the intermediate layers, known as hidden layers. Each hidden layer adds nonlinearity to the network, so a more complex function can be approximated successfully. At the same time, a large number of neurons can cause high variance, so blindly increasing the depth of the network may not help. The number of neurons in the input and output layers is defined by the problem specification; the number of hidden layers and neurons has to be fine tuned to achieve low bias and low variance. The interpolation error of the network is quantified by a loss function, which is a function of the weights and biases. A numerical optimization algorithm, coupled with gradient estimation by the backpropagation algorithm [32], is used to estimate the optimal weights and biases that minimize the loss function for a given training set. This is known as the training process.
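The neuron operations and layer stacking described above can be sketched as a plain forward pass: each layer applies the affine map a = \sum w_i x_i + b followed by an element-wise activation, and chaining layers gives the surrogate's prediction. The weights here are random placeholders, not trained values.

```python
import numpy as np

def layer(x, W, b, act=np.tanh):
    """One layer: affine map then element-wise activation (a = Wx + b, y = sigma(a))."""
    return act(W @ x + b)

def forward(x, params):
    """params: list of (W, b) pairs; the last layer is linear (regression output)."""
    for W, b in params[:-1]:
        x = layer(x, W, b)
    W, b = params[-1]
    return W @ x + b
```

With eleven inputs (T_init plus the ten wall temperatures) and one scalar output per objective, a network like this maps an 11-vector to a single predicted quantity, which is what each of the three surrogates does.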
The hyper-parameters of the network are tuned on a separate validation set, and the final error is estimated on a testing set.

Figure 8: Neural networks: error estimates for 200 testing samples.

4 Assessment of Genetic Algorithm on Simpler Problems

It is difficult to visualize the variation of an output with respect to each of the eleven inputs. Hence, in this section, two simplified versions of the actual problem are considered. In the first case, the boundary temperature is set uniformly, so there are two scalar inputs: (T_init, T_wall).

Figure 9: Parameter sweep: uniform boundary temperature.

In this section, all the objectives are analyzed individually. A two dimensional mesh of 40k points, with 200 points in each input dimension, is used for this analysis, and the outputs at each point are estimated from the neural networks. Figure 9 plots the response surface contours for each of the three objectives with their corresponding minima, estimated from the 40k points. The X and Y axes are the initial and wall temperatures, respectively. When the initial temperature is reduced, the total internal energy of the molten metal is reduced and thus the solidification time decreases. The amount of heat extracted is proportional to the temperature gradient at the mold wall, which increases as the wall temperature drops; thus, a drop in wall temperature also reduces the solidification time. Hence, the minimum solidification time is attained at the bottom left corner (fig. 9a). The grain size is governed by the local temperature gradients and cooling rates, which depend only weakly on the initial temperature; accordingly, the contour lines are nearly horizontal in fig. 9b. On the other hand, as the wall temperature is reduced, the rate of heat extraction rises and the grains have less time to grow. This causes a drop in the grain size.
Thus, the minimum of the maximum grain size is at the bottom right corner (fig. 9b). The contour values in fig. 9c are negative because the maximization of the minimum yield strength is converted into minimization by inverting the sign; the minimum is at the top right corner of fig. 9c.

Figure 10: Parameter sweep: split boundary temperature with T_init = 1000 K.

Table 2 lists the optima for the single objective problems estimated by parameter sweep and by the GA. For all six cases, the outputs and corresponding inputs show that the GA estimates are accurate. The GA parameters are fine tuned to 50 generations with a population size of 25 and crossover and mutation probabilities of 0.8 and 0.1, respectively. The elitist version of the GA is used, which passes the best individual from the previous generation to the next generation.

Figures 11 and 12: Uniform boundary temperature: solidification time vs. min. yield strength, and max. grain size vs. min. yield strength ((a) parameter sweep, (b) NSGA-II).

The blue region in the left parts of figs. 11 to 14 indicates the feasible region. Using a pairwise comparison of the designs in the feasible region obtained by the parameter sweep, the Pareto front is estimated and plotted in red. The right side plots of figs. 11 to 14 show the Pareto fronts obtained using NSGA-II. Both estimates match, which implies that the NSGA-II implementation is accurate. A population of size 1000 is evolved over 50 generations. The existence of multiple designs in the Pareto set implies that the objectives are conflicting, which can be confirmed from the single objective analysis. For instance, consider the two objectives solidification time and minimum yield strength in the uniform boundary temperature case: figs. 9a and 9c show that the individual minima are attained at different corners.
Moreover, the directions of descent differ between the objectives, so at some points improvement in one objective can worsen another. This effect is visible on the corresponding Pareto front plot in fig. 11.

Figures 13 and 14: Split boundary temperature with T_init = 1000 K: solidification time vs. min. yield strength, and max. grain size vs. min. yield strength ((a) parameter sweep, (b) NSGA-II).

Figure 15: Pareto front: (a) solidification time vs. min. yield strength, (b) max. grain size vs. min. yield strength.

After verification of the NSGA-II implementation on the simplified problems, the multi-objective design optimization with eleven inputs is solved. To begin with, two pairs of objectives are chosen: {solidification time, minimum yield strength} and {maximum grain size, minimum yield strength}. For both of these cases, a population of size 500 is evolved over 5000 generations.

Figures 16 and 18: Local sensitivity for designs on the Pareto front (fig. 18: for three objectives).

The next step is to perform the complete multi-objective optimization analysis of eq. (8). NSGA-II is used with a population size of 2000 evolved over 250 generations. Figures 17 and 18 plot, respectively, the Pareto optimal designs in the three dimensional objective space and the norm of the Jacobian at each of these designs. The norm varies considerably across the designs.

6 Conclusions

This paper presents a strategy for multi-objective optimization of the solidification process during die casting. Practically, it is not possible to hold the entire mold wall at a uniform temperature. The final product quality, in terms of strength and microstructure, and the process productivity, in terms of solidification time, depend directly on the rate and direction of heat extraction during solidification.
Heat extraction in turn depends on the placement of the coolant lines and the coolant flow rates, and is thus a crucial part of die design. In this work, the product quality is assessed as a function of the initial molten metal and boundary temperatures. Knowledge of the boundary temperature distribution needed to optimize product quality can be useful in die design and process planning. Both single and multi-objective genetic algorithms were programmed and verified against parameter sweep estimates for simplified versions of the problem. The single objective response surfaces gave insight into the conflicting nature of the objectives, since the individual optimal solutions were completely different from each other. Moreover, the solidification time, maximum grain size, and minimum yield strength varied in the ranges [2, 3.5] seconds, [22, 34] microns, and [134, 145] MPa, respectively, for the given inputs.

Table 2: Single objective optimization: genetic algorithm estimates compared with parameter sweep values for the two input problems.

4.2 Bi-Objective Optimization

In this section, two objectives are taken at a time for each of the two simplified problems defined in section 4. As before, a two dimensional mesh of 40k points, with 200 points in each input dimension, is used for this analysis, and the outputs at each point are estimated from the neural networks. The feasible region is the set of all attainable designs in the output space, which can be estimated by the parameter sweep. For a minimization problem, a design d_1 is said to dominate another design d_2 if all the objective function values of d_1 are less than or equal to those of d_2. The design space can be divided into two disjoint sets S_p and S_d such that S_p contains all the designs which do not dominate each other, and at least one design in S_p dominates any design in S_d.

References
[1] B. Minaie, K. A. Stelson, and V. R. Voller. Analysis of flow patterns and solidification phenomena in the die casting process. Journal of Engineering Materials and Technology, 113(3):296-302, 1991.
[2] I. T. Im, W. S. Kim, and K. S. Lee. A unified analysis of filling and solidification in casting with natural convection. International Journal of Heat and Mass Transfer, 44(8):1507-1515, 2001.
[3] P. W. Cleary, J. Ha, M. Prakash, and T. Nguyen. Short shots and industrial case studies: understanding fluid flow and solidification in high pressure die casting. Applied Mathematical Modelling, 34(8):2018-2033, 2010.
[4] A. Plotkowski, K. Fezi, and M. J. M. Krane. Estimation of transient heat transfer and fluid flow for alloy solidification in a rectangular cavity with an isothermal sidewall. Journal of Fluid Mechanics, 779:53-86, 2015.
[5] C. Poloni, A. Giurgevich, L. Onesti, and V. Pediroda. Hybridization of a multi-objective genetic algorithm, a neural network and a classical optimizer for a complex design problem in fluid dynamics. Computer Methods in Applied Mechanics and Engineering, 186(2-4):403-420, 2000.
[6] K. Elsayed and C. Lacor. CFD modeling and multi-objective optimization of cyclone geometry using desirability function, artificial neural networks and genetic algorithms. Applied Mathematical Modelling, 37(8):5680-5704, 2013.
[7] W. Wang, Y. He, J. Zhao, J. Mao, Y. Hu, and J. Luo. Optimization of groove texture profile to improve hydrodynamic lubrication performance: Theory and experiments. Friction, pages 1-12, 2018.
[8] G. M. Stavrakakis, D. P. Karadimou, P. L. Zervas, H. Sarimveis, and N. C. Markatos. Selection of window sizes for optimizing occupational comfort and hygiene based on computational fluid dynamics and neural networks. Building and Environment, 46(2):298-314, 2011.
[9] X. Wei and Y. Joshi. Optimization study of stacked micro-channel heat sinks for micro-electronic cooling. IEEE Transactions on Components and Packaging Technologies, 26(1):55-61, 2003.
[10] A. Husain and K. Y. Kim. Enhanced multi-objective optimization of a microchannel heat sink through evolutionary algorithm coupled with multiple surrogate models. Applied Thermal Engineering, 30(13):1683-1691, 2010.
[11] D. J. Lohan, E. M. Dede, and J. T. Allison. Topology optimization for heat conduction using generative design algorithms. Structural and Multidisciplinary Optimization, 55(3):1063-1077, 2017.
[12] N. Amanifard, N. Nariman-Zadeh, M. Borji, A. Khalkhali, and A. Habibdoust. Modelling and pareto optimization of heat transfer and flow coefficients in microchannels using GMDH type neural networks and genetic algorithms. Energy Conversion and Management, 49(2):311-325, 2008.
[13] C. E. Esparza, M. P. Guerrero-Mata, and R. Z. Ríos-Mercado. Optimal design of gating systems by gradient search methods. Computational Materials Science, 36(4):457-467, 2006.
[14] J. A. Dantzig and C. L. Tucker. Modeling in materials processing. Cambridge University Press, 2001.
[15] G. Backer and Q. G. Wang. Microporosity simulation in aluminum castings using an integrated pore growth and interdendritic flow model. Metallurgical and Materials Transactions B, 38(4):533-540, 2007.
[16] M. Okayasu, S. Takeuchi, M. Yamamoto, H. Ohfuji, and T. Ochi.
Precise analysis of microstructural effects on mechanical properties of cast adc12 aluminum alloy. Metallurgical and Materials Transactions A, 46(4):1597- 1609, 2015. Modelling of inoculation of metallic melts: application to grain refinement of aluminium by Al-Ti-B. A L Greer, Bunn, Tronche, D J Evans, Bristow, Acta Materialia. 4811AL Greer, AM Bunn, A Tronche, PV Evans, and DJ Bristow. Modelling of inoculation of metallic melts: application to grain refinement of aluminium by Al-Ti-B. Acta Materialia, 48(11):2823-2835, 2000. Finite volume simulation framework for die casting with uncertainty quantification. S Shahane, P Aluru, Ferreira, S P Sg Kapoor, Vanka, arXiv:1810.08572arXiv preprintS Shahane, N Aluru, P Ferreira, SG Kapoor, and SP Vanka. Finite volume simulation framework for die casting with uncertainty quantification. arXiv preprint arXiv:1810.08572, 2018. Gmsh: A 3-d finite element mesh generator with built-in pre-and post-processing facilities. C Geuzaine, Remacle, International Journal for Numerical Methods in Engineering. 7911C Geuzaine and JF Remacle. Gmsh: A 3-d finite element mesh generator with built-in pre-and post-processing facilities. International Journal for Numerical Methods in Engineering, 79(11):1309-1331, 2009. TETHEX: Conversion from tetrahedra to hexahedra. M Artemiev, M Artemiev. TETHEX: Conversion from tetrahedra to hexahedra, 2015. URL https://github.com/martemyev/tethex. Virtually-guided certification with uncertainty quantification applied to die casting. S Shahane, Mujumdar, Kim, Priya, P Aluru, Ferreira, S Sg Kapoor, Vanka, ASME 2018 Verification and Validation Symposium. S Shahane, S Mujumdar, N Kim, P Priya, N Aluru, P Ferreira, SG Kapoor, and S Vanka. Virtually-guided certification with uncertainty quantification applied to die casting. In ASME 2018 Verification and Validation Sympo- sium, pages V001T03A003-V001T03A003. American Society of Mechani- cal Engineers, 2018. 
Effect of grain refiner and grain size on the susceptibility of al-mg die casting alloy to cracking during solidification. R Kimura, Hatayama, Shinozaki, Murashima, M Asada, Yoshida, Journal of Materials Processing Technology. 2091R Kimura, H Hatayama, K Shinozaki, I Murashima, J Asada, and M Yoshida. Effect of grain refiner and grain size on the susceptibility of al-mg die casting alloy to cracking during solidification. Journal of Materials Processing Technology, 209(1):210-219, 2009. Grain refinement of gravity die cast secondary alsi7cu3mg alloys for automotive cylinder heads. Transactions of Nonferrous Metals Society of China. G Camicia, G Timelli, 26G Camicia and G Timelli. Grain refinement of gravity die cast secondary alsi7cu3mg alloys for automotive cylinder heads. Transactions of Nonfer- rous Metals Society of China, 26(5):1211-1221, 2016. Scikit-learn: Machine learning in Python. F Pedregosa, Varoquaux, Gramfort, Michel, Thirion, Grisel, P Blondel, Prettenhofer, V Weiss, Dubourg, Vanderplas, Passos, M Cournapeau, M Brucher, E Perrot, Duchesnay, Journal of Machine Learning Research. 12F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, and E Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12: 2825-2830, 2011. Genetic Algorithms in Search, Optimization and Machine Learning. D Goldberg, Addison-Wessley Publishing Company, IncD Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wessley Publishing Company, Inc., 1989. An introduction to genetic algorithms for scientists and engineers. Da Coley, World Scientific Publishing CompanyDA Coley. An introduction to genetic algorithms for scientists and engi- neers. World Scientific Publishing Company, 1999. pyDOE: The experimental design package for python. pyDOE: The experimental design package for python. 
URL https:// pythonhosted.org/pyDOE/. Multi-objective optimization using genetic algorithms: A tutorial. A Konak, A E Coit, Smith, Reliability Engineering & System Safety. 919A Konak, DW Coit, and AE Smith. Multi-objective optimization using genetic algorithms: A tutorial. Reliability Engineering & System Safety, 91(9):992-1007, 2006. A fast and elitist multiobjective genetic algorithm: NSGA-II. K Deb, Pratap, T Agarwal, Meyarivan, IEEE transactions on evolutionary computation. 62K Deb, A Pratap, S Agarwal, and T Meyarivan. A fast and elitist multi- objective genetic algorithm: NSGA-II. IEEE transactions on evolutionary computation, 6(2):182-197, 2002. Muiltiobjective optimization using nondominated sorting in genetic algorithms. M Srinivas, Deb, Evolutionary Computation. 23M Srinivas and K Deb. Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2(3):221-248, 1994. Multilayer feedforward networks are universal approximators. K Hornik, H Stinchcombe, White, Neural Networks. 25K Hornik, M Stinchcombe, and H White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989. Backpropagation: theory, architectures, and applications. Y Chauvin, De Rumelhart, Psychology PressY Chauvin and DE Rumelhart. Backpropagation: theory, architectures, and applications. Psychology Press, 2013. Deep Learning. Goodfellow, A Bengio, Courville, MIT PressI Goodfellow, Y Bengio, and A Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org. TensorFlow: Large-scale machine learning on heterogeneous systems. 
M Abadi, P Agarwal, E Barham, Brevdo, C Chen, Citro, Gs Corrado, Davis, Dean, Devin, Ghemawat, Goodfellow, Harp, M Irving, Y Isard, Jia, Jozefowicz, M Kaiser, Kudlur, Levenberg, Mané, Monga, Moore, C Murray, M Olah, J Schuster, Shlens, Steiner, Sutskever, Talwar, Tucker, Vanhoucke, Vasudevan, Viégas, Vinyals, Warden, M Wattenberg, Y Wicke, X Yu, Zheng, M Abadi, A Agarwal, P Barham, E Brevdo, Z Chen, C Citro, GS Cor- rado, A Davis, J Dean, M Devin, S Ghemawat, I Goodfellow, A Harp, G Irving, M Isard, Y Jia, R Jozefowicz, L Kaiser, M Kudlur, J Levenberg, D Mané, R Monga, S Moore, D Murray, C Olah, M Schuster, J Shlens, B Steiner, I Sutskever, K Talwar, P Tucker, V Vanhoucke, V Vasudevan, F Viégas, O Vinyals, P Warden, M Wattenberg, M Wicke, Y Yu, and X Zheng. TensorFlow: Large-scale machine learning on heterogeneous sys- tems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org. . F Chollet, F Chollet et al. Keras. https://keras.io, 2015. Adam: A method for stochastic optimization. Dp Kingma, Ba, arXiv:1412.6980arXiv preprintDP Kingma and J Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Multi-objective optimization using evolutionary algorithms. K Deb, John Wiley and SonsK Deb. Multi-objective optimization using evolutionary algorithms. John Wiley and Sons, 2001. Optimization in practice with MATLAB®: for engineering students and professionals. A Messac, Cambridge University PressA Messac. Optimization in practice with MATLAB®: for engineering students and professionals. Cambridge University Press, 2015. Uncertainty quantification: theory, implementation, and applications. R Smith, R Smith. Uncertainty quantification: theory, implementation, and appli- cations. 2013.
[ "https://github.com/martemyev/tethex." ]
[ "Time-reversal symmetry breaking and emergence in driven-dissipative Ising models", "Time-reversal symmetry breaking and emergence in driven-dissipative Ising models" ]
[ "Daniel A Paz *[email protected] \nDepartment of Physics and Astronomy\nMichigan State University\n426 Auditorium Rd. East Lansing48823MIUSA\n", "Mohamamd F Maghrebi \nDepartment of Physics and Astronomy\nMichigan State University\n426 Auditorium Rd. East Lansing48823MIUSA\n" ]
[ "Department of Physics and Astronomy\nMichigan State University\n426 Auditorium Rd. East Lansing48823MIUSA", "Department of Physics and Astronomy\nMichigan State University\n426 Auditorium Rd. East Lansing48823MIUSA" ]
[]
Fluctuation-dissipation relations (FDRs) and time-reversal symmetry (TRS), two pillars of statistical mechanics, are both broken in generic driven-dissipative systems. These systems rather lead to non-equilibrium steady states far from thermal equilibrium. Driven-dissipative Ising-type models, however, are widely believed to exhibit effective thermal critical behavior near their phase transitions. Contrary to this picture, we show that both the FDR and TRS are broken even macroscopically at, or near, criticality. This is shown by inspecting different observables, both even and odd operators under time-reversal transformation, that overlap with the order parameter. Remarkably, however, a modified form of the FDR as well as TRS still holds, but with drastic consequences for the correlation and response functions as well as the Onsager reciprocity relations. Finally, we find that, at criticality, TRS remains broken even in the weakly-dissipative limit. SciPost Physics Submission 6.1 Green's functions 22 6.2 FDR* for non-Hermitian operators 23 6.3 Weakly-dissipative limit 23 7 Conclusion and Outlook 24 References 25
10.21468/scipostphys.12.2.066
[ "https://arxiv.org/pdf/2105.12747v2.pdf" ]
237,571,845
2105.12747
b39f71f86fb994a11ee921a64fd5684d177de4d7
Time-reversal symmetry breaking and emergence in driven-dissipative Ising models

Daniel A. Paz* and Mohammad F. Maghrebi
Department of Physics and Astronomy, Michigan State University, 426 Auditorium Rd., East Lansing, MI 48823, USA
*[email protected]

SciPost Physics Submission, September 21, 2021

Abstract

Fluctuation-dissipation relations (FDRs) and time-reversal symmetry (TRS), two pillars of statistical mechanics, are both broken in generic driven-dissipative systems. These systems rather lead to non-equilibrium steady states far from thermal equilibrium. Driven-dissipative Ising-type models, however, are widely believed to exhibit effective thermal critical behavior near their phase transitions. Contrary to this picture, we show that both the FDR and TRS are broken even macroscopically at, or near, criticality. This is shown by inspecting different observables, both even and odd operators under time-reversal transformation, that overlap with the order parameter. Remarkably, however, a modified form of the FDR as well as TRS still holds, but with drastic consequences for the correlation and response functions as well as the Onsager reciprocity relations. Finally, we find that, at criticality, TRS remains broken even in the weakly-dissipative limit.

1 Introduction

Quantum systems in or near equilibrium define a paradigm of modern physics. The past two decades, however, have witnessed a surge of interest in non-equilibrium systems thanks to the advent of novel experimental techniques where quantum matter is observed far from equilibrium.
An immediate challenge is that general guiding principles of equilibrium statistical mechanics are not directly applicable in this new domain. One such general feature is the principle of detailed balance in equilibrium [1]. Extensions of this principle to the quantum domain have been studied extensively for both closed and open systems [2-5]. In all such settings, detailed balance is directly tied to time-reversal symmetry (TRS) under reversing the direction of time (in two-time correlators, e.g.). A second defining characteristic of equilibrium systems is the fluctuation-dissipation relations (FDRs), relating the dynamical response of the system to its inherent fluctuations. Importantly, these two principles are not independent: a proper formulation of the TRS leads to the FDRs [6].

A generic non-equilibrium setting is defined by driven-dissipative systems, characterized by the competition of an external drive and dissipation due to coupling to the environment. This competition leads the system towards a non-equilibrium steady state far from thermal equilibrium [7,8]. Due to the non-equilibrium dissipative dynamics, both TRS and FDR are generally broken in these steady states [9]; the guiding principles of equilibrium physics are thus absent in their driven-dissipative counterparts. Nonetheless, it has become increasingly clear that the critical properties of a large class of many-body driven-dissipative systems (yet not all [10-14]) are described by an effective equilibrium behavior near their respective phase transitions [15-27]. This is particularly the case for Ising-like phase transitions, where the order parameter takes a relatively simple form and the dynamics is rather constrained. Thermal critical behavior is even observed in the driven-dissipative Dicke phase transition [28].

In this work, we consider driven-dissipative Ising-type models, but, contrary to what is generally believed, we show that both FDR and TRS are broken even macroscopically at or near criticality. This is shown by inspecting different observables that overlap with the order parameter and crucially encompass both even and odd operators under time-reversal transformation. We show that these observables satisfy emergent FDR-like relations but with effective temperatures that are opposite in sign; we dub such relations FDR*. Moreover, while TRS is broken macroscopically, we show that a modified form of the time-reversal symmetry of two-time correlators, dubbed TRS*, emerges at or near criticality where correlation and response functions exhibit definite, but possibly opposite, parities under time-reversal transformation. This is in sharp contrast with equilibrium, where correlation and response functions exhibit the same parity.

We showcase our results in the context of two relatively simple models, enabling exact analytical and numerical calculations. The main model considered here is an infinite-range driven-dissipative Ising model, a descendant of the paradigmatic open Dicke model [29,30]. We also consider a short-range quadratic model of driven-dissipative bosons with the Ising symmetry. These models provide an ideal testbed for the general questions about the fate of the FDR and TRS in driven-dissipative systems, the role of the time-reversal symmetry (breaking), and the emergence of modified fluctuation-dissipation relations. We begin by summarizing our main results in Section 2. Building on the techniques developed in a recent work [31], we set up a non-equilibrium field theory and calculate the exact correlation and response functions in Section 3. In Section 4, we determine the effective temperatures and provide evidence for the modified FDR* and TRS* via exact analytics and numerics.
We further show that, even in the limit of vanishing dissipation, the TRS breaking or restoring depends on a certain order of limits. In Section 5, we present an effective field theory from which we prove the FDR* and TRS* and furthermore derive the modified Onsager reciprocity relations. Finally, in Section 6, we make the case for the broader application of our conclusions in the setting of a short-range driven-dissipative model of coupled bosons.

2 Main Results

Characteristic information about a given quantum system and a set of observables $\hat{O}_i$ can be obtained from the two-point functions
$$ C_{O_iO_j}(t) = \langle\{\hat{O}_i(t),\hat{O}_j\}\rangle\,, \qquad \chi_{O_iO_j}(t) = -i\,\Theta(t)\,\langle[\hat{O}_i(t),\hat{O}_j]\rangle\,, \tag{1} $$
which define the correlation function and the causal response function, respectively; the former captures fluctuations (e.g., at equal times), while the latter describes the response of the system to a perturbation at an earlier time. The function $\Theta(t)$ is the Heaviside step function, used to enforce causality. The fluctuation-dissipation theorem, a pillar of statistical mechanics, relates these two quantities in equilibrium. For our purposes, we write the fluctuation-dissipation relation (FDR) as [32]
$$ \mathrm{FDR:}\quad \chi_{O_iO_j}(t) = \frac{1}{2T}\,\Theta(t)\,\partial_t C_{O_iO_j}\,, \tag{2} $$
valid for classical systems (with the respective classical definitions of $C(t)$ and $\chi(t)$ [33]), as well as quantum systems at finite temperature and at long times [32]. An alternative representation of the FDR in the frequency domain takes the form
$$ \tilde{\chi}_{O_iO_j}(\omega) = \frac{\omega}{4T}\,C_{O_iO_j}(\omega)\,, \tag{3} $$
where $\tilde{\chi}_{O_iO_j}(t) \equiv \tfrac{1}{2}\langle[\hat{O}_i(t),\hat{O}_j]\rangle$, and the Fourier transform has been defined as $f(\omega) = \int_t e^{i\omega t} f(t)$ for a function $f$. Furthermore, if the system satisfies microreversibility, or (quantum) detailed balance, two-time correlators exhibit a time-reversal symmetry [3,4].
Assuming that the operator $\hat{O}_i$ has a definite parity $\epsilon_i$ under time-reversal (in the absence of magnetic fields), the correlation and response functions then satisfy [34]
$$ C_{O_iO_j}(t) = \epsilon_i\epsilon_j\, C_{O_jO_i}(t)\,, \tag{4a} $$
$$ \chi_{O_iO_j}(t) = \epsilon_i\epsilon_j\, \chi_{O_jO_i}(t)\,. \tag{4b} $$
In this work, we shall refer to such relations as TRS of two-time correlators, or just TRS. Notice that this set of equations is also consistent with the FDR in Eq. (2), and is valid in the frequency domain as well. The above equations form the origin of the Onsager reciprocity relations [35].

FDR and TRS are both broken in driven-dissipative systems as they give rise to non-equilibrium steady states at long times. Extensive effort has gone into identifying the steady states of many-body driven-dissipative systems as well as their phase transitions. A large body of work, however, has shown that a variety of driven-dissipative many-body systems exhibit critical behavior that is effectively equilibrium [15-28]. Specifically, an effective temperature $T_{\mathrm{eff}}$ emerges that governs the critical properties (e.g., critical exponents) near their phase transitions at long times/wavelengths. An effective TRS may then be expected to emerge as well, given that TRS and FDR are intimately tied [6].

In this work, we consider driven-dissipative systems whose Hamiltonian, in the rotating frame, is itself time-reversal symmetric: $\hat{T}\hat{H}\hat{T}^{-1} = \hat{H}$, with $\hat{T}$ the antiunitary operator associated with the time-reversal transformation; here, $\hat{T} = \hat{K}$ is simply complex conjugation. Dissipative coupling to the environment, however, explicitly breaks TRS and exposes the non-equilibrium nature of the system. Additionally, we assume that the full dynamics under the Liouvillian $\mathcal{L}$ comes with an Ising $\mathbb{Z}_2$ symmetry that defines the order parameter at the phase transition. Non-equilibrium systems with the $\mathbb{Z}_2$ symmetry are generally expected to fall under the familiar Ising universality class at their phase transitions.
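The symmetry relation in Eq. (4a) can be checked directly for thermal states of a time-reversal-symmetric (real) Hamiltonian. The sketch below uses an arbitrary, hypothetical 4-level Hamiltonian (not a model from this paper): it builds a real symmetric observable ($\epsilon = +1$) and a purely imaginary antisymmetric one ($\epsilon = -1$, like $\hat\sigma_y$), and verifies $C_{O_1O_2}(t) = \epsilon_1\epsilon_2\,C_{O_2O_1}(t) = -C_{O_2O_1}(t)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 0.7                                    # hypothetical 4-level system at temperature T

A = rng.normal(size=(n, n)); H = A + A.T         # real symmetric H: T H T^{-1} = H with T = K
B = rng.normal(size=(n, n)); O1 = B + B.T        # real symmetric observable, parity +1
M = rng.normal(size=(n, n)); O2 = 1j*(M - M.T)   # imaginary antisymmetric observable, parity -1

w, V = np.linalg.eigh(H)
rho = (V * np.exp(-w/T)) @ V.T                   # thermal state e^{-H/T} (unnormalized)
rho /= np.trace(rho)

def U(t):
    """Time evolution e^{iHt}, built in the eigenbasis of H."""
    return (V * np.exp(1j*w*t)) @ V.conj().T

def corr(X, Y, t):
    """C_XY(t) = <{X(t), Y}> in the thermal state."""
    Xt = U(t) @ X @ U(-t)
    return np.trace(rho @ (Xt @ Y + Y @ Xt))

for t in (0.3, 1.1):
    # opposite parities: eps1*eps2 = -1, so C_12(t) = -C_21(t)
    assert abs(corr(O1, O2, t) + corr(O2, O1, t)) < 1e-8
    # same operator: the symmetrized correlator is real
    assert abs(corr(O1, O1, t).imag) < 1e-8
```

Since the anticommutator of Hermitian operators has a real thermal expectation value, both correlators are real; the assertion probes the opposite-parity case of Eq. (4a), which is precisely the relation that the driven-dissipative steady state will violate below.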
In fact, it is known that the Ising universality class is robust against non-equilibrium perturbations [36]. In harmony with this picture, previous work on driven-dissipative Ising-type systems has reported an emergent FDR governing the order-parameter dynamics for some $T_{\mathrm{eff}}$ [17,20,28,37-39]. Notwithstanding the evidence for emergent equilibrium near criticality, here we report that FDR and TRS are both macroscopically broken in quadratic driven-dissipative Ising-type systems. This becomes manifest by considering other observables that overlap with the order parameter, i.e., observables that share the same $\mathbb{Z}_2$ symmetry. In the Ising model, for example, beside $\hat{S}_x$ typically signifying the order parameter, we will also consider $\hat{S}_y$ (with the transverse field along the $z$ direction). This expanded set of observables exhibits critical scaling, but they do not obey an effective FDR. Interestingly, however, we show that a modified form of the FDR emerges at long times,
$$ \mathrm{FDR^*:}\quad \chi_{O_iO_j^*}(t) \simeq \frac{1}{2T_{\mathrm{eff}}}\,\Theta(t)\,\partial_t C_{O_iO_j}\,, \tag{5} $$
where the $\simeq$ sign means we have neglected noncritical corrections here and throughout the rest of the paper; we dub this modified relation FDR*. Here, we have assumed that the $\hat{O}_i$'s are Hermitian operators¹ which have the same Ising symmetry as the order parameter; we have also defined $\hat{O}_j^* = \hat{T}\hat{O}_j\hat{T}^{-1}$ (recall that $\hat{T} = \hat{K}$). In the example of the Ising spin model, $\hat{S}_x^* = \hat{S}_x$ while $\hat{S}_y^* = -\hat{S}_y$. We emphasize that FDR* is only applicable for this subset of observables, and not for all observables as is the case in Eq. (2). In the frequency domain, FDR* takes the form
$$ \mathrm{Im}\,\chi_{O_iO_j^*}(\omega) \simeq \frac{\omega}{4T_{\mathrm{eff}}}\,C_{O_iO_j}(\omega)\,, \tag{6} $$
which does not reduce to Eq. (3), in particular when the two operators have opposite parities. The FDR* is radically different from its equilibrium counterpart, and has important consequences. To see this, let us again assume that the operator $\hat{O}_i$ has a definite parity $\epsilon_i$ under time-reversal transformation.
In this case, the FDR* can be written as
$$ \chi_{O_iO_j}(t) \simeq \frac{\epsilon_j}{2T_{\mathrm{eff}}}\,\Theta(t)\,\partial_t C_{O_iO_j}\,. \tag{7} $$
This means that an emergent FDR is satisfied with $\chi_{O_iO_j} = (1/2T_{ij})\,\partial_t C_{O_iO_j}$ but with different temperatures for different observables, $T_{ij} = \epsilon_j T_{\mathrm{eff}}$, same in magnitude but possibly with opposite signs depending on the observables. For example, if $\hat{O}_1$ is even under time-reversal ($\epsilon_1 = 1$) and $\hat{O}_2$ is odd ($\epsilon_2 = -1$), we find $T_{11} = -T_{12} = T_{21} = -T_{22} = T_{\mathrm{eff}}$. We further show that an unusual form of TRS holds at or near criticality:
$$ C_{O_iO_j}(t) \simeq C_{O_jO_i}(t)\,, \tag{8a} $$
$$ \chi_{O_iO_j}(t) \simeq \epsilon_i\epsilon_j\,\chi_{O_jO_i}(t)\,. \tag{8b} $$
In parallel with FDR*, the above relations will be referred to as TRS*. Notice that the above equations are consistent with the FDR* in Eq. (7). Interestingly, the correlation and response functions transform differently under time-reversal transformation, in sharp contrast with equilibrium; cf. Eq. (4). While violating TRS, these functions still have a definite parity under time-reversal transformation. Moreover, combining Eqs. (7) and (8), we further show that the Onsager reciprocity relation finds a modified form with the opposite parity. This is surprising in light of the broken TRS, but is a direct consequence of the emergent TRS*. We derive these results via a simple field-theoretical analysis that identifies a slow mode in the vicinity of the phase transition. We show that the FDR* and TRS* are a consequence of the non-Hermitian form of the dynamics generator, due to the TRS of the Hamiltonian, $\hat{T}\hat{H}\hat{T}^{-1} = \hat{H}$, combined with the Ising $\mathbb{Z}_2$ symmetry of the Liouvillian $\mathcal{L}$.

3 Driven-Dissipative Ising Model

Here, we briefly introduce the infinite-ranged driven-dissipative Ising model with spontaneous emission (DDIM) [31]. The DDIM describes a system of $N$ driven, fully-connected 2-level atoms under a transverse field, subject to individual atomic spontaneous emission.
In the rotating frame of the drive, the Hamiltonian is given by
$$ \hat{H} = -\frac{J}{N}\hat{S}_x^2 + \Delta\,\hat{S}_z\,, \tag{9} $$
with $J$ an effective Ising coupling and $\Delta$ the transverse field. For clarity, we use the total spin operators $\hat{S}_\alpha = \sum_i\hat{\sigma}_i^\alpha$, with $\hat{\sigma}^\alpha$ the usual Pauli matrices. The Markovian dynamics of the system is given by the quantum master equation [40]
$$ \frac{d\hat{\rho}}{dt} = \mathcal{L}[\hat{\rho}] = -i[\hat{H},\hat{\rho}] + \Gamma\sum_i\left(\hat{\sigma}_i^-\hat{\rho}\,\hat{\sigma}_i^+ - \frac{1}{2}\{\hat{\sigma}_i^+\hat{\sigma}_i^-,\hat{\rho}\}\right). \tag{10} $$
Here, $\hat{\rho}$ is the reduced density matrix of the system and the (curly) brackets represent the (anti)commutator. The first term on the RHS generates the usual quantum coherent dynamics, while the remaining terms describe the spontaneous emission of individual atoms at a rate $\Gamma$. While there is no time dependence in the rotating frame, detailed balance is directly broken, and the model is indeed non-equilibrium [41]. Equation (10) exhibits a $\mathbb{Z}_2$ symmetry ($\hat{\sigma}_i^{x,y}\to-\hat{\sigma}_i^{x,y}$), which is spontaneously broken in the phase transition from the normal phase ($\langle\hat{S}_{x,y}\rangle = 0$) to the ordered phase ($\langle\hat{S}_{x,y}\rangle \neq 0$). This Ising-type model provides a minimal setting that leads to a driven-dissipative phase transition. As we shall see in Sec. 5, the Ising symmetry enables a simple field-theory description that allows us to derive the main results of this paper. Due to the collective interaction, the DDIM phase diagram is exactly obtained via mean-field theory in the thermodynamic limit [31], where the critical value of $\Gamma$ is given by $\Gamma_c = 4\sqrt{\Delta(2J-\Delta)}$. However, it is also a physically relevant model, realized experimentally either in the large-detuning limit of the celebrated open Dicke model [31,42-47], or directly through trapped ions [48]. At the same time, this model allows for exact analytical and numerical calculations, and provides an ideal testbed for our conclusions. We will also consider a short-range model in Section 6 where we arrive at the same conclusions.

In the models considered in this paper, the operator $\hat{T} = \hat{K}$ is simply complex conjugation. The Hamiltonian in Eq.
(9) is time-reversal symmetric because it is real. Time-reversal transformation (i.e., acting with the anti-unitary operator $\hat{T}$ together with sending $t\to-t$) leaves the von Neumann equation $\partial_t\hat{\rho} = -i[\hat{H},\hat{\rho}]$ invariant. In the ground state, this symmetry enforces $\langle\hat{S}_y\rangle = 0$ as $\hat{T}\hat{S}_y\hat{T}^{-1} = -\hat{S}_y$; this is true even in the ordered phase where $\langle\hat{S}_y\rangle = 0$ while $\langle\hat{S}_x\rangle \neq 0$ [39]. Furthermore, correlators such as $\langle\{\hat{S}_x,\hat{S}_y\}\rangle$ that are odd under time-reversal must be zero. More generally, these symmetry considerations can be extended to thermal states under unitary dynamics, as they satisfy the KMS condition and exhibit an equilibrium symmetry that involves time-reversal [49,50]. Two-time correlators then satisfy the symmetry relations in Eq. (4). However, the driven-dissipative model in Eq. (10) breaks such symmetries. This is because Eq. (10) is derived in the rotating frame of the drive, hence breaking detailed balance. The resulting steady state is then not a thermal state, and TRS of two-time correlators no longer holds [51]. Specifically, this allows for nonzero expectation values of odd observables such as $\langle\hat{S}_y\rangle$ (in the ordered phase) and correlators such as $\langle\{\hat{S}_x,\hat{S}_y\}\rangle$.

Despite the infinite-range nature of the model, individual atomic dissipation makes the problem nontrivial, since the total spin is no longer conserved. To make analytical progress, we adopt the approach that we have developed in Ref. [31]. We provide the technical steps in the following subsections; a non-technical reader may wish to skip ahead to Section 4 for the relevant results.

3.1 Non-equilibrium field theory

Using a non-equilibrium quantum-to-classical mapping introduced in Refs. [31,39], we can map exactly the non-equilibrium partition function (normalized to unity)
$$ Z = \mathrm{Tr}(\rho_{ss}) = \lim_{t\to\infty}\mathrm{Tr}\left(e^{t\mathcal{L}}\rho(0)\right) = 1\,, \tag{11} $$
We have introduced the steady state density matrix ρ ss , defined as the long-time limit of the Liouvillian dynamics governed by Eq. (10). The process is done by vectorizing, or purifying, the density matrix, such that the non-equilibrium partition function takes the form Z = lim t→∞ I| e tL |ρ 0 ,(12) where we have performed the transformation |i j| → |i ⊗ |j on the basis elements of the density matrix. The inner product in the vectorized space is equivalent to the Hilbert-Schmidt norm in the operator space, A|B = Tr(A † B). The matrix L is given by L = −i Ĥ ⊗Î −Î ⊗Ĥ + Γ i σ − i ⊗σ − i − 1 2 σ + iσ − i ⊗Î +Î ⊗σ + iσ − i .(13) Following the vectorization procedure, we perform a quantum-to-classical mapping via a Suzuki-Trotter decomposition in the basis that diagonlizes the Ising interaction, and then utilize the Hubbard-Stratonovich transformation on the (now classical) collective Ising term [31,39]. Tracing out the leftover spin degrees of freedom leaves us with a path-integral representation of the partition function: Z = D[m c (t), m q (t)]e iS[m c/q (t)] ,(14) with the Keldysh action S = −2JN t m c (t)m q (t) − iN ln Tr T e t T(m c/q (t)) ,(15) where T is the time-ordering operator. For convenience, we have introduced the classical and quantum Hubbard-Stratonovich fields m c/q in the usual Keldysh basis [19,52]. The order parameter Ŝ x is given by the average of m c which takes a non-zero expectation value in the ordered phase; correlators involving the operatorŜ x too can be directly written in terms of the fields m c/q [31]. The matrix T in Eq. (15) in the basis defined byσ x ⊗Î andÎ ⊗σ x , wherê σ x = 1 0 0 −1 ,σ y = 0 i −i 0 ,σ z = 0 1 1 0 ,(16) is given by T =         − Γ 4 + i2 √ 2Jm q i∆ −i∆ Γ 4 i∆ − Γ 2 − 3Γ 4 + i2 √ 2Jm c − Γ 4 −i∆ − Γ 2 −i∆ − Γ 2 − Γ 4 − 3Γ 4 − i2 √ 2Jm c i∆ − Γ 2 Γ 4 −i∆ i∆ − Γ 4 − i2 √ 2Jm q         .(17) This matrix generates the (purified) dynamics of a pair of spins. 
Imaginary elements characterize the coherent evolution inheriting the factor of i from the unitary evolution. Beside the transverse field, the latter also includes the time-dependent fields m c (t) and m q (t) which mimic the "mean field" due to the Ising interaction. Real elements in T are all proportional to Γ, and characterize dissipation. Note the overall factor of N in Eq. (15) due to the collective nature of the Ising interaction, meaning that the saddle-point approximation is exact in the limit that N → ∞. We mention here that Eq. (15) indeed describes the steady state of the quantum master equation in Eq. (10), arising due to the competition between drive and dissipation. Correlation and response functions This action is only in terms of the scalar fields m c/q , which are related to the observablê S x [31]. To obtain the correlation and response functions forŜ y and the cross-correlations withŜ x , we introduce source fields α (u/l) and β (u/l) to Eq. (13) which couple toŜ x andŜ y respectively: L (t) = L + iα (u) (t)Ŝ x √ N ⊗Î − iα (l) (t)Î ⊗Ŝ x √ N + iβ (u) (t)Ŝ y √ N ⊗Î + iβ (l) (t)Î ⊗Ŝ y √ N ,(18) and perform the non-equilibrium quantum-to-classical mapping as usual. The absence of a minus sign on the last term stems from the vectorization transformation in the mapping. Introducing the sources does not affect the quadratic term in m in Eq. (15), but changes the T matrix to the new matrix T = T + T α + T β where T α = i 2 N     α q 0 0 0 0 α c 0 0 0 0 −α c 0 0 0 0 −α q     ,(19) and T β = 1 √ 2N     0 −β c + β q −β c − β q 0 β c − β q 0 0 −β c − β q −β c − β q 0 0 −β c + β q 0 −β c − β q β c − β q 0     .(20) We have performed the Keldysh rotation α c/q = (α (u) ± α (l) )/ √ 2, β c/q = (β (u) ± β (l) )/ √ 2 for convenience. 
Next, we expand the action to quadratic order in both x c/q and the source fields around m c/q = α c/q = β c/q = 0, S = 1 2 t,t         m c m q α c α q β c β q         T t         P 0 0 4JP αα P αα 0 4JP βα 2P βα P ββ         t−t         m c m q α c α q β c β q         t ,(21) where the kernel becomes a lower triangular block matrix. The block matrices take the usual Keldysh structure P = 0 P A P R P K , P αα = 1 4J 2 P + 0 2Jδ(t) 2Jδ(t) 0 , P βα = 0 P A βα P R βα P K βα , P ββ = P αα , and the matrix elements for each block matrix are P R (t) = P A (−t) = −2Jδ(t) + Θ(t)8J 2 e − Γ 2 t sin (2∆t) ,(22a)P K (t) = i8J 2 e − Γ 2 |t| cos (2∆t) ,(22b)P R βα (t) = −P A βα (−t) = −Θ(t)2e − Γ 2 |t| cos(2∆t) , (22c) P K βα (t) = −i2e − Γ 2 |t| sin(2∆t) ,(22d) Equation (21) is exact in the thermodynamic limit, as higher-order terms in the expansion are at least of the order O(1/N ). After Fourier transformation, defined as m(t) = ω e −iωt m(ω) with the integration measure ω = ∞ −∞ dω/2π, we integrate out the m c/q fields to obtain the generating functional W [α c/q , β c/q ] = −i ln Z as W = − 1 2 ω     α q β q α c β c     † ω     G K G R G A 0     ω     α q β q α c β c     ω .(23) The Green's function block matrices are given by G K = G K xx G K xy G K yx G K yy , G R = G R xx G R xy G R yx G R yy ,(24) and satisfy G K (ω) = −[G K ] † (ω) and G R (ω) = [G A ] † (ω). In terms of the original observ- ablesŜ x ,Ŝ y , the Green's functions become G K jj (ω) = −iF ω {Ŝ j (t),Ŝ j (0)} /N and G R jj (ω) = −iF ω Θ(t) [Ŝ j (t),Ŝ j (0)] /N , with F ω (f (t)) = ∞ −∞ dte iωt f (t). The elements of Eq. 
(24) are given by
$$ G^K_{xx}(\omega) = \frac{-i\Gamma\left[\Gamma^2 + 4(4\Delta^2+\omega^2)\right]}{2(\omega-\omega_1)(\omega-\omega_2)(\omega-\omega_1^*)(\omega-\omega_2^*)}\,, \tag{25a} $$
$$ G^K_{xy}(\omega) = \frac{4\Gamma(iJ\Gamma + 2J\omega - 2\Delta\omega)}{(\omega-\omega_1)(\omega-\omega_2)(\omega-\omega_1^*)(\omega-\omega_2^*)}\,, \tag{25b} $$
$$ G^K_{yy}(\omega) = \frac{-i\Gamma\left[\Gamma^2 + 16(2J-\Delta)^2 + 4\omega^2\right]}{2(\omega-\omega_1)(\omega-\omega_2)(\omega-\omega_1^*)(\omega-\omega_2^*)}\,, \tag{25c} $$
$$ G^R_{xx}(\omega) = \frac{4\Delta}{(\omega-\omega_1)(\omega-\omega_2)}\,, \tag{25d} $$
$$ G^R_{xy}(\omega) = \frac{\Gamma-2i\omega}{(\omega-\omega_1)(\omega-\omega_2)}\,, \tag{25e} $$
$$ G^R_{yx}(\omega) = \frac{-\Gamma+2i\omega}{(\omega-\omega_1)(\omega-\omega_2)}\,, \tag{25f} $$
$$ G^R_{yy}(\omega) = \frac{-4(2J-\Delta)}{(\omega-\omega_1)(\omega-\omega_2)}\,, \tag{25g} $$
where $\omega_{1/2} = -\frac{i}{2}(\Gamma\mp\Gamma_c)$ and $\Gamma_c = 4\sqrt{\Delta(2J-\Delta)}$.

4 Non-Equilibrium Signatures

In this section, we discuss the macroscopic, critical behavior of the driven-dissipative Ising model introduced in Eq. (10). It is generally believed that such Ising models find an emergent equilibrium behavior near their phase transition. This is often argued by considering a single observable such as the order parameter and showing that it satisfies an effective FDR [17,20,37-39]. In contrast, we consider different observables and show that the associated FDR and TRS are both violated even macroscopically. However, we show that a modified form of these relations emerges, dubbed the FDR* and TRS*, which governs the critical behavior of this system. In the following subsections, we derive the effective temperatures for different sets of observables, discuss the breaking and emergence of (modified) TRS, and finally discuss our results in the limit of vanishing dissipation.

4.1 Effective temperature

At thermal equilibrium and at low frequencies, the FDR in frequency space can be written as
$$ G^R_{ij}(\omega) - G^A_{ij}(\omega) = \frac{\omega}{2T}\,G^K_{ij}(\omega)\,. \tag{26} $$
We focus on the low-frequency limit as we will investigate the system at or near criticality, where the dynamics is governed by a soft mode. To compare against the FDR in the time domain, we identify $C_{O_iO_j}(t)\equiv iG^K_{ij}(t)$ and $\chi_{O_iO_j}(t)\equiv G^R_{ij}(t)$.
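Two features of the closed-form Green's functions in Eq. (25) can be checked in a few lines of code: the retarded response is governed by the poles $\omega_{1/2} = -\tfrac{i}{2}(\Gamma\mp\Gamma_c)$, so the slowest decay rate (the spectral gap) is $(\Gamma-\Gamma_c)/2$ and closes at $\Gamma=\Gamma_c$; and Eqs. (25e), (25f) directly exhibit the antisymmetry $G^R_{xy}(\omega) = -G^R_{yx}(\omega)$ anticipated by TRS* in Eq. (8b) for operators of opposite parity. A minimal numerical check (the parameter values are arbitrary):

```python
import numpy as np

J, Delta = 1.0, 0.6                      # arbitrary couplings with 0 < Delta < 2J
Gc = 4*np.sqrt(Delta*(2*J - Delta))      # critical dissipation Γ_c = 4√(Δ(2J−Δ))

def poles(G):
    """Poles ω_{1/2} = −(i/2)(Γ ∓ Γ_c) of the retarded Green's functions, Eq. (25)."""
    return -0.5j*(G - Gc), -0.5j*(G + Gc)

def gap(G):
    """Slowest decay rate of the response: distance of the poles from the real axis."""
    return min(-p.imag for p in poles(G))

assert np.isclose(gap(1.2*Gc), 0.1*Gc)   # gap = (Γ − Γ_c)/2 above threshold
assert np.isclose(gap(Gc), 0.0)          # soft mode: the gap closes at Γ = Γ_c

# TRS* for opposite parities: G^R_xy(ω) = −G^R_yx(ω), cf. Eqs. (25e), (25f)
G = 1.1*Gc
w = np.linspace(-3.0, 3.0, 7)
w1, w2 = poles(G)
D = (w - w1)*(w - w2)
GRxy = (G - 2j*w)/D
GRyx = (-G + 2j*w)/D
assert np.allclose(GRxy, -GRyx)
```

The gapless pole $\omega_1$ at $\Gamma=\Gamma_c$ is the soft mode invoked below Eq. (26), and the antisymmetry of the cross response holds at all frequencies, not just near $\omega=0$.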
2 The above equation follows from another version of the FDR given by [32]
$$-i\tilde\chi_{O_iO_j}(t) = \frac{1}{4T}\,\partial_t C_{O_iO_j}(t)\,,\tag{27}$$
where $\tilde\chi_{O_iO_j}(t) \equiv \frac{1}{2}\langle[\hat O_i(t),\hat O_j]\rangle = \frac{1}{2i}\big(G^R_{ij}(t)-G^A_{ij}(t)\big)$; the retarded and advanced Green's functions are defined directly from the operators as $G^{R/A}_{ij}(t) \equiv \mp i\,\Theta(\pm t)\,\langle[\hat O_i(t),\hat O_j]\rangle$. While Eq. (2) is restricted to $t>0$, the above equation is valid at all $t$, making it more suitable for the transition to Fourier space, i.e., Eq. (26). Of course, the two (causal and non-causal) versions of the FDR are equivalent in equilibrium.

Equation (26) has been extensively used to identify an effective temperature even for non-equilibrium systems [31,37,38]. In the non-equilibrium setting of our model, however, we would immediately run into a problem for $i \neq j$ when the corresponding operators have different parities under time-reversal transformation (e.g., $T_{\rm eff}$ becomes infinite or complex valued). To see why, let us anticipate that the TRS* relations reported in Eq. (8) indeed hold, a fact that we will later justify near criticality and at long times. It is then easy to see that $C_{O_iO_j}(t) = C_{O_jO_i}(-t) \simeq C_{O_iO_j}(-t)$ while $\tilde\chi_{O_iO_j}(t) = -\tilde\chi_{O_jO_i}(-t) \simeq -\epsilon_i\epsilon_j\,\tilde\chi_{O_iO_j}(-t)$. Now for two distinct operators $\hat O_i$ and $\hat O_j$ where $\epsilon_i = -\epsilon_j$, we find that both $C_{O_iO_j}(t)$ and $\tilde\chi_{O_iO_j}(t)$ are even in time. However, this is not compatible with Eq. (27), which requires $C_{O_iO_j}(t)$ and $\tilde\chi_{O_iO_j}(t)$ to have opposite parities. Postulating an effective FDR in this case, valid for all $t$, forces us to include a sign function, ${\rm sgn}(t)$; that is, we should substitute
$$\tilde\chi_{O_iO_j}(t) = \frac{1}{2i}\big(G^R_{ij}(t)-G^A_{ij}(t)\big) \;\longrightarrow\; {\rm sgn}(t)\,\tilde\chi_{O_iO_j}(t) = \frac{1}{2i}\big(G^R_{ij}(t)+G^A_{ij}(t)\big)\,.$$
Notice that the extended FDR is consistent with the causal FDR in Eq. (2) when $t > 0$, but is now conveniently valid at all times. This extension is informed by the anticipated form of the TRS* which we will justify later.

The fluctuation-dissipation relation is now conveniently cast in frequency space: for arbitrary operators $\hat O_i$ and $\hat O_j$ (with $\epsilon_i$ and $\epsilon_j$ being the same or distinct), the updated FDR takes the form
$$G^R_{ij}(\omega) - \epsilon_i\epsilon_j\,G^A_{ij}(\omega) = \frac{\omega}{2T_{ij}(\omega)}\,G^K_{ij}(\omega)\,,\tag{28}$$
where we have now allowed for a frequency- and operator-dependent effective temperature $T_{ij}(\omega)$. It is now clear that, while for $\epsilon_i = \epsilon_j$ the above equation recovers the structure of the FDR (cf. Eq. (26)), a different combination, $G^R_{ij}(\omega) + G^A_{ij}(\omega)$, appears on the left-hand side when $\epsilon_i = -\epsilon_j$. The above equation can be brought into a more compact version again by anticipating the TRS* in Eq. (8b) to write $\epsilon_i\epsilon_j\,G^A_{ij}(\omega) \simeq G^A_{ji}(\omega)$. Utilizing the relation $G^A_{ji}(\omega) = G^R_{ij}(\omega)^*$, we are finally in a position to write an equation for the effective temperature in the low-frequency limit:
$$T_{ij} = \lim_{\omega\to 0}\,\frac{\omega}{2}\,\frac{G^K_{ij}(\omega)}{G^R_{ij}(\omega) - G^A_{ji}(\omega)} = \lim_{\omega\to 0}\,\frac{\omega}{4}\,\frac{-iG^K_{ij}(\omega)}{{\rm Im}\,G^R_{ij}(\omega)}\,.\tag{29}$$
We have taken the low-frequency limit appropriate near criticality. Again we stress that the above equation is consistent with the standard form of the effective FDR for $\epsilon_i = \epsilon_j$, and it correctly incorporates the TRS* for $i \neq j$ with opposite parities. A shorter, but perhaps less physically motivated, route to the above equation is to start directly from the causal form of the FDR in Eq. (2). The Fourier transform of this equation is given by [34]
$$\chi_{O_iO_j}(\omega) = \frac{1}{2T}\left[\mathcal{P}\!\int\frac{d\omega'}{2\pi}\,\frac{\omega'}{\omega-\omega'}\,C_{O_iO_j}(\omega') - \frac{i\omega}{2}\,C_{O_iO_j}(\omega)\right],\tag{30}$$
where $\mathcal{P}$ stands for the principal part. Here too, we shall assume the TRS* in Eq. (8a): with $C_{O_iO_j}(t) \simeq C_{O_jO_i}(t) = C_{O_iO_j}(-t)$ regardless of the operators' parities, the correlation function $C_{ij}(t)$ is even in time, hence its Fourier transform, $C_{O_iO_j}(\omega)$, is purely real.
Taking the imaginary part of the above equation then yields Im χ O i O j (ω) = −(ω/4T )C O i O j (ω) where T has to be identified with the effective temperature T ij (ω). Therefore, we arrive at the same definition of the effective temperature in Eq. (29). Using Eq. (29), we can now identify the effective temperature in the driven-dissipative Ising model (defining i, j ∈ {x, y}) T xx = Γ 2 + 16∆ 2 32∆ ,(31a)T yy = Γ 2 + 16(∆ − 2J) 2 32(∆ − 2J) ,(31b)T xy = −T yx = −2JΓ 2 Γ 2 + 16∆(2J − ∆) . These expressions are calculated everywhere in the normal phase and generally take different values (see also [38]), underscoring the non-equilibrium nature of the model at the microscopic level. We note that the effective temperatures reported above have a physical significance only near the phase transition where the slow mode governs the dynamics. This is because we have neglected noncritical contributions in the derivation of Eq. (28) (by invoking TRS*). Equations (31a) and (31b) display non-analytic behaviour, though in different regions of the phase diagram. T xx diverges when ∆ → 0, in agreement with Ref. [53] that reports an infinite temperature in theσ x basis. In contrast, T yy diverges when ∆ = 2J for any finite value of Γ. This divergence coincides with the change in the dynamical behaviour from overdamped to underdamped dynamics as pointed out in [31]. Finally, T xy = −T yx are everywhere finite but opposite for the opposite order of the observables; this is tied to the TRS* as we will discuss later. The definition of the low-frequency effective temperature is particularly motivated near the phase boundary where there exists a soft mode that characterizes the low-frequency dynamics [31]. Interestingly, at (or near) the phase transition, we find T xx = −T xy = T yx = −T yy = J.(32) Remarkably, these effective temperatures find the same magnitude, but possibly with different signs. 
While focusing on a single observable (sayŜ x ) and its dynamics, one might be led to conclude that the system is in effective equilibrium. However, a different observable (sayŜ y ) exhibits the opposite effective temperature. Notice that all correlation functions (involvinĝ S x and/orŜ y ) are divergent at the phase transition, i.e., they are all sensitive to the soft mode; we will make this more precise in Section 5 where we develop an effective field theory. This suggests that although the critical behavior is governed by a single (soft) mode at the transition, the system is genuinely non-equilibrium even macroscopically. To support these analytical results, we have numerically simulated [39] the FDR in the time domain (cf. Eq. (2)) and at a representative critical point on the phase boundary for a finite, yet large, system with N = 100 spins. Correlation and response functions at criticality and at a finite system size require an analysis beyond the quadratic treatment presented here and thus serves as a nontrivial check of our results. Also, working in the time domain and restricting to t > 0, we circumvent the issues that arise in the frequency domain; see the discussion in the beginning of this subsection. Indeed, we find an excellent agreement in Fig. 2 between the analytical results (in frequency space) and the numerical results (in the time domain) with the exception of short time differences; the discrepancy at short times is a consequence of the fact that the (observable-dependent) effective temperature is defined in the zero-frequency limit of Eq. (28), therefore characterizing the long-time dynamics. In fact, the agreement is remarkably good even at relatively short times Jt 1. Finally, we remark that the difference at short times is not due to finite-size effects, and exists even in the limit N → ∞. 
TRS breaking

As discussed in Section 3, broken TRS allows for nonzero correlators such as $\langle\{\hat S_x,\hat S_y\}\rangle$ that are otherwise odd under the time-reversal transformation. Indeed, we find that this correlator is nonzero and is even critical. More precisely, we find from Eq. (25b) that
$$C_{xy}(t) \equiv iG^K_{xy}(t) = \frac{4}{\Gamma_c}\left[\frac{-J\Gamma + {\rm sgn}(t)(J-\Delta)(\Gamma-\Gamma_c)}{\Gamma-\Gamma_c}\,e^{-\frac{\Gamma-\Gamma_c}{2}|t|} - \frac{-J\Gamma + {\rm sgn}(t)(J-\Delta)(\Gamma+\Gamma_c)}{\Gamma+\Gamma_c}\,e^{-\frac{\Gamma+\Gamma_c}{2}|t|}\right].\tag{33}$$
(For ease of notation, we have replaced $C_{S_iS_j}$ by $C_{ij}$; similarly for $\chi_{ij}$.) Specifically, at equal times, we have $C_{xy}(t=0) = -8J\Gamma/(\Gamma^2-\Gamma_c^2)$. Indeed, the equal-time cross-correlation diverges as $\sim 1/(\Gamma-\Gamma_c)$ upon approaching the critical point $\Gamma\to\Gamma_c$. This is a stark manifestation of broken TRS at a macroscopic level. We also note that both $C_{xx}, C_{yy} \sim 1/(\Gamma-\Gamma_c)$ diverge in a similar fashion. Again, this is because $\hat S_x$ and $\hat S_y$ share the same soft mode, as will be shown in Section 5. The macroscopic breaking of TRS alters the Onsager symmetry relations in an exotic fashion that is distinct for the correlation and response functions. Indeed, the analytical expression in Eq. (33) shows that, near criticality and at sufficiently long times,
$$C_{xy}(t) \simeq -\frac{4J\Gamma}{\Gamma_c(\Gamma-\Gamma_c)}\,e^{-\frac{\Gamma-\Gamma_c}{2}|t|}\,,\tag{34}$$
hence, $C_{xy}(t) \simeq C_{xy}(-t)$, or equivalently, $C_{xy}(t) \simeq C_{yx}(t)$ up to noncritical corrections; far from criticality, the correlation functions do not generally satisfy this symmetry relation. Furthermore, the analytical expressions for the response functions in Eqs. (25e) and (25f) show that $\chi_{xy}(t) = -\chi_{yx}(t)$. Interestingly, the cross-correlation and -response functions exhibit opposite parities. These analytical considerations are further supported by the numerical simulation shown in Fig. 3 at criticality, confirming
$$C_{xy}(t) \simeq C_{yx}(t)\,,\tag{35a}$$
$$\chi_{xy}(t) \simeq -\chi_{yx}(t)\,,\tag{35b}$$
consistent with the TRS* in Eq. (8).
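A small numerical check of Eq. (33), with illustrative parameters (not from the paper): the equal-time value reproduces the quoted closed form $-8J\Gamma/(\Gamma^2-\Gamma_c^2)$, and the ratio $C_{xy}(t)/C_{xy}(-t) \to 1$ as $\Gamma\to\Gamma_c$, illustrating how the diverging symmetric part overwhelms the bounded antisymmetric part:

```python
import numpy as np

J, Delta = 1.0, 0.5   # J != Delta so the antisymmetric part is nonzero
Gc = 4.0 * np.sqrt(Delta * (2 * J - Delta))

def C_xy(t, Gamma):
    """Cross-correlation of Eq. (33), normal phase Gamma > Gamma_c."""
    s = np.sign(t)
    gm, gp = Gamma - Gc, Gamma + Gc
    term1 = (-J * Gamma + s * (J - Delta) * gm) / gm * np.exp(-gm * abs(t) / 2)
    term2 = (-J * Gamma + s * (J - Delta) * gp) / gp * np.exp(-gp * abs(t) / 2)
    return (4 / Gc) * (term1 - term2)

# Equal-time value vs. the closed form quoted in the text
Gamma = 5.0
ct0 = C_xy(0.0, Gamma)
ct0_closed = -8 * J * Gamma / (Gamma**2 - Gc**2)

# Emergent TRS*: C_xy(t) ~ C_xy(-t) upon approaching the critical point
t = 1.0
ratios = [C_xy(t, Gc + eps) / C_xy(-t, Gc + eps) for eps in (1e-1, 1e-3, 1e-5)]
```

The ratios approach unity as $\Gamma - \Gamma_c \to 0$, in line with Eqs. (34) and (35a).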
Despite the broken TRS, the correlation and response functions retain definite, though distinct, parities under time-reversal.

Weakly-dissipative limit

In this section, we briefly consider a special limit of the driven-dissipative Ising model, namely a weakly-dissipative critical point at $\Delta \to 2J$ and $\Gamma \to 0$; see Fig. 1(b). It was shown in previous work that this limit leads to a different critical dynamics than a generic critical point at finite $\Gamma$ [31,39]. Here, we are interested in the TRS breaking and its possible emergence in the limit of vanishing dissipation. Interestingly, we find that the fate of the TRS depends on the way that this critical point is approached, and that this distinction only exists in the thermodynamic limit where the system is gapless at the phase boundary. We shall consider two different scenarios below. In the first scenario, let us set $\Delta = 2J$ and take the limit $\Gamma \to 0$. Fourier transforming Eq. (25c) to the time domain gives
$$C_{yy}(t) = \lim_{\Delta\to 2J} iG^K_{yy}(t) = 2e^{-\frac{1}{2}\Gamma|t|}\,.\tag{36}$$
We thus see that the $\hat S_y$ correlator is finite at the weakly-dissipative critical point, indicating that $\hat S_y$ has become "gapped". This appears to suggest a return to the equilibrium scenario where $\hat S_y$ plays no role in critical behaviour. However, the cross-correlation given by Eq. (33) remains nonzero and even critical at the weakly-dissipative point: $C_{xy}(t=0) \sim 1/\Gamma$. Therefore, even in the limit of vanishing dissipation, TRS is macroscopically broken. In the second scenario, we consider $\Delta > 2J$ and first take the limit $\Gamma \to 0$. In this case, we have $\Gamma_c = 4i\sqrt{\Delta(\Delta-2J)} \equiv i\omega_c$, which then leads to
$$\lim_{\Gamma\to 0} C_{xy}(t) = -\frac{8(\Delta-J)}{\omega_c}\,\sin\frac{\omega_c t}{2}\,.\tag{37}$$
This expression goes to zero at $t=0$ for any value of $\Delta$, including the weakly-dissipative critical point as $\Delta\to 2J^+$, recovering the equilibrium result. The different behavior in the two scenarios lies in the fact that the system has a finite dissipative gap when we send $\Gamma\to 0$ before sending $\Delta\to 2J$, but not vice versa.
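The two orders of limits can be made concrete with the equal-time value $C_{xy}(0) = -8J\Gamma/(\Gamma^2-\Gamma_c^2)$, continued to $\Delta > 2J$ where $\Gamma_c^2 = 16\Delta(2J-\Delta) < 0$. A sketch with illustrative parameters:

```python
# Equal-time cross-correlation C_xy(0) = -8 J Gamma / (Gamma^2 - Gamma_c^2),
# with Gamma_c^2 = 16 Delta (2J - Delta), which is negative for Delta > 2J.
def Cxy0(J, Delta, Gamma):
    Gc2 = 16 * Delta * (2 * J - Delta)
    return -8 * J * Gamma / (Gamma**2 - Gc2)

J = 1.0

# Scenario 1: set Delta = 2J first, then Gamma -> 0: C_xy(0) = -8J/Gamma diverges.
s1 = [Cxy0(J, 2 * J, G) for G in (1.0, 0.1, 0.01)]

# Scenario 2: Delta > 2J, Gamma -> 0 first: C_xy(0) -> 0 and TRS is restored.
s2 = [Cxy0(J, 2.5 * J, G) for G in (1.0, 0.1, 0.01)]
```

Scenario 1 produces an ever-growing TRS-odd correlator, while scenario 2 sends it to zero, matching the discussion above.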
It has been shown that the steady state of a system with a finite dissipative gap becomes purely a function of the Hamiltonian in the limit of vanishing dissipation, i.e.,ρ ss = f (Ĥ) [54]; see also [55]. In this case, the steady state for our model can be written as a function of the Hamiltonian in Eq. (9), and thus respects TRS. A system of finite size would fall under this category as well because the system will always be gapped for N < ∞, irrespective of the order of limits taken w.r.t. ∆ and Γ. The argument about weakly-dissipative states commuting with H in gapped systems, however, fails in a gapless system corresponding to the first order of limits, where we sent ∆ → 2J before taking the weakly-dissipative limit. Indeed, we find that in this case the TRS is macroscopically broken even in the limit of vanishing dissipation. One can also determine the behavior of the effective temperature at the weakly-dissipative critical point. However, since the operatorŜ y is gapped, the definition of the low-frequency effective temperature doesn't seem appropriate. In fact, one finds that the effective temperatures involving this operator take different values (and even diverge) depending on the order of limits. Therefore, we will not report the effective temperature in this limit. Effective Field Theory In this section, we develop a simple, generic field-theory analysis that elucidates the origin of the effective temperatures and their signs as well as FDR* and TRS*. We first need to construct an action that maps the spin operatorsŜ x andŜ y to the fields x(t) and y(t), respectively. This is achieved by starting from the generating functional W in Eq. (23) and constructing a quadratic action in terms of x and y fields that exactly reproduces the correlations of the corresponding operators. 
This is simply done via a Hubbard-Stratonovich transformation on exp(iW [α c/q , β c/q ]) as e iW = D[x c/q , y c/q ]e iS eff [x c/q ,y c/q ]+i ω j T (−ω)v(ω) ,(38) where we have absorbed an unimportant normalization factor into the measure, and we have defined the source and field vectors j = (α q , β q , α c , β c ) T and v = (x c , y c , x q , y q ) T . The resulting action is given by where we have written the kernel in terms of 2 × 2 block matrices: S eff = 1 2 ω v † (ω) 0 D A D R D K ω v(ω) ,(39)D R (ω) = [D A ] T (−ω) = 2J − ∆ 1 4 (Γ − 2iω) 1 4 (−Γ + 2iω) −∆ , D K (ω) = i Γ 2 1 0 0 1 .(40) By inspecting the form of D R , we can identify the soft mode. At the critical point (Γ → Γ c ≡ 4 ∆(2J − ∆)), this matrix takes the form D R cr (ω = 0) = 2J − ∆ ∆(2J − ∆) − ∆(2J − ∆) −∆ .(41) A convenient decomposition of D R cr (ω = 0) is given by D R cr (ω = 0) = UΛU where U = 1 √ 2J∆ ∆ − 1 4 Γ c 1 4 Γ c ∆ , Λ = 0 0 0 −2J ,(42) valid for 0 < ∆ < 2J; the regime ∆ > 2J needs to be dealt with separately. The matrix U is orthogonal, i.e., UU T = I. Notice that this decomposition can be viewed as an SVD where D R cr (ω = 0) = UΛV T , where V = U T with both U and V being orthogonal matrices. In this sense, the left and right vectors are rotated with respect to the original directions in opposite directions; see Fig. 4(a). As we shall see, this is the reason behind the new FDR* and TRS*. This decomposition allows us to express both classical and quantum components of φ, ζ in terms of x, y as φ c ζ c = U x c y c = 1 √ 2J∆ ∆x c − 1 4 Γ c y c 1 4 Γ c x c + ∆y c , (43a) φ q ζ q = U T x q y q = 1 √ 2J∆ ∆x q + 1 4 Γ c y q − 1 4 Γ c x q + ∆y q . (43b) We note that the diagonal elements of Λ define the masses of the fields φ and ζ on the phase boundary. Therefore, we can identify φ as the soft mode and ζ as the gapped field. In addition, the Keldysh element of the kernel remains unchanged, U T D K U = D K . 
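The decomposition in Eq. (42) is easy to verify numerically, together with the orthogonality of $U$ and the off-diagonal antisymmetry of the critical kernel (a sketch; parameter values are illustrative):

```python
import numpy as np

J, Delta = 1.0, 0.5                      # any 0 < Delta < 2J
Gc = 4.0 * np.sqrt(Delta * (2 * J - Delta))

# Inverse retarded kernel at the critical point, Eq. (41)
r = np.sqrt(Delta * (2 * J - Delta))     # = Gc/4
D0 = np.array([[2 * J - Delta,  r],
               [-r,            -Delta]])

# Decomposition D0 = U @ Lam @ U with U orthogonal, Eq. (42)
U = np.array([[Delta,  -Gc / 4],
              [Gc / 4,  Delta]]) / np.sqrt(2 * J * Delta)
Lam = np.diag([0.0, -2 * J])

orthogonal = np.allclose(U @ U.T, np.eye(2))
decomposed = np.allclose(U @ Lam @ U, D0)

# Off-diagonal antisymmetry, tau_z D0 tau_z = D0^T
tz = np.diag([1.0, -1.0])
antisym_offdiag = np.allclose(tz @ D0 @ tz, D0.T)
```

The zero entry of $\Lambda$ identifies the soft mode $\phi$, while the $-2J$ entry is the mass of the gapped field $\zeta$.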
FDR* and TRS* The field-theory representation makes the origin of the results shown in Section 4 clear. The correlation and response functions can now be expressed in terms of φ and ζ. At the phase boundary, the low-frequency effective temperature is captured purely by the soft mode φ because ζ is gapped and does not affect the low-frequency behavior of the model. In other words, the dominant contribution to the effective temperature follows from the correlation and response functions involving φ, while those involving ζ as well as the cross-correlations produce noncritical corrections which can be neglected. We have, up to these corrections, T xx T φ ,(44a)T xy U 12 U 21 T φ = −T φ ,(44b)T yx T φ ,(44c)T yy U 12 U 21 T φ = −T φ ,(44d) where T φ ≡ lim ω→0 ω 2 φ c (ω)φ c (−ω) φ c (ω)φ q (−ω) − φ q (ω)φ c (−ω) ,(44e) can be viewed as the effective temperature of the soft mode. This interesting result is purely a consequence of the non-Hermitian structure of Eq. (41). Technically, one can see that the same pattern of effective temperatures emerges whenever the inverse retarded Green's function D 0 ≡ D R cr (ω = 0) obeys the relation τ z D 0 τ z = D T 0 , with τ z = 1 0 0 −1 ,(45) which simply states that the off-diagonal part of the matrix D 0 is antisymmetric. Note that D 0 is real, but non-Hermitian. The fact that the kernel D 0 satisfies the above property can be argued solely on the grounds that the Hamiltonian itself is time-reversal symmetric. To show this, let us assume the contrary, namely that the off-diagonal part of the matrix D 0 has a symmetric component. This would give rise to a coupling ∼ x c y q + x q y c where the fields' time dependence is implicit. Rewriting the classical and quantum fields in terms of the fields on the forward and backward branches of the Keldysh contour [52], such coupling becomes ∼ x + y + − x − y − . 
This term takes the structure of a Hamiltonian contribution to the action (S + − S − ); however, the Hamiltonian does not couple x and y since it is time-reversal invariant. We should then conclude that the off-diagonal part of D 0 is antisymmetric. In equilibrium, the off-diagonal terms are simply zero (at ω = 0); however, in a driven-dissipative system, dissipation naturally gives rise to nonzero (though antisymmetric) off-diagonal matrix elements. The role of the Z 2 symmetry in this analysis is to guarantee that there is only one soft mode described by a single real field, in contrast with other symmetries (such as U (1) symmetry) which would require more (or complex) fields to describe the critical behavior. In addition, an explicit Z 2 symmetry forbids a linear term in the action whose existence could alter our results. We remark that a generalized version of the FDR, G K (ω) = G R (ω)F(ω) − F(ω)G A (ω) , is also utilized in the literature [17,19] to determine the distribution function matrix F(ω). While in thermal equilibrium F(ω) = coth(ω/2T )I is proportional to the identity, the distribution function is allowed to become a nontrivial matrix in driven-dissipative systems, specifically in the context of the open Dicke model (possessing the same symmetries as those considered here). For the cavity mode, it was shown that this matrix finds two eigenvalues, ±λ(ω), whose low-frequency behaviour is given by λ(ω) ∼ 2T eff /ω [17]. The positive eigenvalue was then identified as the effective temperature. In contrast, our analysis clarifies the interpretation of the negative effective temperature in the form of the FDR* in Eq. (5) and its origin due to time-reversal symmetry breaking. Next, we derive the TRS* relations for the correlation and response functions in Eq. (8), namely C xy (t) C yx (t) while χ xy (t) − χ yx (t) up to noncritical corrections. As for the effective temperatures, the key is to keep only the critical contributions from φ c/q . 
Recall that $\zeta_{c/q}$ are gapped, hence leading to noncritical corrections at or near the phase transition. The symmetry of the correlation function follows in a simple fashion as
$$C_{xy}(t) = 2\langle x_c(t)y_c(0)\rangle \simeq 2U_{11}U_{12}\langle\phi_c(t)\phi_c(0)\rangle \simeq 2\langle y_c(t)x_c(0)\rangle = C_{yx}(t)\,.\tag{46}$$
For the response function, we have
$$\chi_{xy}(t) = \langle x_c(t)y_q(0)\rangle \simeq U_{11}U_{21}\langle\phi_c(t)\phi_q(0)\rangle\,,\qquad \chi_{yx}(t) = \langle y_c(t)x_q(0)\rangle \simeq U_{12}U_{11}\langle\phi_c(t)\phi_q(0)\rangle\,.\tag{47}$$
Again using the fact that $U_{12} = -U_{21}$, one can see that $\chi_{xy}(t) \simeq -\chi_{yx}(t)$. Finally, we remark that the field $y$ becomes gapped at the weakly dissipative point, as one can see from Eq. (43) (see also Fig. 4(b)), which leads to the noncritical $\langle\hat S_y^2\rangle$ fluctuations. One thus recovers the equilibrium behavior, although care should be taken with the $\Gamma\to 0$ limit due to the order of limits discussed in Section 4.3.

Onsager reciprocity relations

In this section, we derive the modified form of the Onsager reciprocity relations. As a starting point, consider the saddle-point solution of Eq. (39): $D^R(i\partial_t)\cdot\mathbf{x}(t) = 0$, where we have replaced $\omega\to i\partial_t$ and defined $\mathbf{x} = (x,y)$; we have dropped the subscript $c$ for convenience. By rearranging the time derivatives, we find the equation
$$\frac{d}{dt}\mathbf{x}(t) = -\mathbf{M}\cdot\mathbf{x}(t)\,,\quad\text{with}\quad \mathbf{M} = \begin{pmatrix}\Gamma/2 & 2\Delta\\ 4J-2\Delta & \Gamma/2\end{pmatrix}.\tag{48}$$
This equation describes the average dynamics of $\mathbf{x}(t)$ (i.e., $\langle\hat S_{x,y}\rangle$) near the steady state and governs its decay to zero. Adopting a slightly more general notation, the dynamics near the steady state can be written as
$$\frac{d}{dt}\langle x_i\rangle_t = -\sum_k M_{ik}\langle x_k\rangle_t\,,\tag{49}$$
where $\{x_i\}$ denote a set of macroscopic variables, and $\langle\cdot\rangle_t$ represents the statistical (and, the quantum) average at time $t$; we later specialize to the variable $\mathbf{x}$ by setting $x_1 \equiv x$ and $x_2 \equiv y$. Now defining $L_{ij} = \sum_k M_{ik}\langle x_k x_j\rangle$, Onsager reciprocity relations in equilibrium take the form
$$L_{ij} = \epsilon_i\epsilon_j\,L_{ji}\,,\tag{50}$$
where $\epsilon_i$ denotes the parity of the corresponding field under time-reversal transformation.
These relations are a direct consequence of the equilibrium FDR-in the form of Onsager's regression hypothesis-together with the TRS. The Onsager reciprocity relations are of great importance for their fundamental significance as well as practical applications. We shall refer the interested reader to Ref. [34] for the proof of the reciprocity relations in a classical setting. In the non-equilibrium context of our model with both FDR and TRS broken, the Onsager reciprocity relations do not generally hold; however, given the modified form of the FDR* and TRS* in Eqs. (7) and (8), one may expect a modified form of the Onsager relations perhaps with a different parity than the one expected in equilibrium. Here, we show that this is indeed the case. To this end, we first note that the Onsager's regression hypothesis is modified in a straightforward fashion as x i t = j λ k B T x i (t)x j (0) ,(51) assuming that a "magnetic" field λ has been applied along the i direction before it is turned off at time t = 0. The only difference from the standard Onsager regression hypothesis is the prefactor j appearing out in front, a factor that simply carries over from Eq. (7). Combining with Eq. (49), we have d dt x i (t)x j (0) = − M ik x k (t)x j (0) .(52) Notice that the factors of j cancel out on both sides. Finally, using the TRS* of the correlation function, C ij (t) = C ji (t) regardless of the corresponding parities, and setting t = 0, we find 3 L ij L ji .(53) Notice the absence of the TRS parity factors i j ; cf. the equilibrium Onsager reciprocity relation in Eq. (50). To verify that this relation holds in our non-equilibrium setting, it is important to distinguish the contribution of the soft mode, responsible for the critical behavior, from the gapped mode. Therefore, we shall consider the dynamics at a coarse-grained level where the gapped mode is "integrated out". To this end, let's write M = mφ R φ L + M ζ R ζ L ,(54) where we have used a dyadic notation. 
Here, φ R/L and ζ R/L define the right/left eigenvectors of the matrix M. These vectors are biorthogonal, that is, φ L · φ R = ζ L · ζ R = 1 while φ L · ζ R = ζ L · φ R = 0. Furthermore, m and M represent the two eigenvalues of the matrix M: the eigenvalue m vanishes at the critical point defining the soft mode, while M remains finite (at the order of J) and defines the gapped mode. The notation for the soft and gapped modes mirror our conventions for the effective field theory. In fact, the above diagonalization is a similar decomposition to that of the previous section but in a different basis (notice that M is "rotated" with respect to D R ). While we do not need the explicit form of the eigenvalues and the (right and left) eigenvectors, here we provide them for completeness: φ R = − ∆ 2J − ∆ , 1 , φ L = 1 2 − 2J − ∆ ∆ , 1 , m = Γ − 4 (2J − ∆)∆ , ζ R = ∆ 2J − ∆ , 1 , ζ L = 1 2 2J − ∆ ∆ , 1 , M = Γ + 4 (2J − ∆)∆ .(55) Now, the coarse-grained dynamics at sufficiently long times is governed solely by the soft mode, while the gapped field quickly decays to zero (ζ L ·x = 0). Therefore, the slow dynamics is given by d dt x = −M · x ,(56) where we have defined M = mφ R φ L keeping only the critical component. We are finally in a position to study the relation between L xy and L yx explicitly defined by L xy = M xx xy + M xy yy , L yx = M yx xx + M yy yx .(57) Now notice that the fluctuations x i x j ∼ φ R i φ R j φ 2 where φ 2 represents the critical fluctuations (to be identified with φ 2 c in the previous section); this simply means that the dominant contribution to fluctuations is given by the overlap of dynamical variables with the critical field. Additionally, using the biorthogonality ζ L · φ R = 0, we have ζ L 1 , ζ L 2 ∝ −φ R 2 , φ R 1 . We can then write L xy − L yx ∝ ζ L · M · φ R = 0 ,(58) where the last equality follows from ζ L · M ∝ ζ L · φ R = 0. 
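The spectral data of the drift matrix can be checked numerically; note that with $\mathbf{M}$ as written in Eq. (48), the two decay rates come out as $(\Gamma \mp \Gamma_c)/2$ in this normalization. The symmetry $L_{xy} \simeq L_{yx}$ then follows because the critical fluctuations align with $\phi_R$. A sketch with illustrative parameters:

```python
import numpy as np

J, Delta, Gamma = 1.0, 0.5, 4.0          # normal phase
Gc = 4.0 * np.sqrt(Delta * (2 * J - Delta))

M = np.array([[Gamma / 2,      2 * Delta],
              [4 * J - 2 * Delta, Gamma / 2]])

q = np.sqrt(Delta / (2 * J - Delta))
phiR = np.array([-q, 1.0]);  phiL = 0.5 * np.array([-1 / q, 1.0])
zetR = np.array([ q, 1.0]);  zetL = 0.5 * np.array([ 1 / q, 1.0])

# Biorthogonality of the left/right eigenvectors
biorth = (np.isclose(phiL @ phiR, 1) and np.isclose(zetL @ zetR, 1)
          and np.isclose(phiL @ zetR, 0) and np.isclose(zetL @ phiR, 0))

# Dyadic reconstruction with the rates (Gamma -/+ Gc)/2
m_slow, m_fast = (Gamma - Gc) / 2, (Gamma + Gc) / 2
dyadic = np.allclose(m_slow * np.outer(phiR, phiL)
                     + m_fast * np.outer(zetR, zetL), M)

# Onsager matrix built from the soft mode only: L_xy = L_yx
Mt = m_slow * np.outer(phiR, phiL)       # coarse-grained drift
X = np.outer(phiR, phiR)                 # critical fluctuations <x_i x_j>
L = Mt @ X
onsager = np.isclose(L[0, 1], L[1, 0])
```

Because $\tilde{\mathbf{M}}\cdot\mathbf{X} = m\,\phi_R(\phi_L\cdot\phi_R)\phi_R^T$ is an outer product of $\phi_R$ with itself, the resulting $L$ is symmetric, in harmony with Eq. (53).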
4 We thus arrive at the relation L yx L xy in harmony with our modified version of the Onsager reciprocity relation. This should be contrasted with the reciprocity relation in equilibrium: L xy = −L yx with x (y) even (odd) under time-reversal transformation. Driven-Dissipative Coupled Bosons In this section, we go beyond the infinite-range model discussed so far and consider a quadratic model of driven-dissipative bosons. The model being quadratic can be solved exactly using any number of techniques. For a coherent presentation, we will adopt a simple (Keldysh) field-theoretical analysis. Our main point is however that the conclusions of this work apply to a wider range of models. To be specific, consider a bosonic model on a cubic lattice in d dimensions with the Hamiltonian H = − J 2d ij (â i +â † i )(â j +â † j ) + 2∆ iâ † iâ i ,(59) and subject to the dissipationL i = √ Γâ i .(60) The coefficients in the Hamiltonian are chosen for later convenience. Notice that the Hamiltonian is time-reversal symmetric. This follows from either writing the operatorâ in terms of two quadratures that are even and odd under time-reversal (see below), or directly by noting thatTâT −1 =â and similarly forâ † (site index suppressed) althoughT is antiunitary (T iT −1 = −i) [56]. The above bosonic Hamiltonian is therefore real and time-reversal symmetric. In addition, the Liouvillian governing the dynamics is Z 2 symmetric under the transformation a → −a, similar to the driven-dissipative Ising model considered in Eq. (10). This symmetry is broken at the phase transition where a becomes nonzero in the ordered phase. As discussed in Sec. 3, the Z 2 symmetry provides a minimal setting where the timereversal symmetry breaking or emergence can be investigated near criticality. The Keldysh action for this model can be constructed in a straightforward fashion using a coherent-state representation mapping operators to c-valued fields asâ i → a i (t) and a † i → a * i (t). 
A path-integral formalism can be straightforwardly constructed in terms of these bosonic fields on a closed contour with the Keldysh action given by [20] S K = S H + S D ,(61) where S H,D represent the coherent and dissipative terms, respectively. The coherent term in the action is given by S H = σ=+,− σ t i a * iσ i∂ t a iσ − H[a iσ , a * iσ ] ,(62) with σ = ± representing the forward and backward branches of the contour. The last term represents the (normal-ordered) Hamiltonian in the coherent-state representation. The relative sign of the forward and backward branches has its origin in the commutator [Ĥ,ρ]. The dissipative term in the action takes the form S D = −iΓ i t a i+ a * i− − 1 2 a * i+ a i+ + a * i− a i− .(63) Upon a Keldysh rotation a cl/q ≡ (a + ± a − )/ √ 2 (site index i being implicit), the Keldysh action is then written in terms of classical and quantum fields. Here, it is more convenient to cast the bosonic field in terms of its real and imaginary parts (the two quadratures) as a i (t) = (Φ i (t) − iΠ i (t))/2 where the factor of 1/2 is chosen for later convenience. The corresponding operators can be viewed as a scalar field and the conjugate momentum. These Hermitian operators obey the same symmetry relations as x and y in the DDIM, where Φ is even under TRS while Π is odd. The anti-unitary nature of the time-reversal transformation makes the bosonic fields real and invariant under TRS. The Lagrangian L K defined via the Keldysh action S K = dtL K then takes the form [20] L K = i 1 2 Φ iq ∂ t Π ic − 1 2 Π iq ∂ t Φ ic − ∆(Φ ic Φ iq + Π ic Π iq ) + Γ 4 (Φ iq Π ic − Φ ic Π iq + iΦ 2 iq + iΠ 2 iq ) + ij J 2d (Φ ic Φ jq + Φ iq Φ jc ) ,(64) in terms of classical and quantum fields Φ ic/q and Π ic/q . In momentum space, the Keldysh action takes almost an identical form to Eq. (39) with the substitution v → (Φ c , Π c , Φ q , Π q ) where the frequency and momentum (ω, k) are implicit and J → J k = J d (cos k 1 + · · · + cos k d ). 
This implies that this model too exhibits a phase transition at the same set of parameters. While a nonlinear term is needed to regulate things on the ordered side, we shall only consider the critical behavior. Green's functions Since Eq. (64) is identical to Eq. (39) upon the above substitutions, we can immediately write the correlation and response functions of Φ and Π. They are simply given by Eq. (25) once with J is substituted by J k . Using the definitions of the bosonic variables in terms of the real fields, we can easily determine the form of the bosonic Green's functions: G K =   G K aa † G K aa G K a † a † G K a † a   , G R =   G R aa † G R aa G R a † a † G R a † a   ,(65) where G K aa † (ω, k) = G K a † a (−ω, k) = −iΓ 3Γ 2 + 4(32J 2 k + 12∆ 2 + 8∆ω + 3ω 2 − 8J k (4∆ + ω)) 8(ω − ω 1 )(ω − ω 2 )(ω − ω * 1 )(ω − ω * 2 ) ,(66a)G K aa (ω, k) = − G K a † a † (ω, k) * = iΓ 128J 2 k + Γ 2 − 16iJ k (Γ − 8i∆) + 4(4∆ 2 + ω 2 ) 8(ω − ω 1 )(ω − ω 2 )(ω − ω * 1 )(ω − ω * 2 ) ,(66b)G R aa † (ω, k) = G R a † a (−ω, k) * = −4J k + 4∆ + 2ω + iΓ 2(ω − ω 1 )(ω − ω 2 ) ,(66c)G R aa (ω, k) = G R a † a † (ω, k) * = −2(J k − ∆) (ω − ω 1 )(ω − ω 2 ) ,(66d) and G R (ω, k) = [G A (ω, k)] † , G K (ω, k) = −[G K (ω, k)] † . In a slight abuse of notation, we have defined the modes ω 1/2 = −i(Γ∓Γ c (J k ))/2 (introduced earlier in Section 3.2) and defined the function Γ c (J) ≡ 4 ∆(2J − ∆). For comparison with the FDR in the time-domain, we quote the long-wavelength (k → 0) limit of the correlation and response functions at criticality: G K aa † (t, k) = G K a † a (−t, k) ∼ −i4dJ ∆k 2 e −Ak 2 |t| ,(67a)G K aa (t, k) = − G K a † a † (t, k) * ∼ 4d k 2 −i(J + ∆) ∆ + 4(2J − ∆) Γ c e −Ak 2 |t| ,(67b)G R aa † (t, k) = G R a † a (t, k) * ∼ Θ(t) 8(J − ∆) Γ c − 2i e −Ak 2 t ,(67c)G R aa (t, k) = G R a † a † (t, k) * ∼ Θ(t) −8J Γ c e −Ak 2 t ,(67d) where we have defined A = −JΓ c /4d(2J − ∆) and Γ c = Γ c (J). 
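As a small illustration of the substitution $J \to J_k$ (here on an assumed square lattice in $d=2$, with illustrative parameters), the uniform $k=0$ mode maximizes $J_k$ and hence reaches the phase boundary $\Gamma = \Gamma_c(J_k)$ first:

```python
import numpy as np

J, Delta, d = 1.0, 0.5, 2

def Jk(k):                   # lattice dispersion J_k = (J/d) sum_i cos(k_i)
    return (J / d) * np.sum(np.cos(k))

def Gc_of(Jval):             # boundary function Gamma_c(J) = 4 sqrt(Delta(2J - Delta))
    return 4 * np.sqrt(np.maximum(Delta * (2 * Jval - Delta), 0.0))

ks = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([1.0, 1.0])]
gcs = [Gc_of(Jk(k)) for k in ks]

# J_k is maximal at k = 0, so the k = 0 mode goes critical first
uniform_first = (Jk(ks[0]) == J) and (gcs[0] == max(gcs))
```

This is why the long-wavelength ($k\to 0$) expansion quoted above captures the critical behavior of the bosonic model.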
The expressions above are obtained by first setting $\Gamma = \Gamma_c$ and then taking the limit $k\to 0$ while keeping $k^2 t = {\rm const}$. These expressions are valid all along the phase boundary except at the weakly-dissipative critical point, since we have assumed $k^2 \ll 2J-\Delta$ in our derivation. Setting $\Delta = 2J$ and then sending $\Gamma\to 0$, we find that the cross-correlation $G^K_{\Phi\Pi}(t=0,k=0) \sim 1/\Gamma$ diverges even at the weakly-dissipative critical point, while this quantity remains zero in equilibrium as it is odd under the time-reversal transformation.

Conclusion and Outlook

In this work, we have considered Ising-like driven-dissipative systems where the Hamiltonian itself is time-reversal symmetric although dissipation breaks this symmetry. We have shown that, despite an emergent effective temperature, the FDR and TRS are macroscopically violated when one considers multiple operators that overlap with the order parameter and are even or odd under time-reversal transformation. Nevertheless, we have argued that a modified form of the fluctuation-dissipation relation (dubbed FDR*) governs the critical behavior. Similarly, a modified form of time-reversal symmetry (dubbed TRS*) arises where correlation and response functions find definite, but possibly opposite, parities under time-reversal transformation; in sharp contrast with TRS in equilibrium, one cannot assign a well-defined parity to a given operator, while correlation and response functions exhibit definite parities. Additionally, we have derived a modified form of the Onsager reciprocity relation in harmony with the TRS* while violating the TRS. These conclusions are based on the underlying symmetries (time-reversal symmetry of the Hamiltonian and the Ising symmetry of the full Liouvillian) and the existence of a single soft mode at the phase transition. They follow from a generic field-theoretical analysis that leads to a non-Hermitian kernel for the dynamics.
We have presented our results in the context of two relatively simple Ising-like driven-dissipative systems. Finally, we have shown that even in the limit of vanishing dissipation, TRS is not necessarily restored.

We distinguish our results from recent interesting works where quantum detailed balance, microreversibility and time-reversal symmetry [57][58][59], or extensions thereof [60], are an exact property of a special class of open quantum systems. By contrast, the modified time-reversal symmetry of two-time correlators introduced here arises near criticality, but it is expected to hold for a large class of driven-dissipative systems near their phase transitions, specifically mean-field or quadratic models (or models with irrelevant interactions in the sense of the renormalization group) exhibiting an Ising-type phase transition and described by a time-reversal-invariant Hamiltonian.

A natural extension of the models considered here is the full Dicke model with both bosonic and spin operators in a single- or multi-mode cavity [30,42,61]. The two components of the spin operators, as well as the two quadratures of the cavity mode(s), constitute a larger space of operators that overlap with the order parameter and are even/odd under time-reversal transformation, but similar results should be expected. Another interesting direction is to go beyond mean-field or quadratic models and consider nonlinear interactions and their effect on the modified fluctuation-dissipation relations and time-reversal symmetry. An important future direction is to investigate whether similar FDR* and TRS* emerge for phase transitions governed by different symmetries, where the order parameter takes a more complicated form. It is possible that a generalization of the results reported in this work would depend on the underlying symmetries, as well as the weak or strong nature of such symmetries [62].
Similarly, one may consider models where the time-reversal transformation takes a more complicated form than complex conjugation. More generally, it is desirable to identify emergent forms of time-reversal symmetry governing the macroscopic behavior of driven-dissipative systems despite this symmetry being broken microscopically.

are even in time (for a fixed set of operators). However, this is not compatible with Eq. (27), as it requires C_{2i} on the left-hand side of Eq. (27) when ε_i = −ε_j. Notice that the extended FDR is consistent with the causal FDR in Eq.

Figure 1: (a) Effective temperatures T_xx, T_xy, T_yx and T_yy as a function of Δ, at or away from the phase boundary; we choose the parameters J = 1, Γ = 4, with the point Δ = 1 representing the critical point at the tip of the phase boundary (see the red dot in panel (b)). The effective temperatures become equal up to a sign at the critical point. The same pattern emerges at any point along the phase boundary and away from Γ = 0. (b) The phase diagram of the DDIM. The shaded region is the ordered phase, where ⟨Ŝ^{x,y}⟩ ≠ 0.

Figure 2: Numerical plots of correlation and response functions at a representative critical point with J = 1, Δ = 1, Γ = 4 and system size N = 100. A modified fluctuation-dissipation relation emerges at long times. The effective temperatures take the same value up to a sign: T_xx = −T_xy = T_yx = −T_yy = J.

Figure 3: Cross-correlation and cross-response functions at criticality (J = 1, Δ = 1, Γ = 4, N = 100). A modified form of TRS emerges at criticality, where correlation (panel a) and response (panel b) functions exhibit opposite parities under time-reversal transformation.

Figure 4: Schematic representation of the massless and massive fields φ and ζ in terms of the x and y fields that represent Ŝ^x and Ŝ^y. (a) The gapped/gapless fields are shown at a generic critical point.
The classical and quantum fields are rotated with respect to the x-y axes, but in opposite directions, a fact that leads to the opposite signs of the effective temperatures. (b) At the weakly-dissipative critical point, Δ = 2J, Γ → 0, the gapless and gapped fields align with the x and y axes, respectively, similar to thermal equilibrium.

Unlike the standard FDR, the FDR* is sensitive to the operators being Hermitian or not; see Section 6.2. We are including a normalization factor 1/N in the definition of correlation and response functions for convenience. Since the modified FDR doesn't hold at short times, setting t = 0 might seem problematic; however, the error incurred in the process only amounts to a noncritical correction. While one might be tempted to conclude that M ∝ m → 0 at the critical point, the product m⟨φ²⟩ remains finite due to the diverging fluctuations, and thus L_xy assumes a nonzero value at the critical point.

FDR* for non-Hermitian operators

The Green's functions of Φ and Π of the short-range model considered here are identical to those of the DDIM once we substitute J → J_k. Therefore, the low-frequency effective temperatures of this model in the long-wavelength limit k → 0 are identical to those of the DDIM in Eq. (31). In other words, at criticality and at long wavelengths and frequencies this short-ranged model obeys the FDR*. The latter can be extended to the bosonic operators â_k and â†_k too. Taking the linear combination of the FDR* for the two quadratures, we find Eq. (68). These relations can be explicitly verified by plugging in Eq. (67) with the effective temperature identified as T_eff = J. Interestingly, the sets of operators on the two sides of these FDR-like equations are different: the first operator (appearing at the earlier time) transforms into its adjoint between the two sides of these equations.
The above equation suggests a more general form of the FDR*, Eq. (69), applicable also to non-Hermitian operators, in which the Ô_i's are not necessarily Hermitian. The transpose T arises due to the combined action of taking the adjoint as well as conjugation due to the time-reversal transformation. This equation reduces to the FDR* for Hermitian operators in Eq. (5), while reproducing Eq. (68) for non-Hermitian (but real) bosonic operators.

Weakly-dissipative limit

Finally, we investigate the bosonic Green's functions at the weakly-dissipative point; this parallels our discussion of the weakly-dissipative DDIM in Section 4.3. Again we must be careful about the order of limits. We shall first Fourier transform Eq. (66) to the time domain, send Δ → 2J, and then take the long-wavelength limit k → 0, in which case we have J_k ∼ J(1 − k²/2d) and Γ_c(J_k) ∼ i 4√2 J|k|/√d. Finally, we take the weakly-dissipative limit Γ → 0 and report only the critical contribution at long wavelengths, Eq. (70), for α, β ∈ {a, a†}. Note that the dynamical exponent z is now different, as the scaling variable is |k|t compared to k²t in Eq. (67); i.e., we find ballistic (z = 1) rather than diffusive (z = 2) dynamics. Fluctuations diverge in the same fashion, G^K_{αβ} ∼ 1/k², regardless of the dissipation, while the dynamical behavior undergoes a crossover; for a similar behavior of the DDIM, see Ref. [39]. As we kept k finite while taking Γ → 0, the system remains gapped. Therefore, the density matrix commutes with the Hamiltonian, in parallel with our discussion in Section 4.3. The TRS is then restored, and the correlation and response functions satisfy the equilibrium FDR, as one can directly see from Eq. (70).

References

[1] S. de Groot and P. Mazur, Non-equilibrium thermodynamics, Dover Publications (1984).
[2] R. Zwanzig, Nonequilibrium statistical mechanics, Oxford University Press (2001).
[3] G. S. Agarwal, Open quantum Markovian systems and the microreversibility, Z. Phys. A 258(5), 409 (1973), doi:10.1007/BF01391504.
[4] H. Carmichael and D. Walls, Detailed balance in open quantum markoffian systems, Z. Phys. B 23(3), 299 (1976), doi:10.1007/BF01318974.
[5] R. Alicki and K. Lendi, Quantum Dynamical Semigroups and Applications, Lecture Notes in Physics, Springer-Verlag, Berlin Heidelberg, ISBN 978-3-540-70860-5, doi:10.1007/3-540-70861-8 (2007).
[6] G. Biroli, Slow relaxations and nonequilibrium dynamics in classical and quantum systems, in T. Giamarchi, A. J. Millis, O. Parcollet, H. Saleur and L. F. Cugliandolo, eds., Strongly Interacting Quantum Systems out of Equilibrium: Lecture Notes of the Les Houches Summer School: Volume 99, August 2012, vol. 99, chap. 3, pp. 207-264, Oxford University Press, doi:10.1093/acprof:oso/9780198768166.001.0001 (2016).
[7] S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Büchler and P. Zoller, Quantum states and phases in driven open quantum systems with cold atoms, Nat. Phys. 4(11), 878 (2008), doi:10.1038/nphys1073.
[8] F. Verstraete, M. M. Wolf and J. Ignacio Cirac, Quantum computation and quantum-state engineering driven by dissipation, Nat. Phys. 5(9), 633 (2009), doi:10.1038/nphys1342.
[9] A. Denisov, H. M. Castro-Beltran and H. J. Carmichael, Time-asymmetric fluctuations of light and the breakdown of detailed balance, Phys. Rev. Lett. 88, 243601 (2002), doi:10.1103/PhysRevLett.88.243601.
[10] E. Altman, L. M. Sieberer, L. Chen, S. Diehl and J. Toner, Two-dimensional superfluidity of exciton polaritons requires strong anisotropy, Phys. Rev. X 5, 011017 (2015), doi:10.1103/PhysRevX.5.011017.
[11] J. Marino and S. Diehl, Driven markovian quantum criticality, Phys. Rev. Lett. 116, 070407 (2016), doi:10.1103/PhysRevLett.116.070407.
[12] H. F. H. Cheung, Y. S. Patil and M. Vengalattore, Emergent phases and critical behavior in a non-markovian open quantum system, Phys. Rev. A 97, 052116 (2018), doi:10.1103/PhysRevA.97.052116.
[13] E. G. Dalla Torre, E. Demler, T. Giamarchi and E. Altman, Quantum critical states and phase transitions in the presence of non-equilibrium noise, Nat. Phys. 6(10), 806 (2010), doi:10.1038/nphys1754.
[14] J. T. Young, A. V. Gorshkov, M. Foss-Feig and M. F. Maghrebi, Nonequilibrium Fixed Points of Coupled Ising Models, Phys. Rev. X 10(1), 011039 (2020), doi:10.1103/PhysRevX.10.011039.
[15] A. Mitra, S. Takei, Y. B. Kim and A. J. Millis, Nonequilibrium Quantum Criticality in Open Electronic Systems, Phys. Rev. Lett. 97(23), 236808 (2006), doi:10.1103/PhysRevLett.97.236808.
[16] M. Wouters and I. Carusotto, Absence of long-range coherence in the parametric emission of photonic wires, Phys. Rev. B 74(24), 245316 (2006), doi:10.1103/PhysRevB.74.245316.
[17] E. G. D. Torre, S. Diehl, M. D. Lukin, S. Sachdev and P. Strack, Keldysh approach for nonequilibrium phase transitions in quantum optics: Beyond the Dicke model in optical cavities, Phys. Rev. A 87(2), 023831 (2013), doi:10.1103/PhysRevA.87.023831.
[18] T. E. Lee, S. Gopalakrishnan and M. D. Lukin, Unconventional magnetism via optical pumping of interacting spin systems, Phys. Rev. Lett. 110, 257204 (2013), doi:10.1103/PhysRevLett.110.257204.
[19] L. M. Sieberer, M. Buchhold and S. Diehl, Keldysh field theory for driven open quantum systems, Rep. Prog. Phys. 79(9), 096001 (2016), doi:10.1088/0034-4885/79/9/096001.
[20] M. F. Maghrebi and A. V. Gorshkov, Nonequilibrium many-body steady states via Keldysh formalism, Phys. Rev. B 93(1), 014307 (2016), doi:10.1103/PhysRevB.93.014307.
[21] E. T. Owen, J. Jin, D. Rossini, R. Fazio and M. J. Hartmann, Quantum correlations and limit cycles in the driven-dissipative Heisenberg lattice, New J. Phys. 20(4), 045004 (2018), doi:10.1088/1367-2630/aab7d3.
[22] C.-K. Chan, T. E. Lee and S. Gopalakrishnan, Limit-cycle phase in driven-dissipative spin systems, Phys. Rev. A 91(5), 051601 (2015), doi:10.1103/PhysRevA.91.051601.
[23] R. M. Wilson, K. W. Mahmud, A. Hu, A. V. Gorshkov, M. Hafezi and M. Foss-Feig, Collective phases of strongly interacting cavity photons, Phys. Rev. A 94(3), 033801 (2016), doi:10.1103/PhysRevA.94.033801.
[24] A. Le Boité, G. Orso and C. Ciuti, Steady-State Phases and Tunneling-Induced Instabilities in the Driven Dissipative Bose-Hubbard Model, Phys. Rev. Lett. 110(23), 233601 (2013), doi:10.1103/PhysRevLett.110.233601.
[25] M. Foss-Feig, P. Niroula, J. T. Young, M. Hafezi, A. V. Gorshkov, R. M. Wilson and M. F. Maghrebi, Emergent equilibrium in many-body optical bistability, Phys. Rev. A 95(4), 043826 (2017), doi:10.1103/PhysRevA.95.043826.
[26] V. R. Overbeck, M. F. Maghrebi, A. V. Gorshkov and H. Weimer, Multicritical behavior in dissipative Ising models, Phys. Rev. A 95(4), 042133 (2017), doi:10.1103/PhysRevA.95.042133.
[27] F. Vicentini, F. Minganti, R. Rota, G. Orso and C. Ciuti, Critical slowing down in driven-dissipative Bose-Hubbard lattices, Phys. Rev. A 97(1), 013853 (2018), doi:10.1103/PhysRevA.97.013853.
[28] F. Brennecke, R. Mottl, K. Baumann, R. Landig, T. Donner and T. Esslinger, Real-time observation of fluctuations at the driven-dissipative dicke phase transition, Proceedings of the National Academy of Sciences 110(29), 11763 (2013), doi:10.1073/pnas.1306993110.
[29] K. Hepp and E. H. Lieb, Equilibrium statistical mechanics of matter interacting with the quantized radiation field, Phys. Rev. A 8, 2517 (1973), doi:10.1103/PhysRevA.8.2517.
[30] F. Dimer, B. Estienne, A. S. Parkins and H. J. Carmichael, Proposed realization of the Dicke-model quantum phase transition in an optical cavity QED system, Phys. Rev. A 75(1), 013804 (2007), doi:10.1103/PhysRevA.75.013804.
[31] D. A. Paz and M. F. Maghrebi, Driven-dissipative Ising Model: An exact field-theoretical analysis, arXiv:2101.05297 (2021).
[32] U. C. Täuber, Critical Dynamics: A Field Theory Approach to Equilibrium and Non-Equilibrium Scaling Behavior, Cambridge University Press, ISBN 978-1-139-86720-7, doi:10.1017/CBO9781139046213 (2014).
[33] P. M. Chaikin and T. C. Lubensky, Principles of Condensed Matter Physics, Cambridge University Press, ISBN 9780521794503, doi:10.1017/CBO9780511813467 (2000).
[34] L. Peliti, Statistical mechanics in a nutshell, vol. 10, Princeton University Press, doi:10.1515/9781400839360 (2011).
[35] H. K. Janssen, On the renormalized field theory of nonlinear critical relaxation, in From Phase Transitions to Chaos, pp. 68-91, World Scientific, ISBN 978-981-02-0938-4, doi:10.1142/9789814355872_0007 (1992).
[36] K. E. Bassler and B. Schmittmann, Critical dynamics of nonconserved ising-like systems, Phys. Rev. Lett. 73, 3343 (1994), doi:10.1103/PhysRevLett.73.3343.
[37] J. Gelhausen, M. Buchhold and P. Strack, Many-body quantum optics with decaying atomic spin states: (γ, κ) Dicke model, Phys. Rev. A 95(6), 063824 (2017), doi:10.1103/PhysRevA.95.063824.
[38] D. Kilda and J. Keeling, Fluorescence Spectrum and Thermalization in a Driven Coupled Cavity Array, Phys. Rev. Lett. 122(4), 043602 (2019), doi:10.1103/PhysRevLett.122.043602.
[39] D. A. Paz and M. F. Maghrebi, Driven-dissipative Ising model: Dynamical crossover at weak dissipation, arXiv:1906.08278v2 (2021).
[40] C. Gardiner and P. Zoller, Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics, Springer, 3rd edn., ISBN 978-3-540-22301-6 (2004).
[41] L. M. Sieberer, M. Buchhold and S. Diehl, Keldysh field theory for driven open quantum systems, Rep. Prog. Phys. 79(9), 096001 (2016), doi:10.1088/0034-4885/79/9/096001.
[42] K. Baumann, C. Guerlin, F. Brennecke and T. Esslinger, Dicke quantum phase transition with a superfluid gas in an optical cavity, Nature 464(7293), 1301 (2010), doi:10.1038/nature09009.
[43] K. Baumann, R. Mottl, F. Brennecke and T. Esslinger, Exploring Symmetry Breaking at the Dicke Quantum Phase Transition, Phys. Rev. Lett. 107(14), 140402 (2011), doi:10.1103/PhysRevLett.107.140402.
[44] J. Klinder, H. Keßler, M. Wolke, L. Mathey and A. Hemmerich, Dynamical phase transition in the open Dicke model, PNAS 112(11), 3290 (2015), doi:10.1073/pnas.1417132112.
[45] Z. Zhiqiang, C. H. Lee, R. Kumar, K. J. Arnold, S. J. Masson, A. S. Parkins and M. D. Barrett, Nonequilibrium phase transition in a spin-1 Dicke model, Optica 4(4), 424 (2017), doi:10.1364/OPTICA.4.000424.
[46] F. Damanet, A. J. Daley and J. Keeling, Atom-only descriptions of the driven-dissipative Dicke model, Phys. Rev. A 99(3), 033845 (2019), doi:10.1103/PhysRevA.99.033845.
[47] J. A. Muniz, D. Barberena, R. J. Lewis-Swan, D. J. Young, J. R. K. Cline, A. M. Rey and J. K. Thompson, Exploring dynamical phase transitions with cold atoms in an optical cavity, Nature 580(7805), 602 (2020), doi:10.1038/s41586-020-2224-x.
[48] A. Safavi-Naini, R. Lewis-Swan, J. Bohnet, M. Gärttner, K. Gilmore, J. Jordan, J. Cohn, J. Freericks, A. Rey and J. Bollinger, Verification of a Many-Ion Simulator of the Dicke Model Through Slow Quenches across a Phase Transition, Phys. Rev. Lett. 121(4), 040503 (2018), doi:10.1103/PhysRevLett.121.040503.
[49] L. M. Sieberer, A. Chiocchetta, A. Gambassi, U. C. Täuber and S. Diehl, Thermodynamic equilibrium as a symmetry of the Schwinger-Keldysh action, Phys. Rev. B 92(13), 134307 (2015), doi:10.1103/PhysRevB.92.134307.
[50] C. Aron, G. Biroli and L. F. Cugliandolo, (Non)equilibrium dynamics: a (broken) symmetry of the Keldysh generating functional, SciPost Phys. 4, 008 (2018), doi:10.21468/SciPostPhys.4.1.008.
[51] H. Carmichael, Breakdown of Photon Blockade: A Dissipative Quantum Phase Transition in Zero Dimensions, Phys. Rev. X 5(3), 031028 (2015), doi:10.1103/PhysRevX.5.031028.
[52] A. Kamenev, Field Theory of Non-Equilibrium Systems, Cambridge University Press, doi:10.1017/CBO9781139003667 (2011).
[53] M. Foss-Feig, J. T. Young, V. V. Albert, A. V. Gorshkov and M. F. Maghrebi, Solvable Family of Driven-Dissipative Many-Body Systems, Phys. Rev. Lett. 119(19), 190402 (2017), doi:10.1103/PhysRevLett.119.190402.
[54] T. Shirai and T. Mori, Thermalization in open many-body systems based on eigenstate thermalization hypothesis, Phys. Rev. E 101(4), 042116 (2020), doi:10.1103/PhysRevE.101.042116.
[55] F. Lange, Z. Lenarčič and A. Rosch, Pumping approximately integrable systems, Nat. Commun. 8, 15767 (2017), doi:10.1038/ncomms15767.
[56] A. Messiah, Quantum mechanics: volume II, North-Holland Publishing Company, Amsterdam (1962).
[57] M. McGinley and N. R. Cooper, Fragility of time-reversal symmetry protected topological phases, Nat. Phys. 16(12), 1181 (2020), doi:10.1038/s41567-020-0956-z.
[58] A. Altland, M. Fleischhauer and S. Diehl, Symmetry classes of open fermionic quantum matter, Phys. Rev. X 11, 021037 (2021), doi:10.1103/PhysRevX.11.021037.
[59] S. Lieu, M. McGinley, O. Shtanko, N. R. Cooper and A. V. Gorshkov, Kramers' degeneracy for open systems in thermal equilibrium, arXiv:2105.02888 (2021).
[60] D. Roberts, A. Lingenfelter and A. Clerk, Hidden time-reversal symmetry, quantum detailed balance and exact solutions of driven-dissipative quantum systems, arXiv:2011.02148 (2020).
[61] A. J. Kollár, A. T. Papageorge, V. D. Vaidya, Y. Guo, J. Keeling and B. L. Lev, Supermode-density-wave-polariton condensation with a bose-einstein condensate in a multimode cavity, Nat. Commun. 8(1), 1 (2017), doi:10.1038/ncomms14386.
[62] B. Buča and T. Prosen, A note on symmetry reductions of the Lindblad equation: transport in constrained open spin chains, New J. Phys. 14(7), 073007 (2012), doi:10.1088/1367-2630/14/7/073007.
Rotational properties of the O-type star population in the Tarantula region

O. H. Ramírez-Agudelo, S. Simón-Díaz, H. Sana, A. de Koter, C. Sabín-Sanjulían, S. E. de Mink, P. L. Dufton, G. Gräfener, C. J. Evans, A. Herrero, N. Langer, D. J. Lennon, J. Maíz Apellániz, N. Markova, F. Najarro, J. Puls, W. D. Taylor, J. S. Vink

Massive Stars: From α to Ω, Rhodes, Greece, 10-14 June 2013

(Affiliations can be found after the references.)

The 30 Doradus (30 Dor) region in the Large Magellanic Cloud (also known as the Tarantula Nebula) is the nearest massive starburst region, containing the richest sample of massive stars in the Local Group. It is the best possible laboratory to investigate aspects of the formation and evolution of massive stars. Here, we focus on rotation, which is a key parameter in the evolution of these objects. We establish the projected rotational velocity, v_e sin i, distribution of an unprecedented sample of 216 radial-velocity-constant (ΔRV ≤ 20 km s⁻¹) O-type stars in 30 Dor observed in the framework of the VLT-FLAMES Tarantula Survey (VFTS). The distribution of v_e sin i shows a two-component structure: a peak around 80 km s⁻¹ and a high-velocity tail extending up to ∼600 km s⁻¹. Around 75% of the sample has 0 ≤ v_e sin i ≤ 200 km s⁻¹, with the other 25% distributed in the high-velocity tail. The presence of the low-velocity peak is consistent with that found in other studies of late-O and early-B stars. The high-velocity tail is compatible with expectations from binary-interaction synthesis models and may be predominantly populated by post-binary-interaction, spun-up objects and mergers.
1 Introduction

Rotation is a key parameter in the evolution of massive stars, affecting their evolution, chemical yields, budget of ionizing photons, and their final fate as supernovae and long gamma-ray bursts. The 30 Dor starburst region in the Large Magellanic Cloud contains the richest sample of massive stars in the Local Group and is the best possible laboratory to investigate aspects of the formation and evolution of massive stars, and to establish statistically meaningful distributions of their physical properties. In this paper, we report on the measured projected rotational velocity, v_e sin i, for more than 200 O-type stars observed as part of the VLT-FLAMES Tarantula Survey (VFTS; Evans et al., 2011).

2 Sample and Method

VFTS is a multi-epoch intermediate-resolution spectroscopic campaign targeting over 350 O- and over 400 early-B-type stars across the 30 Dor region. The sample used in this paper is composed of 172 O-type stars with no significant radial velocity variations and 44 which display only small shifts (ΔRV ≤ 20 km s−1; Sana et al., 2013). We use Fourier transform (Gray, 1976; Simón-Díaz & Herrero, 2007) and line profile fitting (Simón-Díaz et al., 2010) methods to measure projected rotational velocities. Discussion of the methods and the achieved precision of the full list of measurements can be found in Ramírez-Agudelo et al. (2013).

3 Results and discussion

The distribution of projected rotational velocities of our sample shows a two-component structure: a low-velocity peak at around 80 km s−1 and a high-velocity tail starting at about 200 km s−1 and extending up to ∼600 km s−1 (see Fig. 1).
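The Fourier-transform method cited above (Gray, 1976; Simón-Díaz & Herrero, 2007) rests on the fact that the Fourier amplitude of a rotationally broadened line profile has its first zero at a frequency σ₁ satisfying v_e sin i ≈ 0.660 c / (σ₁ λ₀) for a linear limb-darkening coefficient of 0.6. The sketch below is a minimal synthetic illustration of this idea, not the VFTS pipeline; the kernel form, the 0.660 constant, and all function names are standard textbook assumptions rather than details from this paper.

```python
import numpy as np

C_KMS = 299792.458        # speed of light, km/s
EPS = 0.6                 # assumed linear limb-darkening coefficient
K1 = 0.660                # first-zero constant of the rotation profile for EPS = 0.6

def rotation_kernel(dlam, dlam_max, eps=EPS):
    """Rotational broadening profile G(dlam) (Gray 1976); zero for |dlam| > dlam_max."""
    x = dlam / dlam_max
    g = np.zeros_like(x)
    m = np.abs(x) < 1.0
    g[m] = 2.0 * (1.0 - eps) * np.sqrt(1.0 - x[m] ** 2) + 0.5 * np.pi * eps * (1.0 - x[m] ** 2)
    return g / g.sum()

def vsini_fourier(profile, step, lam0):
    """v sin i from the first minimum of the Fourier amplitude of a line profile."""
    amp = np.abs(np.fft.rfft(profile))
    freq = np.fft.rfftfreq(profile.size, d=step)   # cycles per Angstrom
    i = 1
    while not (amp[i - 1] > amp[i] <= amp[i + 1]):  # first local minimum
        i += 1
    return K1 * C_KMS / (freq[i] * lam0)

# synthetic test case: a 200 km/s rotator observed at 5000 Angstrom
lam0, vsini_true = 5000.0, 200.0
dlam_max = lam0 * vsini_true / C_KMS               # rotational half-width in Angstrom
grid = np.arange(-200.0, 200.0, 0.01)
vsini_est = vsini_fourier(rotation_kernel(grid, dlam_max), 0.01, lam0)
```

With a sufficiently wide wavelength window the frequency grid is fine enough that the recovered value agrees with the input to within a few percent.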
The main conclusions regarding these two components are:

Slow rotators: 75% of our sample stars have v_e sin i < 200 km s−1. These stars thus rotate at less than 20% of their break-up velocity. For the bulk of the sample, mass loss in a stellar wind and/or envelope expansion is not efficient enough to significantly spin down these stars. If massive-star formation results in fast-rotating stars at birth, as one may expect from angular momentum conservation considerations, most massive stars have to spin down quickly by some other mechanism to reproduce the observed distribution.

Fast rotators: The presence of a well-populated high-velocity tail (25% with v_e sin i > 200 km s−1) is compatible with predictions from binary evolution. de Mink et al. (2013, see the insert panel in Fig. 1) show that such a tail in the v_e sin i distribution arises naturally from mass transfer and mergers. A sizable fraction of the post-interaction systems may have only small radial velocity shifts. Combined with the merged systems, the bulk of this population may thus fulfill our selection criterion ΔRV ≤ 20 km s−1. That rapid rotators result from spin-up through mass transfer and mergers has important implications for the evolutionary origin of the progenitors of long gamma-ray bursts. These progenitor systems may be dominated by, or be exclusively due to, post-interacting binaries or mergers.

4 Conclusion

We have estimated projected rotational velocities for the presumably single O-type stars in the VFTS sample (216 stars). The most distinctive feature of this distribution is a two-component structure with a peak at low velocities and a pronounced tail at high velocities, with over 50 stars rotating faster than 200 km s−1. A further discussion of our results, including an analysis of the intrinsic distribution corrected for projection effects and the v_e sin i distribution as a function of spatial distribution, luminosity class and spectral type, is presented in Ramírez-Agudelo et al.
(2013).

Figure 1: Distribution of the projected rotational velocities of the 216 O-type stars in our 30 Dor sample. The insert panel shows the effects of binary interaction on the v_e sin i distribution (de Mink et al., 2013).

References

de Mink, S. E., Langer, N., Izzard, R. G., Sana, H., & de Koter, A. 2013, ApJ, 764, 166
Evans, C. J., et al. 2011, A&A, 530, A108
Gray, D. 1976, The Observation and Analysis of Stellar Photospheres (Third Edition) (Cambridge University Press)
Ramírez-Agudelo, O. H., et al. 2013, ArXiv e-prints
Sana, H., et al. 2013, A&A, 550, A107
Simón-Díaz, S., & Herrero, A. 2007, A&A, 468, 1063
Simón-Díaz, S., Herrero, A., Uytterhoeven, K., Castro, N., Aerts, C., & Puls, J. 2010, ApJ, 720, L174
Background Subtraction using Compressed Low-resolution Images
Image processing and recognition are an important part of modern society, with applications in fields such as advanced artificial intelligence, smart assistants, and security surveillance. The essential first step in almost all visual tasks is background subtraction with a static camera. Ensuring that this critical step is performed in the most efficient manner would therefore improve all aspects related to object recognition and tracking, behavior comprehension, etc. Although the background subtraction method has been applied for many years, its application suffers from real-time requirements. In this letter, we present a novel approach to implementing background subtraction. The proposed method uses compressed, low-resolution grayscale images for the background subtraction. These low-resolution grayscale images were found to preserve the salient information very well. To verify the feasibility of our methodology, two prevalent methods, ViBe and GMM, are used in the experiment. The results of the proposed methodology confirm the effectiveness of our approach.
arXiv: 1810.10155
PDF: https://arxiv.org/pdf/1810.10155v1.pdf
Index Terms: change/motion detection, background subtraction, visual redundancy, low-resolution

I. INTRODUCTION

In many image processing and visual application scenarios, a crucial preprocessing step is to segment moving foreground objects from an almost static background [4]. Background subtraction (BS) is first applied to extract moving objects from a video stream, without any a priori knowledge about these objects [14], [16]. Although the BS technique has been used for many years, temporal adaptation is achieved at the price of slow processing or large memory requirements, limiting its utility in real-time video applications [5], [8], [12]. In public areas, cameras are everywhere.
Most of the videos are taken outdoors; they capture a complex mixture of the motion and clutter of the background. More and more high-resolution cameras are used in surveillance scenes, and the video frames are correspondingly high-resolution, so extracting foreground objects from surveillance videos suffers from storage-capacity and processing-time constraints. In this letter, we propose a simple, training-less, new approach to implementing the BS method. Low-resolution video frames are used as the input data. Among the various BS methods [3], [21], the two mainstream pixel-wise techniques, ViBe [1], [2], [17], [18] and the Gaussian Mixture Model (GMM) [7], [15], are chosen to test our proposed approach. To some extent, image compression and change/motion detection are two different research areas, but considering real-time applications, storage capacity, etc., using compressed low-resolution images will save a lot of processing time and storage. Building upon this aspect, our framework is: 1) compress every frame of a video sequence with 100 different ratios; 2) use these compressed frames as the input data for the ViBe and GMM methods; 3) record all the processing times; 4) resize the result images to their original size for comparison with the groundtruth ones. What we do in this letter is determine whether our approach is feasible and try to draw out reasonable and representative results, which serve as quantitative references for the compromise between running speed and accuracy. Moreover, the selection will not degrade subsequent visual applications.

II. RELATED WORK

A. Image redundancy and compression

Neighbouring pixels in most images are correlated and therefore contain redundant information [11], [20].
In a sequence of frames, a frame commonly has three types of redundancy: coding redundancy (some pixel values are more common than others); inter-pixel redundancy (neighbouring pixels have similar values); and psycho-visual redundancy (some color differences are imperceptible). When considering memory capacity, transport bandwidth, and processing speed, compressed images have clear advantages.

B. Visual saliency scheme

The human vision system actively seeks interesting regions in images to reduce the search effort in tasks such as object detection and recognition [15]. Visual scenes often contain more information than can be acquired concurrently due to the visual system's limited processing capacity [23]. Judd et al. [6] found that fixations on low-resolution color (LC) images (76*64 pixels) can predict fixations on high-resolution (HC) images (610*512 pixels) quite well. Shivanthan's study [22] found that a low-resolution grayscale (LG) model required significantly less training time and performed detection much faster than the same network trained and evaluated on HC images. According to Shivanthan's research, when images are compressed to lower resolution, the saliency information is preserved very well, and LG images can easily be used in many vision tasks.

In computer vision, background subtraction is the fundamental low-level task of detecting the objects of interest, or foreground, in videos or images [3], [10]. The approach detects the foreground target from surveillance videos, where the target is extracted as the difference between video images. Modelling the background of the video sequence is the first step. Creating the background model is a sophisticated problem: changing sunlight can render the original background model unsuitable, a trembling camera affects the image subtraction results, and the disappearance of some background objects can also invalidate the background model.
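The low-resolution grayscale (LG) conversion discussed above can be sketched with plain numpy. This is a minimal illustration, not the authors' code: it assumes ITU-R BT.601 luma weights for the grayscale step and nearest-neighbour downsampling to the 76*64 size quoted from Judd et al.; the function name is hypothetical.

```python
import numpy as np

def to_low_res_grayscale(rgb, out_h=64, out_w=76):
    """HC colour image -> LG image: BT.601 grayscale, then nearest-neighbour downsampling."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    h, w = gray.shape
    rows = np.linspace(0, h - 1, out_h).astype(int)   # sampled source rows
    cols = np.linspace(0, w - 1, out_w).astype(int)   # sampled source columns
    return gray[np.ix_(rows, cols)]

# a 610*512 HC image (as in Judd et al.) reduced to 76*64
hc = np.random.default_rng(0).integers(0, 256, size=(512, 610, 3)).astype(float)
lg = to_low_res_grayscale(hc)
```
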
Many researchers are devoted to improving its accuracy and real-time applicability, but most methods still achieve this at the price of time complexity or large memory requirements, limiting their utility in real-time video applications [8], [9]. Background subtraction can be categorized into parametric and non-parametric methods. One of the most prominent pixel-based parametric methods is the Gaussian model: Stauffer et al. [15] proposed the GMM, modelling every pixel with a mixture of K Gaussian functions. Barnich et al. [1], [2], [17] proposed a pixel-based non-parametric approach named ViBe to detect moving targets using a novel random selection scheme. These two prominent methods are selected to test our approach here.

III. THE PROPOSED METHOD

In order to verify the feasibility of our method, the two prominent BS methods GMM and ViBe are applied. The proposed method exploits the effectiveness of low-resolution grayscale images. Given an input video sequence, every frame is compressed at ratios from 0% to 99%. The compression ratio used here means compressing to that proportion of the original rows or columns; we compress the rows and columns by the same percentage. For example, for an original image resolution of 320*240 pixels, a compression ratio of 20% in this letter means the compressed image will be 64*48 pixels. Therefore, 0% denotes no compression, while 20% corresponds to a 64*48 image from a 320*240 original. The GMM and ViBe methods are run separately on every set of compressed frames. The results for the compressed frames are then resized to the original resolution in order to compare them with the groundtruth ones. During processing, the CPU time used by the two methods is recorded, along with the evaluation metrics. These data are used for later result analysis and selection.

IV. EXPERIMENTAL RESULTS

Two mainstream algorithms (ViBe and GMM) are tested with the proposed method.
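The compress, subtract, and resize-back pipeline described above can be sketched with plain numpy. This is a toy illustration, not the paper's implementation: simple frame differencing stands in for ViBe/GMM, the scale factor 0.2 follows the paper's 320*240 -> 64*48 example, and all function names and thresholds are illustrative assumptions.

```python
import numpy as np

def downscale(frame, s):
    """Nearest-neighbour downscale to fraction s of the original rows/cols
    (s = 0.2 maps a 240x320 frame to 48x64, as in the paper's example)."""
    h, w = frame.shape
    rows = np.linspace(0, h - 1, int(h * s)).astype(int)
    cols = np.linspace(0, w - 1, int(w * s)).astype(int)
    return frame[np.ix_(rows, cols)]

def upscale_mask(small, shape):
    """Nearest-neighbour resize of a binary mask back to the original shape,
    so it can be compared with the full-resolution groundtruth."""
    h, w = shape
    rows = np.arange(h) * small.shape[0] // h
    cols = np.arange(w) * small.shape[1] // w
    return small[np.ix_(rows, cols)]

def simple_bs(frame, background, thresh=30):
    """Toy stand-in for a BS method: threshold on |frame - background|."""
    return (np.abs(frame.astype(int) - background.astype(int)) > thresh).astype(np.uint8)

bg = np.zeros((240, 320), np.uint8)
frame = bg.copy()
frame[100:140, 150:200] = 255                       # a bright moving object
small_mask = simple_bs(downscale(frame, 0.2), downscale(bg, 0.2))
mask = upscale_mask(small_mask, frame.shape)        # compare this with full-res groundtruth
```

The background subtraction itself runs on the 48x64 frames, which is where the CPU-time saving comes from; only the final binary mask is brought back to full resolution.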
To evaluate the method proposed in this letter, three different video sequences are used, which have been extensively tested in video analytics research. One of the datasets is taken from the Carnegie Mellon Test Images Sequences [13]; the other two, highway and turnpike, are taken from changedetection.net [19].

A. Datasets

For performance evaluation we use 3 different scenarios of a typical surveillance setting. The following enumeration introduces the datasets:

1) CHANGEDETECTION dataset 2014 [19]: Two datasets with the corresponding groundtruth masks from the change detection dataset 2014 are used in our experiments.
highway: a basic task in change and motion detection with some isolated shadows. It is a highway surveillance scenario combining a multitude of challenges for a general performance overview.
turnpike: a low frame-rate sequence, captured at 0.85 fps. Thus a large amount of information about the background change is missed.

2) CMU dataset [13]: Carnegie Mellon Test Images Sequences, available at http://www.cs.cmu.edu/~yaser/new_backgroundsubtraction.htm. It contains 500 raw TIF images and corresponding manually segmented binary masks. The sequence was captured by a camera mounted on a tall tripod; the wind caused the tripod to sway back and forth, causing nominal motion in the scene. Furthermore, some illumination changes occur. The groundtruth is provided for all frames, allowing a reliable evaluation.

All the experiments run on an Intel Core i5-6500 3.2 GHz processor with 8 GB DDR3 RAM and Windows 10 OS. The proposed method was implemented in C++. For the GMM method, we used the implementation available in OpenCV (www.opencv.org). We used the ViBe source code from the author's implementation available at www.telecom.ulg.ac.be/research/vibe.

B. Performance measures

The evaluation of the proposed method is an important part of this letter. For our experiments, a single fixed set of parameters is used.
Every frame in the three dataset sequences is compressed at 100 different ratios. In order to test whether our method is feasible and to find the exact compromise between processing speed and performance, we used 100 different compression ratios in the experiment. All the frames with the different compression ratios are then processed separately. The binary result masks are compared with the groundtruth masks. In our performance evaluation, several criteria have been considered. We use the common terminology of True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). In the following, results are evaluated in terms of 3 metrics, following ChangeDetection.NET [19]: the Precision, the Recall and the F-Measure. The Precision is expressed as

Precision = TP / (TP + FP)    (1)

the Recall is

Recall = TP / (TP + FN)    (2)

and the F-Measure is

F-Measure = 2 * Precision * Recall / (Precision + Recall)    (3)

1) Parameters: All the parameters tuned in this letter were fixed for all three datasets. Most of them were the same as in the original methods.

C. Experimental results

In this letter, we propose a new approach to implementing the ViBe and GMM methods. From a practical point of view, it is very necessary to compress the images when large quantities of video frames need to be processed; memory occupancy, computing capacity, and storage space all profit from this approach. Our task mainly focuses on the best selection when the compression ratio and the accuracy are considered simultaneously. Therefore, we use 100 different compression ratios to quantify the selection for later applications. Figure 1 shows the three datasets (CMU, highway, turnpike), with the original frames, groundtruths, and part of the results at different compression ratios.
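Equations (1)-(3) can be computed directly from a pair of binary masks. A minimal numpy sketch (the function name and the toy masks are illustrative, not from the paper):

```python
import numpy as np

def bs_metrics(result, gt):
    """Precision, Recall and F-Measure (Eqs. 1-3) of a binary foreground mask vs. groundtruth."""
    r, g = result.astype(bool), gt.astype(bool)
    tp = np.sum(r & g)        # foreground pixels correctly detected
    fp = np.sum(r & ~g)       # background pixels wrongly flagged as foreground
    fn = np.sum(~r & g)       # foreground pixels that were missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f_measure

gt = np.array([[1, 1, 0], [0, 1, 0]])      # toy groundtruth mask
mask = np.array([[1, 0, 0], [0, 1, 1]])    # toy BS result: 2 TP, 1 FP, 1 FN
p, r, f = bs_metrics(mask, gt)
```
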
In Figures 2(a)-2(f), values of the Relative Precision, Relative F-Measure, Relative Recall, and Relative CPU time under the different ratios are given. In these figures, we can clearly see that the Precision shows almost no decrease, and the F-Measure and Recall decrease slowly, while the CPU time drops quickly as the compression ratio increases. When the compression ratio reaches 60%, the three metrics start to decrease dramatically, while the CPU time drops sharply. Accordingly, when an image is compressed to 60% of its original frame size, both the useful and the useless information inside the frame lose a large part, so the dramatic decrease of the three metrics beyond a certain compression ratio is reasonable. In addition, the CPU time decreasing sharply with the compression ratio is mainly because the compressed frame is much smaller than the original one, so the reduced processing time is equally reasonable. From Figures 2(a)-2(f) we can conclude that, when using the compressed frames, the useful information is preserved very well; in particular, for the BS method the compression ratio can be chosen from 0% to 60%, where the accuracy of the results is reduced only slightly while the CPU time is greatly reduced. Figures 2(a)-2(f) also show that some of the CPU times are less steady, which seems to be related to the mechanism of resource allocation within the operating system.

V. CONCLUSION

In this letter, we propose a novel approach of using low-resolution grayscale images to implement the BS method. The proposed method is much faster than the two original methods, and the accuracy decreases only slightly when using the low-resolution images. Based on the experiments, we confirm that the proposed method can be effectively used in motion detection and other visual applications, and the results of our approach can also be effectively used in later vision tasks.
This approach can be a fundamental basis for real-time visual applications research.

Figure 1: Background subtraction results on the 3 datasets; the first row is CMU, the second highway, and the last turnpike. (a) the original frames, (b) groundtruth, (c)-(f) ViBe method and (g)-(i) GMM method results with compression ratios of 10%, 20%, 40%, and 60% respectively.

Figure 2: Image compression ratio versus relative metrics; the first row is CMU, the second highway, and the last turnpike, with (a), (c), (e) showing the ViBe method results and (b), (d), (f) the GMM method results.

Acknowledgments: This work is partially supported by the China Scholarship Council (CSC No. 201709360008), and partially supported by the Fujian Natural Science Fund Project (No. 2018J01637). Andy Song and Shivanthan A. C. Yohanandan are with the Royal Melbourne Institute of Technology University, Melbourne, Victoria, Australia (e-mail: [email protected], [email protected]). Min Chen and Jing Zhang are with the Fujian University of Technology, Fuzhou, P. R. China (e-mail: [email protected], [email protected]).

REFERENCES

[1] Olivier Barnich and Marc Van Droogenbroeck. ViBe: a powerful random technique to estimate the background in video sequences. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 945-948. IEEE, 2009.
[2] Olivier Barnich and Marc Van Droogenbroeck. ViBe: A universal background subtraction algorithm for video sequences. IEEE Transactions on Image Processing, 20(6):1709-1724, 2011.
[3] Thierry Bouwmans. Traditional and recent approaches in background modeling for foreground detection: An overview. Computer Science Review, 11:31-66, 2014.
[4] Martin Hofmann, Philipp Tiefenbacher, and Gerhard Rigoll. Background segmentation with feedback: The pixel-based adaptive segmenter. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pages 38-43. IEEE, 2012.
[5] Thanarat Horprasert, David Harwood, and Larry S. Davis. A robust background subtraction and shadow detection. In Proc. ACCV, pages 983-988.
[6] Tilke Judd, Fredo Durand, and Antonio Torralba. Fixations on low-resolution images. Journal of Vision, 11(4):14, 2011.
[7] Wonjun Kim and Youngsung Kim. Background subtraction using illumination-invariant structural complexity. IEEE Signal Processing Letters, 23(5):634-638, 2016.
[8] Dar-Shyang Lee. Effective gaussian mixture learning for video background subtraction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5):827-832, 2005.
[9] Lucia Maddalena and Alfredo Petrosino. Background subtraction for moving object detection in RGBD data: A survey. Journal of Imaging, 4(5):71, 2018.
[10] Diego Ortego, Juan C. SanMiguel, and José M. Martínez. Hierarchical improvement of foreground segmentation masks in background subtraction. IEEE Transactions on Circuits and Systems for Video Technology, 2018.
[11] Jagadish H. Pujar and Lohit M. Kadlaskar. A new lossless method of image compression and decompression using huffman coding techniques. Journal of Theoretical & Applied Information Technology, 15, 2010.
[12] Sen-Ching S. Cheung and Chandrika Kamath. Robust techniques for background subtraction in urban traffic video. In Visual Communications and Image Processing 2004, volume 5308, pages 881-893. International Society for Optics and Photonics.
[13] Yaser Sheikh and Mubarak Shah. Bayesian modeling of dynamic scenes for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(11):1778-1792, 2005.
[14] Andrews Sobral and Antoine Vacavant. A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos. Computer Vision and Image Understanding, 122:4-21, 2014.
[15] Chris Stauffer and W. Eric L. Grimson. Learning patterns of activity using real-time tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):747-757, 2000.
[16] Kentaro Toyama, John Krumm, Barry Brumitt, and Brian Meyers. Wallflower: Principles and practice of background maintenance. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 1, pages 255-261. IEEE, 1999.
[17] Marc Van Droogenbroeck and Olivier Barnich. ViBe: A disruptive method for background subtraction. In Background Modeling and Foreground Detection for Video Surveillance, pages 7.1-7.23, 2014.
[18] Marc Van Droogenbroeck and Olivier Paquot. Background subtraction: Experiments and improvements for ViBe. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pages 32-37. IEEE.
[19] Yi Wang, Pierre-Marc Jodoin, Fatih Porikli, Janusz Konrad, Yannick Benezeth, and Prakash Ishwar. CDnet 2014: An expanded change detection benchmark dataset. pages 393-400, 2014.
[20] Wei-Yi Wei. An introduction to image compression. National Taiwan University, Taipei, Taiwan, ROC, 2008.
[21] Yong Xu, Jixiang Dong, Bob Zhang, and Daoyun Xu. Background modeling methods in video analysis: A review and comparative evaluation. CAAI Transactions on Intelligence Technology, 1(1):43-60, 2016.
[22] Shivanthan A. C. Yohanandan, Adrian G. Dyer, Dacheng Tao, and Andy Song. Saliency preservation in low-resolution grayscale images. arXiv preprint arXiv:1712.02048, 2017.
[23] Yun Zhai and Mubarak Shah. Visual attention detection in video sequences using spatiotemporal cues. In Proceedings of the 14th ACM International Conference on Multimedia, pages 815-824. ACM.
Evidence for a new magnetic field scale in CeCoIn5

I. Sheikin (Grenoble High Magnetic Field Laboratory (CNRS), BP 166, 38042 Grenoble, France), H. Jin, R. Bel, K. Behnia (Laboratoire de Physique Quantique (CNRS), ESPCI, 10 Rue de Vauquelin, 75231 Paris, France), C. Proust (Laboratoire National des Champs Magnétiques Pulsés (CNRS), BP 4245, 31432 Toulouse, France), J. Flouquet, D. Aoki (DRFMC/SPSMS, Commissariat à l'Energie Atomique, F-38042 Grenoble, France), Y. Matsuda (Department of Physics, University of Kyoto, 608-8502 Kyoto, Japan), Y. Ōnuki (Graduate School of Science, Osaka University, 560-0043 Toyonaka, Osaka, Japan)
The Nernst coefficient of CeCoIn5 displays a distinct anomaly at H K ∼ 23 T. This feature is reminiscent of what is observed at 7.8 T in CeRu2Si2, a well-established case of metamagnetic transition. New frequencies are observed in de Haas-van Alphen oscillations when the field exceeds 23 T, which may indicate a modification of the Fermi surface at this field.
DOI: 10.1103/PhysRevLett.96.077207
arXiv: cond-mat/0505765
PDF: https://arxiv.org/pdf/cond-mat/0505765v1.pdf
May 2005 (Dated: October 23, 2018)

PACS numbers: 75.30.Kz, 73.43.Nq, 71.27.+a

Heavy-Fermion (HF) compounds [1] display a dazzling variety of physical phenomena which still lack a satisfactory general picture. The "non-Fermi-liquid" behavior emerging in the vicinity of a magnetic quantum critical point (QCP), associated with a continuous (i.e. second-order) phase transition at zero temperature, has recently attracted much attention [2]. The case of the HF superconductor CeCoIn5 [3] is intriguing. Magnetic field alters the normal-state properties of this system in a subtle way. In zero field, the system displays neither a T^2 resistivity nor a T-linear specific heat (standard features of a Landau Fermi liquid) down to the superconducting transition.
When superconductivity is destroyed by the application of pressure [4] or magnetic field [5,6], the Fermi-liquid state is restored. In the latter case, the field-tuned QCP identified in this way is pinned to the upper critical field, H_c2, of the superconducting transition [7,8]. This is unexpected, not only because quantum criticality is often associated with the destruction of a magnetic order, but also because the superconducting transition becomes first order at very low temperatures [9]. The possible existence of a magnetic order with a field scale close to H_c2, accidentally hidden by superconductivity, has been speculated [6] but lacks direct experimental support.

Comparing CeCoIn5 with the well-documented case of CeRu2Si2 [10,11] is instructive. In the latter system a metamagnetic transition occurs at H_m = 7.8 T: the magnetization jumps from 0.6 µ_B to 1.2 µ_B in a narrow yet finite window (∆H_m = 0.04 T in the T = 0 limit). The passage from an antiferromagnetically (AF) correlated system below H_m to a polarized state dominated by local fluctuations above it is accompanied by a sharp enhancement of the quasi-particle mass in the vicinity of H_m, which is thus akin to a field-tuned QCP. A sudden change of the Fermi surface (FS) topology across the metamagnetic transition has been established by de Haas-van Alphen (dHvA) effect studies [12], where new frequencies were detected above H_m.

In this letter, we report on two sets of experimental studies which indicate that the effect of the magnetic field on the normal-state properties of these two systems shares some common features. They point to the existence of another field scale in CeCoIn5 which has not been previously identified. By measuring the Nernst coefficient and studying quantum oscillations, we find compelling evidence that close to H_K ∼ 23 T, the FS is modified. Therefore, in the T = 0 limit, CeCoIn5 appears to display at least two distinct field scales.
Single crystals of CeCoIn5 were grown using a self-flux method. Thermoelectric coefficients were measured using a one-heater-two-thermometer set-up. dHvA measurements were done using a torque cantilever magnetometer. The magnetometer was mounted in a top-loading dilution refrigerator equipped with a low-temperature rotation stage.

We begin by presenting the field-dependence of the Nernst coefficient in CeRu2Si2, which demonstrates the remarkable sensitivity of this probe. Fig. 1 shows the field-dependence of the Nernst signal (N = E_y/∇_x T) in CeRu2Si2. As seen in the upper panel of the figure, N abruptly changes sign around the metamagnetic transition field, H_m = 7.8 T. Besides this striking feature, the field-dependence of the Nernst signal presents additional structure. The lower panel of the same figure shows the field-dependence of the dynamic Nernst coefficient, ν = ∂N/∂B, at 2.2 K. It presents two anomalies just below and above H_m: a sharp minimum at ∼ 8.2 T and a smaller maximum at 7 T. The inset of the figure shows the temperature-dependence of these two anomalies, which closely follow the lines of the pseudo-phase diagram of CeRu2Si2. These are crossover lines which represent anomalies detected by specific heat [13,14] and thermal expansion [15] measurements. The case of CeRu2Si2 shows how sensitively the Nernst signal probes metamagnetism. This is presumably due to its intimate relationship with the energy dependence of the scattering rate (the so-called Mott formula: ν = (π² k_B² T / 3m) (∂τ/∂ε)|_{ε=μ}).

With this in mind, let us focus on the case of CeCoIn5. The first study of the Nernst effect in this compound found a very large zero-field Nernst coefficient emerging below T* ∼ 20 K [16]. Below this temperature, resistivity is linear in temperature [3,5,17], the Hall coefficient is anomalously large [17,18], the thermoelectric power is anomalously small [16] and the electronic specific heat rises rapidly with decreasing temperature [3,6]. All these anomalous properties gradually disappear when a magnetic field is applied. In particular, the giant Nernst effect also gradually fades away in the presence of a moderate magnetic field [16]. The field-dependence of the Nernst coefficient in the 12 − 28 T field range, reported here for the first time, reveals new features emerging at still higher magnetic fields. The H−T (pseudo-)phase diagram of the system is apparently more complicated than previously suggested, and the field associated with the emergence of the Fermi liquid close to H_c2(0) (∼ 5 T) is not the only relevant field scale for CeCoIn5.

Fig. 2 presents the field-dependence of the Nernst signal, N, and its dynamic derivative, ν, in a magnetic field along the c-axis and up to 28 T. At the onset of superconductivity (T = 2.2 K), N rises rapidly as a function of field at low fields. Thus, its derivative, ν, is large (anomaly no. 1). With the application of a moderate magnetic field, the Nernst signal saturates and, therefore, ν becomes very small. This is the behavior previously detected and identified as the signature of a field-induced Fermi-liquid state. However, above 15 T, N increases suddenly again, rapidly reaches a maximum and then decreases. The field-dependence of N at T = 1.1 K repeats the same scheme with all field scales shifted to lower values. The two anomalies of opposite signs revealed in ν(B) (marked no. 2 and no. 3 in the figure) are reminiscent of what was observed in the case of CeRu2Si2. This indicates the presence of two distinct field scales in CeCoIn5.
However, contrary to the case of CeRu2Si2, they do not tend to merge in the zero-temperature limit. The lower-field anomaly (no. 2) lies close to the line already identified by the resistivity measurements [5]. At T = 0, this line ends up very close to the superconducting upper critical field (∼ 5 T for this field orientation). The high-field anomaly (no. 3) identifies another (almost horizontal) line in the H − T plane and yields a second field scale (∼ 23 T). A sketch of the (pseudo-)phase diagram of CeCoIn5 is shown in the inset of Fig. 2.

After detecting the Nernst effect anomalies, we performed high-resolution dHvA measurements at high fields. They provide further evidence for the existence of another field scale at H ∼ 23 T in CeCoIn5. Fig. 3 shows the torque signal at T = 40 mK as a function of magnetic field applied close to the c-axis. Clear anomalies observed at around 5 T correspond to the suppression of superconductivity. There are no other remarkable anomalies at higher field, in particular none similar to that observed at 9 T in CePd2Si2, where a metamagnetic transition was established by torque measurements [24].
However, a sudden emergence of a new dHvA frequency is clearly detected above ∼ 23 T and becomes evident after subtracting the background (insets of Fig. 3). Remarkably, the amplitude of the new frequency is so strong that it dominates all the other dHvA oscillations above 23 T at very small inclinations from the c-axis. The field-induced emergence of new dHvA frequencies becomes evident when comparing the Fourier spectra of the dHvA oscillations below and above 23 T. Such a comparison is shown in Fig. 4 for magnetic field applied at 0.5° and 2.5° from the c-axis, the two orientations for which the effective masses were measured. The dHvA frequencies and corresponding effective masses for these two orientations are given in Table I. All the fundamental frequencies observed both below and above 23 T are marked in the figure. The frequencies F_i, i = 1...5, are observed both below and above 23 T and are in good agreement with those found in the previous dHvA studies performed at lower fields [25,26,27]. It was shown [26] that all these frequencies correspond to quasi-two-dimensional Fermi surfaces that are well accounted for by the itinerant f-electron band structure calculations.

For the magnetic field orientation at 0.5° from the c-axis, two new frequencies, F_a and F_b, appear in the oscillatory spectrum above 23 T (Fig. 4b). Neither of them was observed in the previous lower-field dHvA studies. The corresponding effective masses of the new frequencies are quite high, of the order of 35 m_0 and 65 m_0. The frequency F_a might correspond to one of the closed orbits of the 15-electron band of the band structure calculations [26]. In this case, however, there is no reason for this frequency not to have been observed at lower field. Even taking into account its high effective mass and its presumable field dependence, it should have been observed at lower fields, since its effective mass is considerably smaller than that of F_b, while its amplitude is much stronger.
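A dHvA frequency F translates into an extremal Fermi-surface cross-section A_k through the Onsager relation A_k = (2πe/ħ)F; this is textbook background, not a formula stated in the text. For the new orbit F_b ≈ 2.34 kT, and under the simplifying (hypothetical) assumption of a circular orbit:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J s

def onsager_area(F_tesla):
    """Extremal Fermi-surface cross-section A_k (in m^-2) corresponding
    to a dHvA frequency F, via the Onsager relation A_k = 2*pi*e*F/hbar."""
    return 2.0 * math.pi * E_CHARGE * F_tesla / HBAR

F_b = 2.34e3                      # the new orbit F_b = 2.34 kT (Table I), in T
A_b = onsager_area(F_b)           # ~2.2e19 m^-2
k_F = math.sqrt(A_b / math.pi)    # radius of an assumed circular orbit, m^-1
```

The resulting k_F is a fraction of a reciprocal angstrom, i.e. a sizeable but not dominant piece of the Brillouin zone, consistent with F_b being much smaller than the main quasi-two-dimensional frequencies F_2–F_5.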
The other frequency, F_b, cannot be reconciled with any orbit from the theoretical calculations. For the magnetic field at 2.5° to the c-axis (Fig. 4c and d), F_a and F_b, with high effective masses, are also present above 23 T. Furthermore, two further frequencies, F_c and F_d, emerge above 23 T for this orientation (Fig. 4d). Like F_a, these two frequencies were observed neither below 23 T nor in the previous studies. The effective masses corresponding to these frequencies are also quite enhanced, being about 18 m_0 and 24 m_0 respectively. The frequencies F_a and F_b exist only over a small angular range close to the c-axis. Therefore, they might be due to a magnetic breakdown. This is not the case for the frequencies F_c and F_d, which survive over a wide angular range between the [001] and [100] directions. Their emergence above 23 T seems to imply an important modification of the FS topology. Since these new frequencies are associated with large effective masses, the drastic change observed in the Nernst signal at 23 T would be a natural consequence of such a modification.

Thus, two independent sets of evidence point to the existence of a new field scale in CeCoIn5 at 23 T: a drastic change in the magnitude and sign of the Nernst signal, and the appearance of new frequencies in the dHvA spectrum. Both these features have been observed in CeRu2Si2 as experimental signatures of the metamagnetic transition. Let us also note that a temperature scale of ∼ 20 K (comparable to the energy associated with a field of 23 T) marks most of the zero-field properties of the heavy-electron fluid in CeCoIn5. However, what occurs in the case of CeCoIn5 does not appear as a metamagnetic transition [23] (defined as a jump in the magnetization in a narrow field window). Let us discuss the possible origins of this feature below.
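The statement that a 23 T field is "comparable to" a ∼ 20 K temperature scale can be checked by equating the Zeeman energy gμ_B H with k_B T; taking g = 2 and the bare Bohr magneton is a simplifying assumption of this sketch (the effective moment in CeCoIn5 differs):

```python
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
K_B = 1.380649e-23        # Boltzmann constant, J/K

H_K = 23.0                                # the new field scale, tesla
zeeman_scale_K = 2.0 * MU_B * H_K / K_B   # g*mu_B*H/k_B with g = 2 (assumed)
# zeeman_scale_K is about 31 K, i.e. the same order as the ~20 K scale
```

The Zeeman scale of roughly 15–31 K (for g between 1 and 2) indeed brackets the zero-field temperature scale T* ∼ 20 K quoted in the text.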
In HF systems, the interplay between magnetic intersite interactions, with a characteristic energy scale E_m, and local Kondo fluctuations, with a characteristic energy k_B T_K, is changed by the application of a magnetic field. When the associated Zeeman energy becomes comparable with one of the characteristic energy scales, the balance between these two interactions is modified. This defines two characteristic field scales, H_m and H_K, given by gμH_m ≃ E_m and gμH_K ≃ k_B T_K. Furthermore, an external magnetic field adds a ferromagnetic (F) component to the existing AF one. The field redistribution among the different AF, F and Kondo components depends on the type of the local anisotropy (Ising, planar or Heisenberg character) and on the lattice deformation produced by magnetostriction. In this context, an important difference between CeRu2Si2 and CeCoIn5 is that the susceptibility in the latter compound is much less anisotropic. CeRu2Si2, with a magnetic anisotropy of 15 compared to 2 in CeCoIn5, is much closer to an Ising-like system. In the latter compound, magnetic field seems to induce a QCP, i.e. the Néel temperature vanishes at H_c2(0) [5,6]. Due to the weak magnetic anisotropy of CeCoIn5, H_m should vanish at the field-induced QCP [1], as was observed in YbRh2Si2 [28] and CeNi2Ge2 [29]. For CeCoIn5, this may explain the existence of a large field domain between H_c2 and H_K where AF and F correlations compete.

The possible occurrence of a metamagnetic transition in the CeMIn5 family (M = Co, Rh or Ir) has been a subject of recent debate [19,20,21,22]. Several studies suggest a metamagnetic transition in CeIrIn5 in the 25 − 42 T field range [19,20,21,22]. On the other hand, the modification of the FS topology observed here may be closely related to the drastic change of the FS recently detected in CeRhIn5 at P = 2.4 GPa, just above its critical pressure P_c = 2.35 GPa [30].
The opposite effects on the FS produced by pressure and magnetic field are in good agreement with their opposite actions on the volume (contraction and expansion, respectively).

In conclusion, we have demonstrated the existence of another field scale in CeCoIn5 besides H_c2. The new characteristic field H_K ∼ 23 T is marked by an anomaly in the Nernst signal reminiscent of that observed at the metamagnetic field of CeRu2Si2. The characteristic field is also associated with the appearance of new frequencies in dHvA oscillations, as in CeRu2Si2. However, contrary to the latter system, there appear to be two distinct field scales widely separated in the T = 0 limit.

FIG. 1: Upper panel: field-dependence of the Nernst signal in CeRu2Si2 for selected temperatures. Lower panel: the Nernst coefficient, defined as the derivative of the signal, at T = 2.2 K. The arrows show the position of a maximum and a minimum close to the metamagnetic transition field. The inset shows the position of these anomalies (triangles) in the H − T plane compared to those detected by previous studies of thermal expansion (solid and empty circles [15]) and specific heat (solid and empty squares [13,14]).

FIG. 2: Upper panel: field-dependence of the Nernst signal in CeCoIn5 for two temperatures. Lower panel: the Nernst coefficient for 2.2 K obtained by taking the derivative of the data presented in the upper panel. Arrows identify three anomalies in the 2.2 K curve. A sketch of the H − T phase diagram based on these anomalies is shown in the inset of the upper panel.

FIG. 3: Field-dependence of the magnetic torque in CeCoIn5 observed at 40 mK for two orientations of the magnetic field close to the c-axis. The insets show the high-field oscillatory torque signal after subtracting the background.

FIG. 4: Fourier spectra of the dHvA oscillations below (a and c) and above (b and d) 23 T, compared for magnetic field applied at 0.5° (a and b) and 2.5° (c and d) from the c-axis. Only the fundamental frequencies are denoted by letters. The frequencies that are present both below and above 23 T and were observed in the previous measurements are denoted by F with a numeric index, while the new frequencies that appear above 23 T only are denoted by F with a letter index.
TABLE I: dHvA frequencies and corresponding effective masses observed below and above H_K ∼ 23 T.

                16 − 23 T              23 − 28 T
            F (kT)   m* (m0)       F (kT)   m* (m0)
θ = 0.5°
    F1       0.31     11.8          0.32      —
    Fa        —        —            1.15     34.5
    Fb        —        —            2.34     65.7
    F2       4.39      7.4          4.37      6.0
    F3       4.84     12.8          4.81     14.5
    F4       5.48     18.4          5.42     13.2
    F5       7.14     24.2          6.97     26.0
θ = 2.5°
    F1       0.30     10.0          0.29      4.41
    Fa        —        —            1.13     33.4
    Fb       2.12      —            2.36     48.8
    F2       4.39      6.8          4.38      4.3
    F3       4.86     11.2          4.81     15.2
    F4       5.47     14.2          5.44     11.0
    Fc        —        —            5.89     18.3
    Fd        —        —            6.42     23.6
    F5       7.15     31.3          6.98     27.0

[1] For a review see J. Flouquet, cond-mat/0501602.
[2] G. R. Stewart, Rev. Mod. Phys. 73, 797 (2001).
[3] C. Petrovic et al., J. Phys. Condens. Matter 13, L337 (2001).
[4] V. A. Sidorov et al., Phys. Rev. Lett. 89, 157004 (2002).
[5] J. Paglione et al., Phys. Rev. Lett. 91, 246405 (2003).
[6] A. Bianchi et al., Phys. Rev. Lett. 91, 257001 (2003).
[7] E. D. Bauer et al., Phys. Rev. Lett. 94, 047001 (2004).
[8] R. Ronning et al., cond-mat/0409750 (2004).
[9] A. Bianchi et al., Phys. Rev. Lett. 89, 137002 (2002).
[10] P. Haen et al., J. Low Temp. Phys. 67, 391 (1987).
[11] J. Flouquet, P. Haen, S. Raymond, D. Aoki and G. Knebel, Physica B 319, 251 (2002).
[12] H. Aoki et al., Phys. Rev. Lett. 71, 2110 (1993).
[13] Y. Aoki et al., J. Magn. Magn. Mater. 177-181, 271 (1998).
[14] H. P. van der Meulen et al., Phys. Rev. B 44, 814 (1991).
[15] C. Paulsen et al., J. Low Temp. Phys. 81, 317 (1990).
[16] R. Bel et al., Phys. Rev. Lett. 92, 217002 (2004).
[17] Y. Nakajima et al., J. Phys. Soc. Jpn. 73, 5 (2004).
[18] M. F. Hundley et al., Phys. Rev. B 70, 035113 (2004).
[19] T. Takeuchi et al., J. Phys. Soc. Jpn. 70, 877 (2001).
[20] J. S. Kim et al., Phys. Rev. B 65, 174520 (2002).
[21] E. C. Palm et al., Physica B 329-333, 587 (2003).
[22] C. Capan et al., Phys. Rev. B 70, 180502(R) (2004).
[23] H. Shishido et al., J. Phys. Soc. Jpn. 71, 162 (2002).
[24] I. Sheikin et al., Phys. Rev. B 67, 094420 (2003).
[25] D. Hall et al., Phys. Rev. B 64, 212508 (2001).
[26] R. Settai et al., J. Phys.: Condens. Matter 13, L627 (2001).
[27] A. McCollam, S. R. Julian, P. M. C. Rourke, D. Aoki and J. Flouquet, Phys. Rev. Lett. 94, 186401 (2005).
[28] W. Tokiwa et al., J. Magn. Magn. Mater. 272-276, E87 (2004).
[29] P. Gegenwart et al., J. Low Temp. Phys. 133, 3 (2004).
[30] H. Shishido, R. Settai, H. Harima, and Y. Ōnuki, unpublished.
An exact string representation of 3d SU(2) lattice Yang-Mills theory

Florian Conrady and Igor Khavkine

Physics Department and Institute for Gravitational Physics and Geometry, Penn State University, University Park, Pennsylvania, U.S.A.
Department of Applied Mathematics, University of Western Ontario, London, Ontario, Canada

arXiv:0706.3423 (https://arxiv.org/pdf/0706.3423v1.pdf)

Abstract: We show that 3d SU(2) lattice Yang-Mills theory can be cast in the form of an exact string representation. The derivation starts from the exact dual (or spin foam) representation of the lattice gauge theory. We prove that every dual configuration (or spin foam) can be equivalently described as a self-avoiding worldsheet of strings on a framing of the lattice. Using this correspondence, we translate the partition function into a sum over closed worldsheets that are weighted with explicit amplitudes. The expectation value of two Polyakov loops with spin j becomes a sum over worldsheets that are bounded by 2j strings along a framing of the loops.
(23 Jun 2007)

* Electronic address: [email protected]
† Electronic address: [email protected]

I. INTRODUCTION

It has been a long-standing conjecture that gauge theory has a dual or effective description in terms of string-like degrees of freedom (see e.g. [1,2,3,4] for a review). The idea made its first appearance in the 60's, when dual resonance models of hadron scattering were interpreted in terms of strings [5,6,7,8,9]. It reappeared again when Wilson introduced the strong-coupling expansion and argued that confinement is an effect of flux lines between quarks [10]. This motivated attempts to find an exact or effective string representation of Yang-Mills theory, and led to the study of loop equations [11,12], and to lattice and continuum models of the Nambu-Goto string (see e.g. [13,14,15] and refs. in [16]).
Then, critical string theory was introduced [17,18,19], and developed further into superstring theory, with the broader aim of unifying gauge theory, matter and gravity. Another aspect of the gauge-string duality was revealed when 't Hooft analyzed the large N limit of perturbative Yang-Mills theory [20]. In the context of string theory, the idea was revived more recently by the AdS-CFT correspondence: by the conjecture that a supersymmetric conformal Yang-Mills theory has an equivalent description in terms of superstrings in an AdS spacetime [21,22,23,24].

Conceptually, the present paper is close to Wilson's original approach, where flux lines arise as diagrams of a strong-coupling expansion. There are different versions of the strong-coupling expansion that have different convergence properties. Here, we are concerned with the "resummed" expansion that is convergent for any coupling [25,26]: it results from an expansion of plaquette actions into a basis of characters, and from a subsequent integration over the connection. Thus, the sum over graphs is not an expansion in powers of β, but rather a dual representation that is equivalent to the original lattice gauge theory [27,28,29,30]. For this reason, we try to avoid the adjective "strong-coupling" and call the graphs instead spin foams [30]. Originally, this name was introduced for SU(2) [31], but it is also used for general gauge groups. In the case of SU(2), one obtains a sum over spin assignments to the lattice that satisfy certain spin coupling conditions. Each admissible configuration is a spin foam. To some extent, the concept of spin foams already embodies the idea of an exact gauge-string duality: spin foams can be considered as branched surfaces that are worldsheets of flux lines (see sec. 6.3 in [32] and [33]). Due to the branching and the labelling with representations, these surfaces are not worldsheets as in string theory, however.
The new element of this paper is the following: we show that in 3 dimensions spin foams of SU(2) can be decomposed into worldsheets that do not branch and carry no representation label. They can be regarded as worldsheets of strings in the fundamental representation. To carry out this decomposition, we have to apply two modifications to the lattice: the cubic lattice is replaced by a tesselation by cubes and truncated rhombic dodecahedra. This ensures that at every edge exactly three faces intersect. In the second step, the 2-skeleton of this lattice is framed (or thickened). The thickening allows us to replace each spin assignment j_f to a face by 2j_f sheets of a surface. We show that these sheets can be connected to form a worldsheet in the thickened complex. Moreover, by imposing suitable restrictions on the worldsheets, we can establish a bijection between spin foams and worldsheets. Once this bijection is given, it is simple to rewrite exact sums over spin foams as exact sums over worldsheets. The boundary conditions depend on the observable that is computed by the spin foam sum. In the case of a Wilson loop in the representation j, the sum extends over worldsheets that are bounded by 2j closed strings. In this paper, we derive the sum over worldsheets explicitly for two Polyakov loops of spin j that run parallel through the lattice.

The paper is organized as follows: in section II we set our conventions for spin foams and their boundaries (so-called spin networks). Then, we specify 3d SU(2) lattice Yang-Mills theory with the heat kernel action (sec. III). In section IV, we describe the dual transform of the partition function and of the expectation value of two Polyakov loops. The central part of the paper is section V, where we introduce worldsheets on the framed lattice, and prove the bijection between worldsheets and spin foams.
In the final section, we formulate both the partition function and the expectation value of the Polyakov loops as exact sums over worldsheets with explicit amplitude factors.

II. SPIN FOAMS AND SPIN NETWORKS

In this section, we set our conventions for spin foams and spin networks of SU(2). Spin networks formalize the concept of flux line, and spin foams can be regarded as worldsheets of these flux lines. In this paper, spin foams will live on 3-complexes where at each interior edge exactly three faces meet. Spin networks will only lie on the boundary of this complex. For this reason, we do not need to consider the most general concept of spin foam and spin network that could occur, and restrict ourselves to the following definition.

Let Λ be a complex where at each interior edge exactly three faces meet. A spin foam F on Λ is given by an assignment of a spin j_f to every face f of Λ such that at every interior edge e of Λ the triangle inequality is satisfied by the three adjacent spins. Dually, the spin foam can be described as a configuration on the dual complex Λ*: then, the spin foam F is specified by spins j_e on edges of Λ*, where for every triangle of Λ*, the spins on the edges of the triangle satisfy the triangle inequality. We define a spin network S on the boundary ∂Λ as an assignment of spins j_e to edges in the boundary ∂Λ such that for every vertex in the boundary the adjacent spins satisfy the triangle inequality. A particularly simple example of a spin network is a non-selfintersecting loop C that carries a spin label j. We denote such a spin network by (C, j). Each spin foam on Λ induces a spin network on the boundary ∂Λ, which we call the boundary ∂F of F.
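The admissibility condition in these definitions is easy to check mechanically. The sketch below tests a spin assignment against the triangle inequality at every trivalent edge; the integrality of j1 + j2 + j3 (needed for an SU(2) intertwiner to exist) is implicit in the definition above and is included here as an assumption. The data layout (dicts and face-id triples) is my own illustration, not notation from the paper.

```python
from fractions import Fraction

def admissible_triple(j1, j2, j3):
    """Three SU(2) spins can couple invariantly iff they satisfy the
    triangle inequality and j1 + j2 + j3 is an integer (the parity
    condition is implicit in the text's definition)."""
    s = j1 + j2 + j3
    return abs(j1 - j2) <= j3 <= j1 + j2 and s == int(s)

def is_spin_foam(face_spins, interior_edges):
    """face_spins: dict face_id -> spin (int or Fraction half-integer).
    interior_edges: triples of the three face ids meeting at each edge."""
    return all(admissible_triple(face_spins[a], face_spins[b], face_spins[c])
               for a, b, c in interior_edges)
```

For example, the triple (1/2, 1/2, 1) is admissible, while (1/2, 1/2, 2) fails the triangle inequality and (1/2, 1, 1) fails the parity condition.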
III. SU(2) LATTICE YANG-MILLS THEORY IN 3 DIMENSIONS

The partition function of 3-dimensional SU(2) lattice Yang-Mills theory is defined by a path integral over SU(2)-valued link (or edge) variables U_e on a cubic lattice κ:

Z = ∫ ∏_{e ⊂ κ} dU_e exp( − Σ_f S_f(U_f) )   (1)

The face (or plaquette) action S_f depends on the holonomy U_f around the face. As in paper I, we choose S_f to be the heat kernel action (for more details on the definition, see [34]). The heat kernel action has a particularly simple expansion in terms of characters, namely,

exp( − S_f(U_f) ) = Σ_j (2j + 1) e^{−(2/β) j(j+1)} χ_j(U_f) .   (2)

The coupling factor β is related to the gauge coupling g via

β = 4/(a g²) + 1/3 .   (3)

The expectation value of a Wilson loop C in the representation j is

⟨tr_j U_C⟩ = ∫ ∏_{e ⊂ κ} dU_e tr_j(U_C) exp( − Σ_f S_f(U_f) ) .   (4)

U_C denotes the holonomy along the loop C.

IV. SPIN FOAM REPRESENTATION

A. Partition function

In general, there are several equivalent ways of writing down a sum over spin foams. Here, we will use a scheme by Anishetty, Cheluvaraja, Sharatchandra and Mathur [27], where the amplitude is expressed in terms of 6j-symbols. In the paper by Anishetty et al., spin foams are described by spin assignments j_e to edges of a triangulation T. For the purpose of the present paper, it is convenient to go to the dual picture, where spin foams are spin assignments j_f to faces of the dual T*. Let us call this lattice κ̃. It is given by a tesselation of the 3-dimensional space by cubes and truncated rhombic dodecahedra (see Fig. 1). The complex κ̃ contains two types of faces: square faces that correspond to faces of the original cubic lattice κ, and hexagonal faces that connect pairs of square faces. At each edge of κ̃, exactly three faces meet, and at each vertex we have six intersecting faces. We will be slightly sloppy with our notation and write f ⊂ κ to denote the square faces of κ̃.
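Expansion (2) can be evaluated numerically on conjugacy classes of SU(2), where the character of spin j is χ_j(θ) = sin((2j+1)θ/2)/sin(θ/2). The sketch below truncates the sum at a finite spin; since the coefficients fall off as exp(−2j(j+1)/β), the truncation error is negligible for moderate β. This is an illustration of eq. (2), not code from the paper.

```python
import math

def su2_character(j, theta):
    """Character chi_j on the SU(2) conjugacy class with angle theta."""
    if abs(math.sin(theta / 2.0)) < 1e-12:
        return 2.0 * j + 1.0   # limit theta -> 0: chi_j = dim = 2j+1
    return math.sin((2.0 * j + 1.0) * theta / 2.0) / math.sin(theta / 2.0)

def heat_kernel_weight(theta, beta, jmax=20):
    """Truncated character expansion of exp(-S_f), eq. (2):
    sum over j = 0, 1/2, 1, ... of (2j+1) exp(-(2/beta) j(j+1)) chi_j."""
    total = 0.0
    for twoj in range(0, 2 * jmax + 1):   # 2j = 0, 1, 2, ...
        j = twoj / 2.0
        total += (2.0 * j + 1.0) * math.exp(-2.0 * j * (j + 1.0) / beta) \
                 * su2_character(j, theta)
    return total
```

One can check that the truncated sum is already converged at modest jmax and that the resulting plaquette weight is positive, as expected for a heat kernel.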
After the dual transformation, the partition function (1) is expressed as a sum over closed spin foams F on κ̃, where each spin foam carries a certain weight:

Z = Σ_{F | ∂F = ∅} [ ∏_{f ⊂ κ̃} (2j_f + 1) ] [ ∏_{v ⊂ κ̃} A_v ] [ ∏_{f ⊂ κ} (−1)^{2j_f} e^{−(2/β) j_f(j_f+1)} ] .   (5)

In the amplitude, every face contributes with the dimension 2j_f + 1 of the representation j_f. In addition, square faces give an exponential of the Casimir and a sign factor (−1)^{2j_f}. For each vertex of κ̃, we get the value of a so-called tetrahedral spin network as a factor:

A_v = { j_1 j_2 j_3 ; j_4 j_5 j_6 }   (6)

The edges of the tetrahedral spin network correspond to faces of the spin foam surrounding the vertex v, and the vertices of the spin network correspond to the edges where these faces meet (see Fig. 1). The value of the spin network is equal to a 6j-symbol, where the spins j_1, j_2 and j_3 are read off from any vertex of the tetrahedron.

B. Polyakov loops

The dual transformation can also be applied to expectation values of observables such as Wilson loops or products of them. When the dual transform of such loops is computed, the explicit form of the amplitudes depends on the geometry of the loops. For a rectangular Wilson loop, it was explicitly determined by Diakonov & Petrov [28]. In ref. [36], one of us derived the dual amplitude for Polyakov loops. In the following, we will consider the example of Polyakov loops, since everywhere along the loops the amplitude has the same structure. In the case of a rectangular Wilson loop, one has to distinguish between the straight part and the corners of the loop. We let the Polyakov loops C_1 and C_2 run along zig-zag paths through the lattice κ and adopt boundary conditions that identify lattice points on opposing ends of diagonals (see Fig. 2). As before, we introduce a tesselation κ̃, where square faces correspond to faces of the original lattice, and hexagonal faces connect pairs of such faces.
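The vertex amplitude (6) is an ordinary Wigner 6j-symbol, which can be computed from Racah's single-sum formula; the formula itself is standard background, not derived in the text. A self-contained sketch:

```python
from math import factorial, sqrt

def _tri(a, b, c):
    # Triangle coefficient Delta(a,b,c) = (a+b-c)!(a-b+c)!(-a+b+c)!/(a+b+c+1)!
    return (factorial(int(a + b - c)) * factorial(int(a - b + c)) *
            factorial(int(-a + b + c)) / factorial(int(a + b + c + 1)))

def wigner_6j(j1, j2, j3, j4, j5, j6):
    """Racah's formula for the 6j-symbol {j1 j2 j3; j4 j5 j6};
    spins may be half-integers (0.5, 1, 1.5, ...)."""
    triads = [(j1, j2, j3), (j1, j5, j6), (j4, j2, j6), (j4, j5, j3)]
    for a, b, c in triads:
        # Each coupled triad must satisfy the triangle inequality
        # and have integer sum; otherwise the symbol vanishes.
        if a + b - c < 0 or a - b + c < 0 or -a + b + c < 0 or (a + b + c) % 1 != 0:
            return 0.0
    pref = sqrt(_tri(*triads[0]) * _tri(*triads[1]) *
                _tri(*triads[2]) * _tri(*triads[3]))
    lo = max(int(a + b + c) for a, b, c in triads)
    hi = int(min(j1 + j2 + j4 + j5, j2 + j3 + j5 + j6, j3 + j1 + j6 + j4))
    total = 0.0
    for t in range(lo, hi + 1):
        den = 1
        for a, b, c in triads:
            den *= factorial(t - int(a + b + c))
        den *= factorial(int(j1 + j2 + j4 + j5) - t)
        den *= factorial(int(j2 + j3 + j5 + j6) - t)
        den *= factorial(int(j3 + j1 + j6 + j4) - t)
        total += (-1) ** t * factorial(t + 1) / den
    return pref * total
```

For instance, the special value {1 1 1; 0 1 1} = −1/3 follows from the closed formula {a b c; 0 c b} = (−1)^{a+b+c}/√((2b+1)(2c+1)) and is reproduced by the code.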
To describe the spin foam sum for the Polyakov loops, we need to modify this lattice. This happens in several steps: first we remove all 3-cells, so that we obtain the 2-skeleton of κ̃. In κ̃ the Polyakov loops C_1 and C_2 correspond to two closed sequences of hexagons. Imagine that we draw a closed loop within each sequence that connects the centers of neighbouring hexagons (see Fig. 3). For each pair of neighbouring hexagons, we also add an edge that connects their centers directly, i.e. in a straight line outside the 2-complex. Each such edge forms a triangle with the edges inside the hexagons. We include these triangular faces in the complex, and call the resulting 2-complex again κ̃. Its boundary consists of two loops, which we denote by C̃_1 and C̃_2 respectively.

Figure 3: Modification of the complex κ̃: the effect of the Polyakov loops can be described by inserting additional faces.

Using this complex, we can describe the spin foam sum of the two Polyakov loops as follows. It is given by

⟨tr_j U_{C_1} tr_j U_{C_2}⟩ = (1/Z) Σ_{F | ∂F = (C̃_1 ∪ C̃_2, j)} [ ∏_{f ⊂ κ̃} (2j_f + 1) ] [ ∏_{v ⊂ κ̃} A_v ] [ ∏_{f ⊂ κ} (−1)^{2j_f} e^{−(2/β) j_f(j_f+1)} ] .   (7)

The difference to (5) consists of the modification of the complex and the boundary condition ∂F = (C̃_1 ∪ C̃_2, j). The boundary condition requires that the spin on the loop edges is j. The attachment of triangles along C̃_1 ∪ C̃_2 creates two types of new vertices in the complex: vertices in the middle of hexagons along C̃_1 ∪ C̃_2, and vertices in the middle of the boundary edge between such hexagons. In the first case, the vertex amplitude is trivial, i.e.
    A_v = 1 .    (8)

To the second type of vertex we associate a tetrahedral spin network whose edges and vertices correspond to faces and edges around this vertex:

    A_v = (-1)^{j_3 - j'_3} (-1)^{j_1 - j'_1} (-1)^{j_1 + j_3 + j_2 + j} \times [\text{tetrahedral spin network with edge labels } j_1, j_2, j_3, j'_1, j'_3, j]    (9)
        = (-1)^{j_3 - j'_3} (-1)^{j_1 - j'_1} (-1)^{j_1 + j_3 + j_2 + j} \begin{Bmatrix} j_1 & j_3 & j_2 \\ j'_3 & j'_1 & j \end{Bmatrix} .    (10)

The spins j_1, j_2 and j_3 are read off from one of the two vertices not adjacent to j: if the edge with spin j is drawn at the top (as in Fig. 3), this vertex is on the left side of j in the direction of passage of the Polyakov loop, i.e. on the left side in the direction from j_3, j'_3 towards j_1, j'_1.

V. WORLDSHEET INTERPRETATION OF SPIN FOAMS

A. Definition of worldsheets

To arrive at the worldsheet interpretation of spin foams, we have to apply a further modification to the complex κ̃. We "frame" κ̃, so that it becomes a 3-complex. Under this framing each 2-cell f of κ̃ is turned into a 3-cell f' that has the topology of f × (0, 1). Neighbouring cells are connected as in Fig. 4 and Fig. 6a. The resulting 3-complex is called κ'. The precise metric properties of κ' do not matter as long as it has the required cell structure. The framing of κ̃ also induces a framing of the boundary ∂κ̃. Each 1-cell e ⊂ ∂κ̃ is thickened into a 2-cell e' that has the topology of a half-open strip [0, 1] × (0, 1). Note that the boundary ∂e' of e' is disconnected. When we speak of the boundary ∂κ' of κ', we mean the union of all such framed edges e': they form two ribbons, the framed version of the two loops C̃_1 and C̃_2 (see Fig. 3). Consider a compact embedded surface³ S in κ' whose boundary lies in ∂κ'. Take a framed 3-cell f' in κ' and determine the intersection S ∩ ∂f' of the surface with the cell boundary ∂f'. In general, this intersection can be empty or consist of loops, lines and points. The cell boundary ∂f' has the topology of an open annulus, so there are two types of loops: loops that are contractible in ∂f' and loops that are not.
Let us assume that for any f' ⊂ κ', the intersection S ∩ ∂f' contains only loops of the non-contractible kind. We count the number of such loops in ∂f' and call it N_f. Obviously, this number does not change if we apply a homeomorphism to S that is connected to the identity and maps cell boundaries ∂f' onto themselves. In this limited sense, the numbers N_f, f ⊂ κ̃, are topological invariants. Moreover, they satisfy constraints. To see this, consider a triple f_1, f_2, f_3 of faces that intersect along an edge e of κ. Correspondingly, we have three framed faces f'_1, f'_2, f'_3 of κ' that intersect along 2-cells e'_{12}, e'_{23}, e'_{31} (see Fig. 4). The surface S ⊂ κ' induces non-contractible loops within the boundaries ∂f'_1, ∂f'_2, ∂f'_3 (see Fig. 5). Clearly, each loop in a boundary ∂f'_i borders exactly one loop from another boundary ∂f'_j, i ≠ j. This pairing of loops implies that the numbers N_{f_1}, N_{f_2}, N_{f_3} satisfy the triangle inequality

    |N_{f_1} - N_{f_2}| \le N_{f_3} \le N_{f_1} + N_{f_2} .    (11)

If we write j_f = N_f/2, this is precisely the spin coupling constraint that defines a spin foam F with spins j_f. We see therefore that the numbers N_f define spin foams F on κ̃! We will show, in fact, that for every spin foam F there is a surface S whose loop numbers are given by F, and if we restrict the surfaces suitably there is a bijection between surfaces in κ' and spin foams on κ̃. On the boundary this relation induces a correspondence between curves on ∂κ' and spin networks on ∂κ̃. We will first define a suitable class of surfaces and curves, and then prove that the bijection holds. Motivated by the well-known conjectures about gauge-string dualities, we call the surfaces and curves worldsheets and strings. Equivalence relations will be furnished by homeomorphisms h : Λ → Λ on n-complexes Λ, n = 2, 3, that

1. map boundaries ∂c of n-cells c onto themselves, and
2. are connected to the identity through homeomorphisms with property 1.
Let Homeo(Λ) denote the set of such restricted homeomorphisms.

Definition. A string γ on κ' is an embedded, not necessarily connected, compact closed curve in the boundary of κ' where for each 2-cell c of ∂κ' the intersection γ ∩ c consists of lines and each line intersects ∂c in two end points that are not contractible in ∂c. We consider two strings γ and γ' as equivalent if they are related by a homeomorphism h ∈ Homeo(∂κ').

Definition. A worldsheet w on κ' is an embedded, not necessarily connected, compact surface in κ' such that (i) the boundary ∂w is a string on ∂κ', and (ii) for each 3-cell f' of κ' the intersection w ∩ f' consists of disks and each disk intersects ∂f' in a loop that is non-contractible in ∂f'. We consider two worldsheets w and w' as equivalent if they are related by a homeomorphism h ∈ Homeo(κ').

B. Correspondence between spin foams and worldsheets

Since the boundary of κ' has the topology of S¹ ∪ S¹, the correspondence between strings on ∂κ' and spin networks on ∂κ̃ is rather trivial. It is clear from the definition that a string on ∂κ' is a union of N_1 disjoint loops along C̃_1 × (0, 1) and N_2 disjoint loops along C̃_2 × (0, 1). We denote this string by γ_{C̃_1,N_1} ∪ γ_{C̃_2,N_2}. On the other hand, the only possible spin networks are given by the loops (C̃_1, j_1) ∪ (C̃_2, j_2) with spins j_1 and j_2. Therefore, a one-to-one correspondence is set up by associating the string γ_{C̃_1,2j_1} ∪ γ_{C̃_2,2j_2} to the spin network (C̃_1, j_1) ∪ (C̃_2, j_2). Let us now choose fixed values for the spins j_1 and j_2. Denote the set of all spin foams F s.t. ∂F = (C̃_1, j_1) ∪ (C̃_2, j_2) by F, and let W stand for the set of worldsheets s.t. ∂w = γ_{C̃_1,2j_1} ∪ γ_{C̃_2,2j_2}.

Proposition V.1. There is a bijection f : F → W between spin foams F on κ̃ s.t. ∂F = (C̃_1, j_1) ∪ (C̃_2, j_2) and worldsheets w on κ' s.t. ∂w = γ_{C̃_1,2j_1} ∪ γ_{C̃_2,2j_2}.

Proof. We start by constructing a map f : F → W. Then, we will show that f is injective and surjective.
Let F be a spin foam in F. Consider the vertices v of κ' where six 3-cells intersect. Denote the set of these vertices as V'. Around each vertex v ∈ V' we choose a closed ball B_ǫ(v) of radius ǫ. The intersection of the balls with cells of κ' defines a new, finer complex that we call κ'_±. We can view this complex as the union of two complexes κ'_+ and κ'_−, where κ'_+ results from κ'_± by removing the interior of all balls B_ǫ(v):

    \kappa'_+ = \kappa'_\pm \setminus \bigcup_{v \in V'} \mathring{B}_\epsilon(v)    (12)

κ'_−, on the other hand, is the subcomplex of κ'_± that remains when we keep the balls B_ǫ(v) and delete the rest. Every 3-cell f' of κ' is a union

    f' = f'_+ \cup \bigcup_i f'_{-i}    (13)

where f'_+ is a 3-cell of κ'_+ and the f'_{-i}, i = 1, …, n, are 3-cells in κ'_−. In order to construct the worldsheet corresponding to the spin foam F, we will first build a surface in the complex κ'_+. In the second step, we will also fill the balls B_ǫ(v) with surfaces, so that the union of all surfaces gives a worldsheet in κ'. Consider an arbitrary face f of κ̃ with spin j_f determined by the spin foam F. The corresponding 3-cell f'_+ in κ'_+ has the topology of a closed 3-ball with two punctures. Its boundary ∂f'_+ is an open annulus. In each such 3-cell f'_+ we place N_f = 2j_f disjoint closed disks whose boundaries are given by non-contractible loops in ∂f'_+. Along every edge e in the interior of κ'_+ three 3-cells f'_{+1}, f'_{+2}, f'_{+3} intersect. Due to the spin coupling conditions, the numbers N_{f'_{+1}}, N_{f'_{+2}}, N_{f'_{+3}} satisfy the triangle inequality

    |N_{f'_{+1}} - N_{f'_{+2}}| \le N_{f'_{+3}} \le N_{f'_{+1}} + N_{f'_{+2}} .    (14)

This implies that we can rearrange the disks in such a way that their boundary edges are pairwise congruent at the shared boundaries of the cells f'_{+1}, f'_{+2}, f'_{+3}. We repeat this procedure for every edge e ⊂ κ'_+ where three 3-cells meet, and thereby obtain a compact embedded surface w_+ in κ'_+.
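The spin coupling conditions used here, the triangle inequality (14) together with the evenness of N_1 + N_2 + N_3 that the loop pairing enforces automatically, are straightforward to check in code. The following Python helper is purely illustrative (not from the paper; the names are ours) and also computes the unique number of loop pairs shared between each pair of cell boundaries:

```python
def admissible(n1: int, n2: int, n3: int) -> bool:
    """Check whether loop numbers (n1, n2, n3) around an edge can come from
    a pairing of loops, i.e. satisfy the SU(2) coupling conditions for the
    spins j_i = n_i / 2."""
    triangle = abs(n1 - n2) <= n3 <= n1 + n2
    # Pairing loops between the three cell boundaries forces
    # n1 + n2 + n3 = 2 * (number of pairs), hence an even sum;
    # in spin language, j1 + j2 + j3 must be an integer.
    even_sum = (n1 + n2 + n3) % 2 == 0
    return triangle and even_sum

def pair_counts(n1: int, n2: int, n3: int):
    """Number of loop pairs shared between each pair of cell boundaries,
    obtained by solving n1 = p12 + p13, n2 = p12 + p23, n3 = p13 + p23."""
    assert admissible(n1, n2, n3)
    p12 = (n1 + n2 - n3) // 2
    p13 = (n1 + n3 - n2) // 2
    p23 = (n2 + n3 - n1) // 2
    return p12, p13, p23
```

For instance, `admissible(1, 1, 2)` holds (two spin-1/2 loops pairing with a spin-1 face, with `pair_counts(1, 1, 2)` giving `(0, 1, 1)`), whereas `admissible(1, 1, 1)` fails because the sum is odd.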
Up to homeomorphisms h ∈ Homeo(κ'_+), this surface is uniquely determined by our procedure. We now explain how we fill the "holes" in κ'_+, so that we get a surface in the entire complex κ'. Each ball B_ǫ(v) defines a subcomplex of κ'_− as depicted in Fig. 6a. It consists of six 3-cells c_1, …, c_6. The boundary ∂c_i of each cell is topologically an open annulus, and subdivided into five 2-cells. Four of these 2-cells are shared with neighbouring 3-cells c_j, j ≠ i, and one of them lies in the boundary ∂B_ǫ(v) of the ball. We call the former type of 2-cell internal, and the latter one external. To fill this complex with surfaces, it is helpful to use another, topologically equivalent complex that is shown in Fig. 6b: the interior of the ball B_ǫ(v) corresponds to the interior of a tetrahedron and the boundary ∂B_ǫ(v) is projected onto one of the four triangles. This triangle has three punctures. For every ball B_ǫ(v), the surface w_+ induces an embedded closed curve γ_v along the boundary ∂B_ǫ(v). The curve consists of n loops l_i, i.e. γ_v = l_1 ∪ ⋯ ∪ l_n. In the alternative representation of Fig. 6b, the curve appears as a set of embedded loops in the bottom triangle that wind around the three punctures (see Fig. 7). To create the surface in B_ǫ(v), we will cover the n loops by n disks in B_ǫ(v). This will be done in such a way that condition (ii) for worldsheets is satisfied. Consider a single 3-cell c_i in the ball B_ǫ(v), and the one external 2-cell in its boundary ∂c_i. The intersection of the curve γ_v with this 2-cell gives a number of lines e_{ik}, k = 1, …, K_i. Each of the two end points of a line e_{ik} is located on a 1-cell shared by an external and an internal 2-cell of ∂c_i. Let us now draw a line from one of the end points through the internal 2-cell to the vertex in the center of B_ǫ(v). The same is done for the second end point.
Together with the original line e_{ik}, these lines form a loop in the cell boundary ∂c_i. We fill this loop with a disk d_{ik} in c_i, so that the intersection d_{ik} ∩ ∂c_i is again the loop. We repeat this procedure for every line e_{ik} in the cell c_i, and then in every cell c_i. Along the boundary between neighbouring 3-cells, we glue the disks together: when a line e_{ik} is connected to another line e_{i'k'}, i ≠ i', the corresponding disks are glued together along the internal 2-cell ∂c_i ∩ ∂c_{i'}. This can be done in such a way that the resulting total surface intersects itself only in one point: at the vertex in the center of B_ǫ(v), like a stack of sheets that are pinched together. Let us call this surface w_−. Observe that w_− satisfies property (ii) in the subcomplex B_ǫ(v). Due to the way we have placed disks outside of B_ǫ(v), every line e_{ik} connects 1-cells of ∂B_ǫ(v) that are disconnected. As a result, each loop d_{ik} ∩ ∂c_i is non-contractible in ∂c_i. To arrive at an embedded surface, we need to remove the point of degeneracy at the center of the ball B_ǫ(v). We do so by moving the different parts of w_− slightly apart, and in such a way that no new components are created in the intersections w_− ∩ c_i. The latter ensures that the new surface w_− still has property (ii). Up to homeomorphisms h ∈ Homeo(B_ǫ(v)) which leave γ_v invariant, w_− is the unique embedded surface that is bounded by γ_v and meets condition (ii). After filling each ball B_ǫ(v) with such a surface w_−, we take the union of the surfaces w_− with w_+. This gives us an embedded compact surface w in κ'. Let us check if w meets requirements (i) and (ii) of the definition of a worldsheet. Due to the arrangement of disks in 3-cells c of κ'_+, the induced loops in the boundary ∂c never connect 1-cells that are connected in ∂c.
This means, in particular, that the induced curve in the boundary ∂κ' consists of lines in 2-cells c, where each line connects two disconnected 1-cells of ∂c. Therefore, the boundary of each line cannot be contracted in ∂c, and the surface w has property (i). How about property (ii)? The surface has the desired property for the cells of κ'_+, and we showed the same for each subcomplex B_ǫ(v). It is clear from this that w has property (ii) in κ'. We conclude that w is a worldsheet on κ'. The whole construction defines a map f : F → W from spin foams to worldsheets. Next we prove that f is injective and surjective. Let F and F' be two different spin foams. There must be a face f ⊂ κ̃ for which N_f ≠ N'_f. This implies that the corresponding worldsheets w and w' are different, since they have different invariants under the homeomorphisms h ∈ Homeo(κ'). Thus, f is injective. To check surjectivity, consider an arbitrary worldsheet w ∈ W. Within each 3-cell c of κ', the worldsheet induces N_f disks that are bounded by non-contractible loops in ∂c. The numbers N_f define a spin foam F with spins j_f = N_f/2. From F we construct another worldsheet w' = f(F). Provided the balls B_ǫ(v) are chosen small enough, the intersections w ∩ κ'_+ and w' ∩ κ'_+ are related by a homeomorphism h ∈ Homeo(κ'_+). Inside the balls B_ǫ(v), the worldsheet w' consists of disks that are bounded by loops in ∂B_ǫ(v). Up to homeomorphisms h_v ∈ Homeo(B_ǫ(v)) that leave γ_v invariant, there is precisely one way to cover these loops by disks in B_ǫ(v) such that property (ii) is met. For sufficiently small ǫ, the intersection w ∩ B_ǫ(v) has property (ii) as well, and must be related to w' ∩ B_ǫ(v) by a homeomorphism h_v ∈ Homeo(B_ǫ(v)). Thus, there is a homeomorphism h ∈ Homeo(κ') that relates w and w', and w' = f(F) = w. This shows that f is surjective and completes the proof.

VI.
STRING REPRESENTATION OF 3D SU(2) LATTICE YANG-MILLS THEORY

By using the correspondence between spin foams and worldsheets, we can now translate the exact dual representations (5) and (7) into exact string representations of 3d SU(2) Yang-Mills theory. The string representation is defined on a complex κ' that arises from a framing of the 2-skeleton of a tessellation κ̃ by cubes and truncated rhombic dodecahedra (see Fig. 1, Fig. 4 and Fig. 6a). Under the framing, faces f of the 2-skeleton become 3-cells f' of the framed complex κ'. Vertices v turn into vertices v' ⊂ κ', where six framed 3-cells f' intersect. The set of these vertices v' is denoted by V'. The 3-cells f' of κ' belong to two groups: 3-cells f' that originate from square faces f of κ̃ (and correspond to faces f ∈ κ), and those arising from hexagonal faces in κ̃. Worldsheets and strings are defined as certain surfaces and curves in the framed complex κ' (see sec. V). With these conventions, the partition function is given by a sum over closed worldsheets:

    Z = \sum_{w \,|\, \partial w = \emptyset} \Big[ \prod_{f' \subset \kappa'} (N_{f'} + 1) \Big] \Big[ \prod_{v' \subset V'} A_{v'}(\{N_{f'}/2\}) \Big] \Big[ \prod_{f \subset \kappa} (-1)^{N_{f'}} \, e^{-\frac{1}{2\beta} N_{f'} (N_{f'} + 2)} \Big]    (15)

The amplitude has three contributions: every framed face contributes with a factor N_{f'} + 1, where N_{f'} is the number of components of the worldsheet in f'. In addition, square faces give an exponential and a sign factor (−1)^{N_{f'}}. For each vertex v' ∈ V', we get a 6j-symbol

    A_{v'}(\{N_{f'}/2\}) = \begin{Bmatrix} N_{f'_1}/2 & N_{f'_2}/2 & N_{f'_3}/2 \\ N_{f'_4}/2 & N_{f'_5}/2 & N_{f'_6}/2 \end{Bmatrix}    (16)

where the f'_i are the six 3-cells that intersect at v'. For the expectation value of two Polyakov loops (C_1, j) and (C_2, j) (as defined in sec. IV B), an additional modification of the 2-skeleton was required: we insert a sequence of triangles along two loops C̃_1 and C̃_2 (see Fig. 3). Under the framing, the two loops become ribbons.
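The face weights in (15) are exactly those of the spin foam sum (5) rewritten in the occupation numbers N_{f'} = 2 j_f: the dimension 2j + 1 becomes N + 1, the sign (−1)^{2j} becomes (−1)^N, and the Casimir exponent (2/β) j(j+1) becomes N(N+2)/(2β). A quick numerical check of this change of variables (illustrative only; the function names are ours, not the paper's):

```python
import math

def face_weight_spin(j: float, beta: float) -> float:
    # Face weight in spin variables, as in the spin foam sum (5).
    return (2 * j + 1) * (-1) ** int(2 * j) * math.exp(-(2.0 / beta) * j * (j + 1))

def face_weight_occupation(n: int, beta: float) -> float:
    # Same weight in occupation-number variables, as in the worldsheet sum (15).
    return (n + 1) * (-1) ** n * math.exp(-(1.0 / (2 * beta)) * n * (n + 2))

beta = 4.0
for n in range(0, 7):        # n = 2j runs over occupation numbers 0, 1, 2, ...
    j = n / 2.0
    assert math.isclose(face_weight_spin(j, beta), face_weight_occupation(n, beta))
```

The assertions pass for integer and half-integer spins alike, confirming that (15) is simply (5) in the N variables.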
The expectation value of the Polyakov loops is equal to a sum over worldsheets that are bounded by 2j strings along the first ribbon C̃_1 × (0, 1) and by 2j strings along the second ribbon C̃_2 × (0, 1). Denoting these strings as γ_{C̃_1,2j} ∪ γ_{C̃_2,2j}, the sum takes the form

    \langle \mathrm{tr}_j U_{C_1} \, \mathrm{tr}_j U_{C_2} \rangle = \sum_{w \,|\, \partial w = \gamma_{\tilde{C}_1,2j} \cup \gamma_{\tilde{C}_2,2j}} \Big[ \prod_{f' \subset \kappa'} (N_{f'} + 1) \Big] \Big[ \prod_{v' \subset V'} A_{v'}(\{N_{f'}/2\}) \Big] \Big[ \prod_{f \subset \kappa} (-1)^{N_{f'}} \, e^{-\frac{1}{2\beta} N_{f'} (N_{f'} + 2)} \Big] .    (17)

The difference to (15) consists of the modification of the complex and the boundary condition ∂w = γ_{C̃_1,2j} ∪ γ_{C̃_2,2j}. The attachment of triangles along C̃_1 ∪ C̃_2 creates two types of new vertices in κ': vertices in the middle of framed hexagons along the ribbons, and vertices in the middle of the boundary between such hexagons. In the first case, the vertex amplitude is trivial, i.e.

    A_{v'} = 1 .    (18)

To the second type of vertex we attribute a factor

    A_{v'} = (-1)^{(N_{f'_3} - N'_{f'_3})/2} (-1)^{(N_{f'_1} - N'_{f'_1})/2} (-1)^{(N_{f'_1} + N_{f'_3} + N_{f'_2} + 2j)/2} \begin{Bmatrix} N_{f'_1}/2 & N_{f'_3}/2 & N_{f'_2}/2 \\ N'_{f'_3}/2 & N'_{f'_1}/2 & j \end{Bmatrix}    (19)

The labelling is analogous to the labelling by spins in eq. (10). In this string representation, N-ality dependence and string breaking take on a very concrete form. For spin j = 1/2, the boundary string consists of two loops γ_{C̃_1,1} and γ_{C̃_2,1}: one along the ribbon C̃_1 × (0, 1) and the other one along the ribbon C̃_2 × (0, 1). Since every worldsheet has to be bounded by the string γ_{C̃_1,1} ∪ γ_{C̃_2,1}, there is necessarily a connected component of the worldsheet that connects the boundary strings γ_{C̃_1,1} and γ_{C̃_2,1}. The string between quarks is "unbroken". When we go to j = 1, on the other hand, we have a pair γ_{C̃_1,2} of strings along C̃_1 × (0, 1) and a pair γ_{C̃_2,2} of strings along C̃_2 × (0, 1). In this case, the four single strings can be either connected by two surfaces that go across the distance between the Polyakov loops, or each pair is connected to itself by a tube-like surface.
In the latter case, the string between quarks is "broken". As we go to higher spins, the worldsheet can consist of several extended surfaces, several tube-like surfaces or a mixture of both.

VII. DISCUSSION

In this paper, we showed that 3d SU(2) lattice Yang-Mills theory can be cast in the form of an exact string representation. Our starting point was the exact dual (or spin foam) representation of the lattice gauge theory. We demonstrated that spin foams can be equivalently described as self-avoiding worldsheets of strings on a framed lattice. This lattice arose in two steps: we replaced the original cubic lattice by a tessellation, where at every edge only three faces intersect. Then, we took the 2-skeleton of this complex, and framed (or thickened) it by choosing an open neighbourhood of it in R³. We proved that there is a bijection between a subset of surfaces in the framed complex and spin foams in the unframed complex. This allowed us to translate the partition function from a sum over spin foams into a sum over closed worldsheets. The expectation value of two Polyakov loops with spin j became a sum over worldsheets that are bounded by 2j strings along each loop. To our knowledge, this is the first example of an exact and fully explicit string representation of SU(2) lattice Yang-Mills theory in three dimensions⁴. Not surprisingly, it differs from a simple Nambu-Goto string. When a worldsheet does not run more than once through faces (i.e. when N_{f'} ≤ 1), the 6j-symbols in the amplitude become trivial and the exponent in (15) is proportional to the area of the worldsheet. In these cases, the weighting resembles that of the Nambu-Goto string. In general, however, a worldsheet intersects several times with the same cell, and then we have an interaction due to nonlinear dependences on N_{f'}. That is, in addition to interactions by merging and splitting, there is an interaction of directly neighbouring strings.
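The area-law statement for non-overlapping worldsheets can be made concrete: if every occupation number is 0 or 1, each occupied square face contributes the same factor −2 e^{−3/(2β)} (the face weight at j_f = 1/2), so the product over faces depends only on the occupied area A. A small illustrative check (not from the paper; the names are ours):

```python
import math

def face_factor(n: int, beta: float) -> float:
    # Weight of a single square face in the worldsheet sum (15).
    return (n + 1) * (-1) ** n * math.exp(-n * (n + 2) / (2 * beta))

def worldsheet_face_weight(occupations, beta: float) -> float:
    # Product of face factors for a given list of occupation numbers N_f'.
    w = 1.0
    for n in occupations:
        w *= face_factor(n, beta)
    return w

# For a worldsheet that passes at most once through each face (all N <= 1),
# the weight reduces to (-2)^A * exp(-(3/(2*beta)) * A), where A is the
# number of occupied faces: an area law, as for a Nambu-Goto-like string.
beta = 3.0
occ = [1, 1, 0, 1, 0]
A = sum(occ)
expected = (-2.0) ** A * math.exp(-(3.0 / (2 * beta)) * A)
assert math.isclose(worldsheet_face_weight(occ, beta), expected)
```

As soon as some N_{f'} exceeds 1, the weight is no longer a function of the area alone, which is the nonlinear interaction of neighbouring strings described above.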
Note that this does not preclude the possibility that a Nambu-Goto string gives a good effective description in special cases or regimes. It is interesting to compare this result to the AdS-CFT correspondence, where the gauge-string duality is constructed by completely different methods. One should also observe the difference between our "non-abelian" worldsheets and the surfaces that arise in abelian lattice gauge theory. In the case of U(1), the theory can be transformed to a sum over closed 2-chains, and in this sense one has a sum over surfaces. The worldsheets of our string representation are of the same type as long as N_{f'} ≤ 1. When the occupation number increases, however, the surfaces can be "jammed" against each other along faces without being "added" like abelian 2-chains. At a practical level, the present worldsheet picture could be useful for analyzing the dual representation. It could be helpful, for example, when thinking about "large" moves in Monte Carlo simulations [35]: by inserting an entire worldsheet into a given worldsheet, one can create a non-local change in spin foams that is compatible with the spin-coupling conditions. A possible shortcoming of the present work is the restriction on the shape of surfaces. It was needed in order to establish the bijection between worldsheets and spin foams. From a mathematical perspective, it would be more elegant to admit arbitrary compact self-avoiding surfaces, and to characterize spin foams as certain topological invariants. We hope to obtain such a characterization in future work.

Figure 1: Tessellation of R³ by cubes and truncated rhombic dodecahedra.

Figure 2: Zig-zag path of the Polyakov loops C_1 and C_2 in a 2d slice of the lattice κ. The arrows indicate how lattice points are identified.

Figure 4: Under the framing, three faces of κ̃ along an edge become three 3-cells that intersect along 2-cells.

Figure 5: A surface S induces loops in the boundary of 3-cells of κ'.
Figure 6: (a) A closed ball B_ǫ(v) in κ' around a vertex v where six framed cells meet. The cells of κ' induce a cell structure in the ball. The resulting cell complex is topologically equivalent to the complex in Fig. 6b. (b) A tetrahedron in R³ with an open triangle at the bottom, and triangles removed on the three other sides. The boundary of the ball B_ǫ(v) is mapped onto the bottom triangle. Solid lines delineate the boundaries between 3-cells in the interior of the tetrahedron. The three thick dots indicate punctures. The three missing triangles in the boundary form a fourth puncture.

Figure 7: Example of an induced loop in the boundary of the ball B_ǫ(v).

Footnotes:
1. Recently, the same result was obtained very efficiently by the use of Kauffman-Lins spin networks [35].
2. The use of zig-zag paths is not essential for the result of this paper. We choose these paths for convenience, since in this case the amplitudes are already known from ref. [36].
3. The embedding implies, in particular, that the surface does not intersect with itself.
4. In the case of 2d QCD, an exact string representation was found by Gross and Taylor [37, 38].

Acknowledgments

We thank Wade Cherrington, Dan Christensen, Alejandro Perez and Carlo Rovelli for discussions. This work was supported in part by the NSF grant PHY-0456913 and the Eberly research funds.

References

[1] A.M. Polyakov. Gauge fields and strings. Harwood, 1987.
[2] A.M. Polyakov. Confinement and liberation. 2004, hep-th/0407209.
[3] A.M. Polyakov. Confining strings. Nucl. Phys., B486:23-33, 1997, hep-th/9607049.
[4] D. Antonov. String representation and nonperturbative properties of gauge theories. Surveys High Energ. Phys., 14:265-355, 2000, hep-th/9909209.
[5] T. Eguchi and K. Nishijima, editors. Broken symmetries: selected papers of Y. Nambu. World Scientific, Singapore, 1995.
[6] L. Susskind. Dual symmetric theory of hadrons. 1. Nuovo Cim., A69:457-496, 1970.
[7] G. Frye, C.W. Lee, and L. Susskind. Dual-symmetric theory of hadrons. 2. Baryons. Nuovo Cim., A69:497-507, 1970.
[8] D.B. Fairlie and H.B. Nielsen. An analog model for KSV theory. Nucl. Phys., B20:637-651, 1970.
[9] J.H. Schwarz. String theory: The early years. 2000, hep-th/0007118.
[10] K.G. Wilson. Confinement of quarks. Phys. Rev., D10:2445-2459, 1974.
[11] A.M. Polyakov. Gauge fields as rings of glue. Nucl. Phys., B164:171-188, 1980.
[12] A.A. Migdal. QCD: Fermi string theory. Nucl. Phys., B189:253, 1981.
[13] M. Lüscher, K. Symanzik, and P. Weisz. Anomalies of the free loop wave equation in the WKB approximation. Nucl. Phys., B173:365, 1980.
[14] O. Alvarez. The static potential in string models. Phys. Rev., D24:440, 1981.
[15] J.F. Arvis. The exact q anti-q potential in Nambu string theory. Phys. Lett., B127:106, 1983.
[16] J. Ambjorn, B. Durhuus, and T. Jonsson. Quantum geometry. A statistical field theory approach. Cambridge Monogr. Math. Phys., Cambridge, 1997.
[17] L. Brink, P. Di Vecchia, and P.S. Howe. A locally supersymmetric and reparametrization invariant action for the spinning string. Phys. Lett., B65:471-474, 1976.
[18] L. Brink, P. Di Vecchia, and P.S. Howe. A Lagrangian formulation of the classical and quantum dynamics of spinning particles. Nucl. Phys., B118:76, 1977.
[19] A.M. Polyakov. Quantum geometry of bosonic strings. Phys. Lett., B103:207-210, 1981.
[20] G. 't Hooft. A planar diagram theory for strong interactions. Nucl. Phys., B72:461, 1974.
[21] J.M. Maldacena. The large N limit of superconformal field theories and supergravity. Adv. Theor. Math. Phys., 2:231-252, 1998, hep-th/9711200.
[22] S.S. Gubser, I.R. Klebanov, and A.M. Polyakov. Gauge theory correlators from non-critical string theory. Phys. Lett., B428:105-114, 1998, hep-th/9802109.
[23] E. Witten. Anti-de Sitter space and holography. Adv. Theor. Math. Phys., 2:253-291, 1998, hep-th/9802150.
[24] O. Aharony, S.S. Gubser, J.M. Maldacena, H. Ooguri, and Y. Oz. Large N field theories, string theory and gravity. Phys. Rept., 323:183-386, 2000, hep-th/9905111.
[25] G. Münster. High temperature expansions for the free energy of vortices, respectively the string tension in lattice gauge theories. Nucl. Phys., B180:23, 1981.
[26] J.-M. Drouffe and J.-B. Zuber. Strong coupling and mean field methods in lattice gauge theories. Phys. Rept., 102:1, 1983.
[27] R. Anishetty, S. Cheluvaraja, H.S. Sharatchandra, and M. Mathur. Dual of three-dimensional pure SU(2) lattice gauge theory and the Ponzano-Regge model. Phys. Lett., B314:387-390, 1993, hep-lat/9210024.
[28] D. Diakonov and V. Petrov. Yang-Mills theory in three dimensions as quantum gravity theory. J. Exp. Theor. Phys., 91:873-893, 2000, hep-th/9912268.
[29] I.G. Halliday and P. Suranyi. Duals of nonabelian gauge theories in d-dimensions. Phys. Lett., B350:189-196, 1995, hep-lat/9412110.
[30] R. Oeckl and H. Pfeiffer. The dual of pure non-abelian lattice gauge theory as a spin foam model. Nucl. Phys., B598:400-426, 2001, hep-th/0008095.
[31] J.C. Baez. Spin foam models. Class. Quant. Grav., 15:1827-1858, 1998, gr-qc/9709052.
[32] C. Itzykson and J.M. Drouffe. Statistical field theory. Vol. 1: From Brownian motion to renormalization and lattice gauge theory. Cambridge University Press, Cambridge, 1989.
[33] F. Conrady. Geometric spin foams, Yang-Mills theory and background-independent models. 2005, gr-qc/0504059.
[34] P. Menotti and E. Onofri. The action of SU(N) lattice gauge theory in terms of the heat kernel on the group manifold. Nucl. Phys., B190:288, 1981.
[35] J.W. Cherrington, D. Christensen, and I. Khavkine. Dual computations of non-abelian Yang-Mills on the lattice. 2007, arXiv:0705.2629 [hep-lat].
[36] F. Conrady. Dual representation of Polyakov loop in 3d SU(2) lattice Yang-Mills theory. 2007, arXiv:0706.3422 [hep-lat].
[37] D.J. Gross and W. Taylor. Two-dimensional QCD is a string theory. Nucl. Phys., B400:181-210, 1993, hep-th/9301068.
[38] D.J. Gross and W. Taylor. Two-dimensional QCD and strings. 1993, hep-th/9311072.
[]
[ "Dust evolution in protoplanetary discs and the formation of planetesimals What have we learned from laboratory experiments?", "Dust evolution in protoplanetary discs and the formation of planetesimals What have we learned from laboratory experiments?" ]
[ "Jürgen Blum " ]
[]
[]
After 25 years of laboratory research on protoplanetary dust agglomeration, a consistent picture of the various processes that involve colliding dust aggregates has emerged. Besides sticking, bouncing and fragmentation, other effects, like, e.g., erosion or mass transfer, have now been extensively studied. Coagulation simulations consistently show that µm-sized dust grains can grow to mm-to cm-sized aggregates before they encounter the bouncing barrier, whereas sub-µm-sized water-ice particles can directly grow to planetesimal sizes. For siliceous materials, other processes have to be responsible for turning the dust aggregates into planetesimals. In this article, these processes are discussed, the physical properties of the emerging dusty or icy planetesimals are presented and compared to empirical evidence from within and without the Solar System. In conclusion, the formation of planetesimals by a gravitational collapse of dust "pebbles" seems the most likely.
DOI: 10.1007/s11214-018-0486-5
PDF: https://arxiv.org/pdf/1802.00221v1.pdf
arXiv: 1802.00221
1 Feb 2018

Keywords: Protoplanetary dust · Planetesimals · Planet formation

Introduction

It is generally assumed that dust in protoplanetary discs (PPDs) grows from initial sizes of ∼ 0.1−1 µm to planetesimals. The latter are objects with sizes in the range ∼ 1−1000 km that possess sufficient gravitational attraction to allow further accretion of more material in subsequent collisions, so that they finally end up in planetary embryos and planets. For bodies smaller than planetesimals, gravitational accretion is not possible in two-body collisions, because typical collision velocities far exceed their gravitational escape speed.
Thus, planetesimals play an essential intermediate role in planet formation. Mutual collisions among protoplanetary dust particles are important for our understanding of the build-up of larger bodies. Based upon the pioneering work of Weidenschilling (1977), Weidenschilling and Cuzzi (1993) present a comprehensive overview of relative particle motion in the disc. For individual dust grains or small aggregates with diameters smaller than ∼ 10 − 100 µm, Brownian motion is the dominating source for collisions. Typical collision velocities in this regime are < ∼ 1 mm s −1 . Aggregates larger than ∼ 10 − 100 µm are subject to systematic drift motions relative to the gas, which causes relative velocities, and, thus, collisions, among particles of different sizes. One cause for the relative drift between dust and gas lies in the sub-Keplerian orbital velocity of the pressure-supported gas, which typically rotates around the central star slower than with Keplerian velocity by ∆v ∼ c 2 /v K , with c and v K being the local sound speed of the gas and the Keplerian velocity, respectively. As the dust aggregates do not feel the pressure support, their equilibrium orbital velocity would be Keplerian in the absence of a gaseous nebula. However, in the presence of the gas, small dust aggregates with gas-dust coupling times smaller than the orbital time scale are forced by the gas to move with sub-Keplerian velocity and, thus, feel a net inward-directed force, which causes a radial drift velocity that increases with increasing aggregate size. Following Weidenschilling and Cuzzi (1993), these radial drift velocities reach a maximum value of ∼ 60 m s −1 for bodies with ∼ 1 m diameter, which limits the lifetimes of these bodies to ∼ 100 years at 1 au. In the particular nebula model used by Weidenschilling and Cuzzi (1993), the gas-grain coupling time of m-sized aggregates equals the orbital time scale at 1 au. 
Aggregates larger than about 1 m travel essentially with Keplerian orbital speeds so that they feel a steady headwind of the slower-orbiting gas. As the frictional effect of this headwind depends on the surface-to-mass ratio of the dust aggregates, the resulting inward drift velocity decreases with increasing aggregate size in this size regime. Aggregates with different dust-gas coupling times (i.e. different sizes for a given porosity) thus differentially drift in the radial direction, which causes collisions among these particles. For the same reason, also in the azimuthal (i.e. orbital-velocity) direction, dust aggregates possess relative velocities to one another, which also leads to collisions. An example of the resulting collision velocities can be seen in Fig. 1. Another cause for systematic drift of dust particles relative to the gas is the sedimentary motion of the particles towards the disc midplane. Also here, dust aggregates with larger surface-to-mass ratios sediment slower than those with smaller ones so that collisions among aggregates with different sizes result. This effect is greater for higher elevations above the midplane, due to a higher downward-directed force component and lower gas densities. Naturally, dust particles in the nebula midplane are not affected by sedimentation so that a quantification requires knowledge about the position above the midplane. Finally, gas turbulence plays also an important role in the evolution of dust aggregates, because it also causes dust particles to collide. Due to the nature of the turbulence, it causes stochastic particle motion, particularly (but not exclusively) for larger dust aggregates, depending also on the strength of the turbulence. Thus, also dust aggregates of identical aerodynamic properties (or sizes) can collide. 
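The drift relations above can be put into numbers. The following is a rough sketch of the sub-Keplerian velocity lag Δv ∼ c²/v_K at 1 au; the sound speed used here is an assumed illustrative value, not one taken from the text:

```python
import math

# Illustration of the sub-Keplerian velocity offset, Δv ~ c^2 / v_K.
# The sound speed below (c ~ 1 km/s at 1 au) is an assumed value.

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30     # solar mass [kg]
au = 1.496e11        # astronomical unit [m]

def keplerian_velocity(r, M=M_sun):
    """Circular Keplerian orbital speed at radius r."""
    return math.sqrt(G * M / r)

def sub_keplerian_offset(c_s, r, M=M_sun):
    """Order-of-magnitude lag of pressure-supported gas, Δv ~ c^2 / v_K."""
    return c_s**2 / keplerian_velocity(r, M)

v_K = keplerian_velocity(au)          # ~3e4 m/s
dv = sub_keplerian_offset(1.0e3, au)  # assumed sound speed c ~ 1 km/s
print(f"v_K = {v_K:.3g} m/s, dv = {dv:.3g} m/s")
```

With these assumptions the lag comes out at a few tens of m/s, consistent with the maximum radial drift speeds of order 60 m/s quoted above.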
Dust particles condense as refractory materials (oxides, metals, silicates) in the hot inner disc, semi-volatile materials (carbonaceous and organic matter) in the middle disc, and ices (H 2 O, CO 2 , CO, NH 3 , CH 4 ) in the cold outer disc, respectively. Due to the findings by the Rosetta mission, particularly that the cometary nucleus possesses a high D/H ratio (Altwegg et al. 2015) and contains a high abundance of molecular O 2 (Bieler et al. 2015), it is also possible that the water ice in the outer parts of the solar nebula never reached temperatures high enough to vaporise it so that cometary water ice might be a pre-solar component of the condensed matter in the Solar System. Formation models of protoplanetary discs predicted this (Visser et al. 2009). Over the past 25 years, a number of experimental investigations have tried to shed light on the outcomes of protoplanetary dust collisions. This paper summarizes the knowledge gained by these laboratory and microgravity experiments. Moreover, the possible growth mechanisms towards planetesimals, resulting from the collision models, will be presented. Finally, predictions about planetesimal properties from the corresponding growth models will be compared with empirical evidence. Outcomes of laboratory experiments The systematic search for a collision model of protoplanetary dust started with the work by Blum and Münch (1993) who showed that collisions among mm-sized dust aggregates consisting of µm-or nm-sized silicate grains do not result in sticking, but rather in bouncing (for impact velocities between 0.15 and ∼ 1 m s −1 ) or fragmentation (for impact velocities > ∼ 1 m s −1 ). Since then, a considerable number of papers have been published that expanded the parameter space to dust (aggregate) sizes between ∼ 1 µm and ∼ 10 cm and the whole range of size ratios between projectile and target, and impact velocities between ∼ 10 −3 m s −1 and ∼ 100 m s −1 , for silicate monomer grains of ∼ 1 µm diameter. 
The review by Blum and Wurm (2008) summarizes the state of the art about 10 years ago. All available information at that time was then used to derive the first comprehensive collision model for silicate aggregates. Since then, numerous new experiments have been performed, expanding, for instance, the parameter space to water ice as well as CO2 ice (Musiolik et al. 2016a) and CO2−H2O ice mixtures (Musiolik et al. 2016b). In the following, I will summarize our knowledge about the possible collisional outcomes and their dependence on projectile and target size as well as grain material and monomer size. 2.1 General collisional outcomes I will start with the results for aggregates consisting of µm-sized SiO2 grains, because for this material the body of empirical evidence is satisfactory. Outcomes for similar-sized collision partners. When the size ratio between the colliding dust aggregates is not too different from unity, the following general collision outcomes have been identified: -Sticking. When the collision energy is small compared to the van der Waals binding energy of the colliding dust aggregates, sticking occurs inevitably if the collisions are at least partially inelastic (Dominik and Tielens 1997). For higher impact energies, the degree of inelasticity determines the fate of the colliding dust aggregates. Three processes have been identified that lead to the complete transfer of both colliding bodies into a more massive dust aggregate: hit-and-stick behaviour for very small impact velocities, sticking with deformation/compaction of the aggregates, and deep penetration of a somewhat smaller projectile into a larger target aggregate. Supporting laboratory and microgravity experiments were performed by Krause and Blum (2004); Langkowski et al. (2008); Weidling et al. (2012); Kothe et al. (2013); Weidling and Blum (2015); Brisset et al. (2016, 2017); Whizin et al. (2017).
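Hit-and-stick growth (the "Sticking" outcome above) yields fractal aggregates with mass m ∝ a^Df. The following is a minimal numerical sketch of the implied bulk-density scaling ρ ∝ a^(Df−3); the monomer radius, monomer density and choice of Df are illustrative assumptions, not measured values from the text:

```python
# Sketch of fractal mass-size scaling for hit-and-stick aggregates:
# m(a) = m0 * (a/a0)^Df, so the bulk density falls as (a/a0)^(Df-3).
# Monomer values (0.5 µm radius, 2000 kg/m^3) and Df = 1.8 are assumed.

def bulk_density(a, a0=5e-7, rho0=2000.0, Df=1.8):
    """Bulk density [kg/m^3] of a fractal aggregate of radius a [m]."""
    return rho0 * (a / a0) ** (Df - 3.0)

for a in (1e-6, 1e-5, 1e-4):
    print(f"a = {a:.0e} m -> rho = {bulk_density(a):.3g} kg/m^3")
```

For Df < 3 the density drops steeply with size, which is why fractal aggregates are so strongly coupled to the gas despite their growing radii.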
When colliding dust particles or aggregates are in the hit-and-stick regime (for which the impact energy is insufficient to cause rolling of the dust grains upon impact, see Dominik and Tielens (1997)), the corresponding aggregates develop a fractal morphology (Dominik and Tielens 1997; Kempf et al. 1999; Krause and Blum 2004; Blum 2006), with a fractal dimension in the range Df ≈ 1.1 . . . 1.9, depending on the gas pressure if the collisions are caused by Brownian motion (Paszun and Dominik 2006). For higher impact speeds, the growing aggregates become more compact (Dominik and Tielens 1997; Paszun and Dominik 2009; Wada et al. 2008), but possibly remain fractal with fractal dimensions of Df ≈ 2.5 (Wada et al. 2008). A general description of the outcomes in hierarchical coagulation is provided by Dominik et al. (2016). -Bouncing. When the amount of energy dissipation during the collision is insufficient to allow sticking, but still not large enough to disrupt the colliding bodies (see below), the collisions result in bouncing. Bouncing was found in a variety of experimental investigations (Blum and Münch 1993; Heißelmann et al. 2007; Weidling et al. 2012; Kothe et al. 2013; Landeck 2016; Brisset et al. 2016, 2017). Although bouncing might seem to be important only for halting growth or stretching growth time scales, it turned out that bouncing collisions lead to the gradual compaction of dust aggregates. Weidling et al. (2009) experimentally found that the equilibrium filling factor (or packing density) of bouncing aggregates is ∼ 0.36. -Fragmentation. Experiments have also shown that collisions at high speeds lead to the fragmentation of the colliding dust aggregates (Blum and Münch 1993; Beitz et al. 2011; Schräpler et al. 2012; Deckers and Teiser 2013). Recently, Bukhari analysed the influence of the aggregate size, size ratio and impact velocity on the outcome of fragmenting collisions.
They could show that the mass ratio of the largest fragment to the original aggregate mass decreases with increasing impact velocity and that also the slope of the power law of the fragment size distribution is velocity dependent. On top of that, the onset of fragmentation, i.e. the transition velocity from bouncing to fragmentation, increases for decreasing aggregate mass. -Abrasion. In recent microgravity experiments on low-velocity collisions in a manyparticle system, it was discovered that cm-sized dust aggregates suffer a gradual mass loss, although their collision velocities were smaller than the threshold velocity for fragmentation (Kothe 2016). Abrasion was not observed for velocities below ∼ 0.1 m s −1 and then gradually increased in strength with increasing velocity. The preliminary results by Kothe (2016) indicate that the abrasion efficiency is relatively weak so that aggregates need on the order of 1000 collisions before they are completely destroyed. Outcomes when small projectiles hit large targets. When a rather small projectile dust aggregate hits a much larger target aggregate, the following additional collision outcomes may occur: -Mass transfer. Above the fragmentation limit of the small projectile aggregate, part of its mass is permanently transferred to the target aggregate so that the latter gains mass. In the past years, many experimental investigations have studied mass transfer (Wurm et al. 2005b;Teiser and Wurm 2009a,b;Beitz et al. 2011;Meisner et al. 2013;Deckers and Teiser 2014;. Typical efficiencies of the mass-transfer process are between a few and ∼ 50 per cent and the deposited mass is compressed to a volume filling factor of typically 0.3-0.4 Meisner et al. 2013). -Cratering. Experiments have shown that in the same impact-velocity range as mass transfer, but for larger projectile aggregate, the target agglomerates lose mass by cratering (Wurm et al. 2005a;Paraskov et al. 2007). 
Similar to the classical high-velocity-impact cratering effect, the impinging projectile excavates more mass than it transfers to the target, so that the target's mass budget is negative. Cratering can be regarded as a transition process to fragmentation of the target. Bukhari have shown that the (relative) mass loss of the target with increasing impact energy of the projectile is a continuous function of the impact energy. The amount of excavated crater mass per impact can be substantial, with up to 35 times the projectile mass, depending on the strength of the target material (Wurm et al. 2005a; Paraskov et al. 2007). -Erosion. Again for similar impact velocities, but much smaller projectile aggregates or dust monomers, the effect of erosion has been identified in the laboratory (Schräpler and Blum 2011; Schräpler et al. 2018) and by numerical simulations (Seizinger et al. 2013; Krijt et al. 2015). The erosion efficiency increases with increasing impact velocity and decreases with increasing impactor mass, so that there is a smooth transition to cratering. Seizinger et al. (2013) argue that the erosion efficiency of monomers and small aggregates is higher than that of large aggregate projectiles, because some of the initially eroded mass gets pushed into the target aggregate again and is, thus, re-accreted by the target in impacts with large aggregates. Erosion has also been observed for impacts of µm-sized ice particles into ice agglomerates. The detailed physics of most of the before-mentioned collisional outcomes, i.e., their dependence on collision velocity, size and size ratio of the dust aggregates, their porosity or monomer-particle size, has been compiled in the reviews by Blum and Wurm (2008) and others. On that basis, the authors of the first complete collision model included all effects known at the time.
They distinguished in their model collisions between compact and highly porous dust aggregates and also differentiated between collisions among similar-sized dust aggregates and those between aggregates of very different sizes, because some of the outcomes only occur above or below a certain size ratio between projectile and target aggregate. Short-versions of this collision model have been developed by, e.g., Windmark et al. (2012b) and Birnstiel et al. (2016). The latest experimental findings since the model will soon be incorporated into a forthcoming dust-aggregate collisions model (see Kothe (2016) for a preliminary version). An example of this model is shown in Fig. 1 where on the axes the sizes of the colliding dust aggregates are shown and the coloured regions denote the collisional outcomes. It must be mentioned that the contours and their boundaries can considerably vary with (i) distance to the central star, (ii) PPD model, (iii) turbulence strength, (iv) monomer material, and (v) monomer size (distribution), respectively (see Sect. 2.2). Influence of material and monomer size The parameter-space coverage for dust aggregates consisting of micrometresized silica monomer grains is quite advanced so that the resulting collision model may be considered relatively reliable. However, the situation is much worse if we consider other abundant materials or nanometre-size constituents of the aggregates. While the latter case can be scaled using dust-collision models, e.g. by Dominik and Tielens (1997); Wada et al. (2008); Lorek et al. (2017), other grain materials require new experimental results at realistic temperatures. Some progress in this field has been made for micrometre-sized water-ice grains and aggregates thereof. Experiments by indicate that the sticking-to-bouncing threshold of icy monomer grains is a factor ∼ 10 higher than the corresponding value for silica grain. The same factor of ∼ 10 can be found for the bouncing-erosion threshold . 
It must be mentioned that the stickiness of small water-ice grains was also found to depend heavily on temperature. While for temperatures below ∼ 200 K the sticking threshold is constant at ∼ 10 m s⁻¹, its value increases steadily for temperatures above ∼ 200 K. It has recently been shown by Gärtner et al. (2017) that the reason for this behaviour is the increase in thickness of a diffuse surface layer around the ice particles for temperatures exceeding ∼ 200 K. Thus, laboratory experiments on the study of protoplanetary ice aggregation have to be performed at low ambient temperatures and vacuum conditions (Gärtner et al. 2017). That the material properties may have an enormous impact on the growth behaviour of protoplanetary dust has long been speculated. Low-velocity impact experiments have been performed with "interstellar organic matter analogs". They found that in the right temperature regime (around ∼ 250 K), this material possesses an enhanced stickiness. However, for lower and higher temperatures, the material was either too hard or not viscous enough to allow for sufficient sticking. Flynn et al. (2013) found that individual monomer grains of a chondritic porous interplanetary dust particle of possible cometary origin were coated with a ∼ 100 nm thick organic mantle. They estimated the tensile strength of the coated µm-sized grains to be at least several hundred Pa, comparable to or even higher than the values measured for silica particles. Thus, we may expect that organic coatings may aid planetesimal formation through sticking collisions. However, much more work in this field is required before the role of organic matter is understood. Besides water ice, there are only few experimental data on the collision outcome of micrometre-sized particles and their aggregates consisting of other materials. Recently, first laboratory experiments on the collision properties of CO2 aggregates have been reported by Musiolik et al. (2016a).
It seems that their sticking threshold does not differ from that of SiO2. Pathways to planetesimals Based on the above described collision model for protoplanetary dust aggregates, three pathways to planetesimals seem to be feasible, which will be described, including their pros and cons, in the following. 3.1 Formation of planetesimals by the gentle gravitational collapse of a concentrated cloud of dust "pebbles" If the majority of the dust grains consist of siliceous material, Monte-Carlo simulations of the protoplanetary aggregation process have shown that after an initial process of fractal growth, sticking and bouncing collisions lead to the formation of rather compact mm- to cm-sized aggregates with volume filling factors of ∼ 0.36 within ∼ 10⁴ orbital time scales. Further growth is inhibited by the bouncing barrier and no other growth process (i.e., mass transfer) can be reached. It turned out that the maximum aggregate size depends on (i) the PPD model, with a minimum mass solar nebula model resulting in the biggest aggregates, (ii) the distance to the central star and (iii) the composition of the dust (Lorek et al. 2017). Increasing stellar distances and increasing dust-to-ice ratios result in smaller maximum aggregate sizes (Lorek et al. 2017). Moreover, the aggregates are fluffier further away from the central star, for smaller monomer grains and if the dust-to-ice ratio is low (Lorek et al. 2017). In the literature, the dust aggregates in this stage of growth are termed "pebbles". The presence of mm- to dm-sized "pebbles" (and partially the absence of larger particles) was confirmed in a number of PPD observations (van Boekel et al. 2004; D'Alessio et al. 2006; Natta et al. 2007; Birnstiel et al. 2010; Pérez et al. 2012; Trotta et al. 2013; Testi et al. 2014; Pérez et al. 2015; Tazzari et al. 2016; Liu et al. 2017). Under the conditions that 1. the Stokes numbers St of the "pebbles" are in the range St ∼ 10⁻³ . . . 5, 2.
the "metallicity" of the PPD is larger than a Stokes-number-dependent threshold value, and 3. the local dust-to-gas mass ratio is above unity, the streaming instability (SI, see below and Youdin and Goodman (2005) for the original work) can further concentrate the "pebble" cloud until a gravitational collapse leads to the formation of planetesimals. Here, the Stokes number is defined by St = τ f Ω, with τ f and Ω being the gas-dust friction time and the orbital frequency, respectively. The "metallicity" is defined by the vertically-integrated dust-to-gas ratio. A relation between the minimum required "metallicity" and the Stokes number of the aggregates can be found in . An absolute minimum "metallicity" of ∼ 0.015 is required for St ∼ 0.1 to allow the SI to work. The SI is a collective-particle effect in which the frictional feedback from the solid to the gaseous component of the disc is taken into account. In contrast to a single "pebble", a highly-concentrated region of "pebbles" with a local dust-to-gas mass ratio above unity has two major effects: (i) Due to the mutual aerodynamic shielding, the effective cross section of the region is much smaller than for all dust "pebbles" combined. This means that the SI region experiences less headwind and, thus, radially drifts much slower and azimuthally much faster (see Sect. 1). By this, it can catch up with all individual "pebbles" on its orbit (and with those that reach its orbit during their radial inward drift), which are then incorporated into the SI region by which its mass grows. (ii) Owing to the large mass concentration, the SI region accelerates the ambient gas, which further reduces friction and yields a local minimum in radial drift. Numerical simulations have shown that this can lead to the gravitational collapse of the "pebble" cloud (Johansen et al. 2007). Lorek et al. 
(2017) showed that the known collision properties of dust aggregates indeed lead to the formation of "pebbles" with sufficiently large Stokes numbers. Numerical work on the SI has resulted in predictions of the mass-frequency distribution function of planetesimals, which can be fitted by a power law with exponent −1.6 ± 0.1 (Simon et al. 2016;Schäfer et al. 2017;Simon et al. 2017). Benefits The formation of planetesimals by the SI avoids any problems that arise for dust aggregates larger than the "pebble" size, i.e., the bouncing barrier for aggregate sizes > ∼ mm-cm , the fragmentation barrier for impact velocities > ∼ 1 m s −1 (Bukhari , the erosion barrier (Schräpler et al. 2018), or the drift barrier (Weidenschilling 1977). On top of that, the SI is a fast process. As soon as the conditions (see above) are met, the gravitational collapse occurs on the order of tens to a few thousand orbital time scales (Johansen et al. 2007;). Problems As already stated above, the SI requires a minimum "metallicity" and relatively high spatial pre-concentration of the "pebbles". As it is difficult to reach the "optimal" Stokes number for the SI (St ∼ 0.1), they can reach a "minimal" Stokes number of St ∼ 1.5 × 10 −3 , which requires a "metallicity" of 0.03 or higher (Lorek et al. 2017). In turn, such a high "metallicity" requires a partial dissipation of the gaseous PPD of initial solar "metallicity" so that planetesimal formation can only occur at later stages of the disc phase. To reach the required pre-concentration of the "pebbles", various ideas have been suggested, including concentration inside or between turbulence eddies or concentration in pressure bumps (see, e.g., ). Formation of planetesimals by collisional growth An alternative approach to planetesimal formation has been described by Windmark et al. (2012a,b); Garaud et al. (2013); Booth et al. (2018). After reaching the bouncing barrier, the gap to fragmentation, mass transfer or cratering (see Fig. 
1) is overcome by, e.g., assuming a velocity distribution of the dust aggregates. Once new small projectile aggregates are formed in the destructive processes, mass transfer allows some of the dust aggregates to grow to planetesimal sizes. Benefits The processes of fragmentation, cratering, and mass transfer have been empirically proven to exist for dust aggregates so that they can be regarded as robust. Assumptions of velocity distributions seem reasonable, because turbulenceinduced velocities have a stochastic nature and variations in the mass density of aggregates will lead to variations in drift velocities for aggregates of the same mass. At 1 au, 100-m-sized aggregates may form within 1 . . . 5 × 10 4 years (Windmark et al. 2012a;Garaud et al. 2013). Problems Although the time scales for growth to the 100-metre level are reasonably short at 1 au, they are much longer further out in the disc, and the maximum aggregate sizes are thus considerably smaller. Garaud et al. (2013) showed that at 30 au, even after 6 × 10 5 years, the maximum aggregate size is only a few metres. Typically, the growth time scales are so long that radial drift (outside dust traps) limits the maximum size achievable (Booth et al. 2018). Recently, Schräpler et al. (2018) showed that even under the most favourable conditions of growth everywhere in the parameter space shown in Fig. 1, except for the part marked "erosion", no aggregates larger than ∼ 0.1 m can form, due to the overwhelming and self-supporting effect of erosion. Formation of planetesimals consisting of sub-micrometre-sized water-ice particles The biggest obstacles to direct collisional growth to planetesimals are (i) the low velocity for the transition from sticking to bouncing, which limits the "pebble" size before the bouncing barrier is hit, and (ii) the low collision energies required for compaction, which results in fast radial drift time scales. 
To overcome both, smaller and stickier monomer grains can be used (Wada et al. 2009;Okuzumi et al. 2012;Kataoka et al. 2013). In their planetesimal-formation model, Kataoka et al. (2013) assume that the material is dominated by the relatively sticky water ice and that the ice grains possess radii of 0.1 µm, for which compaction is harder to achieve than for 1 µm-sized grains. They calculate the collisional growth path of the resulting ice aggregates at 5 au and 8 au. Following an initial fractal growth phase, which leads to minimal mass densities of ∼ 10 −5 g cm −3 , the aggregates then are compacted in mutual collisions, by the ram pressure of the gas and due to self-gravity. At the end of the simulation, the icy aggregates possess radii of ∼ 10 km and mass densities of ∼ 0.1 g cm −3 . Benefits Due to the choice of very small ice particles with high restructuring impedance and high collision threshold, the bouncing barrier is never reached. The growth time scales to planetesimals are very short (∼ 10 4 years), due to the high porosity and, thus, high capture cross section of the fluffy aggregates (Krijt et al. 2015). As a consequence, radial drift is negligible. Results have been confirmed by Krijt et al. (2015) and Lorek et al. (2017). Problems Relaxing one of the assumptions (high stickiness, high resistance to compaction) does not lead to planetesimal sizes, and the resulting (smaller) aggregates suffer considerable radial drifts. The role of erosion for the survival of icy agglomerates was studied by Krijt et al. (2015), who found that erosion can limit growth if the erosion threshold is smaller than typical relative velocities between small and large aggregates. Empirical data for sub-micron-sized ice particles are still missing. On top of that, the collisional physics of highly fluffy (ice) aggregates has not been studied so far so that judging the collisional outcomes is difficult. 
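The Stokes number St = τ_f Ω, which enters both the SI conditions and the drift behaviour discussed above, can be estimated for a "pebble" in the Epstein drag regime. The midplane gas values below are illustrative assumptions, not disc-model results from the text:

```python
import math

# Epstein-regime Stokes number St = tau_f * Omega for a "pebble", plus a
# check against the SI-friendly range St ~ 1e-3 ... 5 quoted in the text.
# The midplane gas density, sound speed and orbital frequency are assumed.

def stokes_number(a, rho_s, rho_g, c_s, Omega):
    """Epstein drag: tau_f = rho_s * a / (rho_g * v_th), v_th = sqrt(8/pi) * c_s."""
    v_th = math.sqrt(8.0 / math.pi) * c_s
    tau_f = rho_s * a / (rho_g * v_th)
    return tau_f * Omega

# Assumed midplane conditions at a few au (illustrative only):
Omega = 2e-8   # orbital frequency [1/s]
rho_g = 1e-8   # gas density [kg/m^3]
c_s = 7e2      # sound speed [m/s]

St = stokes_number(a=0.01, rho_s=1000.0, rho_g=rho_g, c_s=c_s, Omega=Omega)
si_possible = 1e-3 <= St <= 5
print(f"St = {St:.3g}, in SI-friendly range: {si_possible}")
```

With these assumed values a cm-sized compact pebble lands at St of order 10⁻², inside the range where the streaming instability can operate, provided the metallicity condition is also met.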
Properties of planetesimal formed in the different scenarios Based on our knowledge about the collision physics of dust and ice aggregates, all of the three pathways to planetesimals (see Sect. 3) are in principle feasible (but mind their individual problems, as stated in Sect. 3). Obviously, we need more laboratory experiments and refined collision models to assess whether or not any of the three formation models is really capable of predicting the formation of planetesimals. Alternatively, we may consider the predictions of the diverse models about the physical properties of the resulting planetesimals to better assess the likeliness of the respective planetesimal-formation scenario. Several distinct physical quantities can be estimated with which the three models can be distinguished. These comprise the size of the resulting planetesimals, the volume filling factor (i.e., the fraction of total planetesimal volume actually occupied by matter), the tensile strength (i.e., the internal cohesion of the material), the collisional strength (i.e. the energy required to fragment the colliding bodies such that the biggest surviving mass equals half the original mass), the Knudsen diffusivity (i.e., the resistance to gas flow), and the thermal conductivity, respectively. Table 1 compiles estimates of these values for the three formation models and expands on earlier approaches by . Please mind that the values shown in Table 1 for the gravitational-collapse model are only valid for small planetesimals, because only then the "pebbles" survive the gravitational collapse intact (Wahlberg Jansson and Johansen 2014; Wahlberg Jansson et al. 2017). However, the approach used by Wahlberg Jansson and Johansen (2014); Wahlberg Jansson et al. (2017) ignores feedback of the collapsing dust particles to the gas cloud. 
A more rigorous treatment of the collapsing two-phase-flow problem was performed by Shariff and Cuzzi (2015), but the implications for the fate of the "pebbles" have not been considered yet. In any event, the average lithostatic pressure of a body with radius R = 50 km and an average mass density ρ = 1000 kg m⁻³ is p = (4/15) πGρ²R² = 1.4 × 10⁵ Pa, with G being the gravitational constant, which exceeds the crushing strength of the pebbles. Thus, for planetesimals larger than ∼ 10 km in this formation model, the physical values should approach those of the mass-transfer model. A word of caution is necessary before we use Table 1 for a comparison between the various formation scenarios. While the gravitational-collapse and the mass-transfer model are physically distinct, this is only partly the case for the mass-transfer and the icy-agglomerates model. Both scenarios rely on the intrinsic "stickiness" of the grains, but vary only in the material and size of the dust/ice grains and, thus, potentially in the region of the protoplanetary disc where they can be applied. It is evident from Table 1 that in principle some of the listed parameters are sensitive enough to be used to distinguish between the various formation models. Exceptions might be the thermal conductivity, for which the uncertainties and overlaps of the different models are too large, and the volume filling factor for large planetesimals, which is affected by lithostatic compression. Uncertainties in the quantification of the thermal conductivity, the tensile strength and the fragmentation energy arise from their strong dependencies on material properties and temperature. The latter becomes of utmost importance for the thermal conductivity of planetesimals formed through a gravitational collapse, because here radiative heat transport, which is intrinsically strongly temperature dependent, may dominate over conduction.
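The lithostatic-pressure estimate quoted above, p = (4/15)πGρ²R² for a homogeneous sphere, is easy to verify numerically:

```python
import math

# Check of the average lithostatic pressure quoted in the text,
# p = (4/15) * pi * G * rho^2 * R^2, for R = 50 km and rho = 1000 kg/m^3.

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mean_lithostatic_pressure(R, rho):
    """Volume-averaged self-gravity pressure of a homogeneous sphere."""
    return (4.0 / 15.0) * math.pi * G * rho**2 * R**2

p = mean_lithostatic_pressure(5e4, 1000.0)
print(f"p = {p:.2g} Pa")  # ~1.4e5 Pa, as in the text
```

The result reproduces the 1.4 × 10⁵ Pa figure, confirming that pebbles inside bodies larger than a few tens of km cannot survive uncompacted.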
Empirical evidence

We have seen above that the three formation models of planetesimals allow predictions that can be compared to empirical evidence in our own Solar System or in extrasolar planetary systems. Below, several criteria are listed that allow conclusions to be drawn about the formation process of planetesimals.

1. Size-frequency distribution in the asteroid and Kuiper belt. Both the asteroid belt and the Kuiper belt exhibit a "knee" in the size-frequency distribution around ∼ 100 km. Asteroids smaller than that size are collisional fragments, whereas larger bodies are assumed to be primordial planetesimals. This may indicate that planetesimals were (on average) born big (Morbidelli et al. 2009), but Weidenschilling (2011) pointed out that the current size distribution in the asteroid belt can also be reproduced with sub-km-sized planetesimals.

2. Debris discs. Debris discs are thought to represent the dusty end of a collision cascade among planetesimals. Thus, modelling the collision processes and fitting them to the observed debris-disc brightnesses may provide information about the planetesimal properties in these extrasolar pre-planetary systems. Recently, Krivov et al. (2018) modelled the collisional cascades within debris discs for both the gravitational-collapse and the mass-transfer scenarios of planetesimal formation. As a matter of fact, Krivov et al. (2018) found agreement between observed and predicted debris-disc brightnesses for both models. However, due to the different collisional strengths (see Table 1), the number of small (sub-km-sized) bodies in the two scenarios is very different. Although these bodies are unobservable in debris discs, our own Solar System might provide a clue to whether the mass-transfer or the gravitational-collapse model more likely represents the true formation scenario. Krivov et al. (2018) point out that only the latter is in agreement with the observed low number of sub-km-sized bodies in the Solar System.

3.
Fractal particles in comet 67P. The discovery of fractal particles in comet 67P/Churyumov-Gerasimenko (hereafter 67P) by two Rosetta instruments (Fulle et al. 2015, 2016b; Mannel et al. 2016) can only be explained if the comet consists of larger entities between which the primordial fractal dust aggregates are captured. In the mass-transfer model, the relatively high impact velocities would certainly destroy the pebbles and render them into a compact configuration. The same is true for the icy-planetesimal formation model, which does not provide sufficiently large (cm-sized) void spaces to store the fractal particles until today.

4. Physical properties of comet 67P. It was argued earlier that comets can only be formed through the gravitational-collapse process, because their dust activity requires a gas pressure below the dry dust layer that exceeds the sum of the cohesion and the gravitational force of the dust (Kührt and Keller 1994, 1996; Skorov and Blum 2012; Gundlach and Blum 2016). With the Rosetta mission, a much more detailed investigation of the make-up of cometary nuclei became possible. Recently, Blum et al. (2017) showed that observations of the cometary properties by various Rosetta and Philae instruments (sub-surface and surface temperatures, size-frequency distribution of surface and coma dust, tensile strength) point towards comet 67P consisting of "pebbles" with 3-6 mm radii.

5. General properties of comets. Blum et al. (2017) also showed that a formation of comet 67P by the gravitational collapse of a "pebble" cloud correctly predicts the porosity and continued dust activity of the comet. As already discussed before (see Table 3 in Blum et al. 2017), comets are highly porous objects. Experiments revealed that bouncing dust "pebbles" possess a volume filling factor of ∼ 0.36 (Weidling et al. 2009). Concentrating these dust aggregates by a (gentle) gravitational collapse results in a packing density of ∼ 0.6 (Onoda and Liniger 1990).
Combining these two filling factors results in an overall packing density of ∼ 0.22, which is compatible with estimates for comet 67P (Kofman et al. 2015; Pätzold et al. 2016; Fulle et al. 2016a). The two alternative models fail to predict this value (see Table 1). To sustain gas and dust activity of comets over many apparitions, a process is required that quasi-continuously emits gas and dust in the mass proportion present inside the comet nucleus. To achieve this, the sub-surface gas pressure of the sublimating ices has to overcome the tensile strength of the surface material (Kührt and Keller 1994, 1996; Skorov and Blum 2012; Gundlach and Blum 2016). Only the hierarchical architecture of the comet following its formation through the gravitational collapse of a cloud of "pebbles" predicts sufficiently small tensile strengths (see Table 1) to allow dust activity for realistic temperatures of the sub-surface ices (Skorov and Blum 2012; Gundlach and Blum 2016). As soon as the gas pressure exceeds the tensile strength of the overlying dust layer, the latter is torn off and transported away. Due to the now free access to the ambient vacuum, the increased local outgassing rate decreases the ice temperature so that ice sublimation slows down and the (local) pressure becomes insufficient to continue dust emission, until a sufficiently thick dust layer has formed again, whereupon the process repeats.

6. Evidence for "pebbles" in other comets. The flyby of the EPOXI spacecraft past comet 103P/Hartley revealed "pebbles" of the right size to have formed the comet by the streaming instability and a subsequent gravitational collapse (Kretke and Levison 2015). Moreover, comets leave trails of dust along their orbits with particle sizes large enough not to be subjected to considerable radiation pressure. A recent analysis of the trail profiles of comet 67P indicates dust "pebbles" of millimetre sizes to be the dominant constituents of the trail (Soja et al.
2015), in agreement with earlier estimates by Agarwal et al. (2010) and Fulle et al. (2010).

Conclusion

In this paper, I showed in Sect. 2 that the current state of the art of preplanetary dust-aggregate collisions is such that the initial dust growth by sticking collisions leads to mm- to cm-sized aggregates ("pebbles") for siliceous materials and monomer sizes of ∼ 1 µm. At this stage, inter-aggregate bouncing becomes an important effect, which decreases the porosity of the "pebbles" until they reach filling factors of ∼ 0.36. Two scenarios can explain the formation of planetesimals out of these "pebbles": (i) the gravitational-collapse scenario, which requires a high spatial concentration of the "pebbles" (e.g., by the streaming instability) and a subsequent gravitational collapse, or (ii) the mass-transfer scenario, in which the aggregates further grow by high-velocity collisions between small projectile aggregates and larger target aggregates; here, the projectiles disintegrate upon impact and transfer part of their mass to the target. Pros and cons of the two models are discussed in Sect. 3. For sub-micrometre-sized water-ice monomers, a third scenario for the formation of planetesimals is possible, which relies on the high sticking and restructuring thresholds of very small ice grains. Also for this model, pros and cons are discussed in Sect. 3. The planetesimals resulting from the three formation models possess very different properties, which were shown in Table 1 in Sect. 4. These properties in principle allow a comparison to "real" planetesimals. Such a comparison was made in Sect. 5, where I compiled pieces of evidence that suggest that planetesimals consist of macroscopic dust "pebbles".
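The hierarchical porosity argument used above combines the two experimentally motivated filling factors multiplicatively: the internal filling factor of bounced "pebbles" (∼0.36) and the packing fraction of the pebbles after a gentle gravitational collapse (∼0.6). A minimal sketch of this arithmetic, with both factors taken from the text (the script itself is illustrative only):

```python
# Internal volume filling factor of bounced dust "pebbles" (Weidling et al. 2009)
phi_pebble = 0.36
# Random loose packing of pebbles after a gentle gravitational collapse
# (Onoda and Liniger 1990)
phi_packing = 0.6

# Bulk filling factor of the hierarchical nucleus and its porosity
phi_bulk = phi_pebble * phi_packing
porosity = 1.0 - phi_bulk

print(f"bulk filling factor ~ {phi_bulk:.2f}, porosity ~ {porosity:.2f}")
# prints: bulk filling factor ~ 0.22, porosity ~ 0.78
```

The resulting bulk filling factor of ∼0.22 is the value quoted in the text as compatible with the CONSERT, radio-science, and GIADA estimates for comet 67P.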
The "knee" in the asteroid size distribution, the paucity of sub-km-sized bodies in the Solar System, the discovery of primordial fractal particles on comet 67P, as well as many general physical properties of comets, support the conjecture that planetesimals formed by a smooth gravitational collapse of dust "pebbles", following a spatial concentration of dust aggregates by the streaming instability. Although the Rosetta mission to comet 67P has provided us with many new insights into cometary nuclei, we are still far from a comprehensive and closed picture of their formation, although, as stated above, the gravitational collapse of a "pebble" cloud seems to be a viable assumption. First of all, it is unclear whether the gravitational collapse can lead to km-sized objects at all or initially produces much larger bodies. In the latter case, cometary nuclei must be collisional fragments, which seem to have survived the break-up of their parent bodies morphologically and compositionally quite intact. However, even if planetesimals form small (and by whatever process), it is unclear how they could have survived without undergoing catastrophic collisions (Morbidelli and Rickman 2015). Clearly, more work on the formation and (collisional) evolution of planetesimals needs to be done.

Fig. 1 Example of the latest collision model under development at TU Braunschweig for SiO₂ monomer particles with 0.75 µm radius, a minimum mass solar nebula model at 1 AU in the midplane, and a turbulence strength of α = 10⁻³. Collisional outcomes are marked in colour and labelled. Dashed contours mark the mean collision velocities in units of m s⁻¹. Image credit: Kothe and Blum (in prep.).

[2] Windmark et al. (2012b), [3] Windmark et al. (2012a), [4] Garaud et al. (2013), [5] Kataoka et al. (2013), [6] Weidling et al. (2009), [7] Zsom et al. (2010), [8] Kothe et al. (2010), [9] Skorov and Blum (2012), [10] Blum et al. (2014), [11] Blum et al. (2006), [12] Krivov et al.
(2018), [13] Gundlach et al. (2011), [14] Gundlach and Blum (2012)
* For planetesimals with R ≳ 10-50 km

Table 1 Comparison between the three formation scenarios of planetesimals described in Sect. 3: gravitational collapse (Sect. 3.1), mass transfer (Sect. 3.2), and icy agglomerates (Sect. 3.3).

Acknowledgements I thank the Deutsche Forschungsgemeinschaft and the Deutsches Zentrum für Luft- und Raumfahrt for continuous support. I thank Stefan Kothe for providing me with Fig. 1. I also thank Stu Weidenschilling for his constructive suggestions during the review process. Finally, I thank ISSI for inviting me to a wonderful and fruitful workshop.

References

J. Agarwal, M. Müller, W.T. Reach, M.V. Sykes, H. Boehnhardt, E. Grün, The dust trail of Comet 67P/Churyumov-Gerasimenko between 2004 and 2006. Icarus 207, 992-1012 (2010). doi:10.1016/j.icarus.2010.01.003

K. Altwegg, H. Balsiger, A. Bar-Nun, J.J. Berthelier, A. Bieler, P. Bochsler, C. Briois, U. Calmonte, M. Combi, J. De Keyser, P. Eberhardt, B. Fiethe, S. Fuselier, S. Gasc, T.I. Gombosi, K.C. Hansen, M. Hässig, A. Jäckel, E. Kopp, A. Korth, L. LeRoy, U. Mall, B. Marty, O. Mousis, E. Neefs, T. Owen, H. Rème, M. Rubin, T. Sémon, C.-Y. Tzou, H. Waite, P. Wurz, 67P/Churyumov-Gerasimenko, a Jupiter family comet with a high D/H ratio. Science 347, 1261952 (2015). doi:10.1126/science.1261952

E. Beitz, C. Güttler, J. Blum, T. Meisner, J. Teiser, G. Wurm, Low-velocity Collisions of Centimeter-sized Dust Aggregates. Astrophys. J. 736, 34 (2011). doi:10.1088/0004-637X/736/1/34

A. Bieler, K. Altwegg, H. Balsiger, A. Bar-Nun, J.-J. Berthelier, P. Bochsler, C. Briois, U. Calmonte, M. Combi, J. de Keyser, E.F. van Dishoeck, B. Fiethe, S.A. Fuselier, S. Gasc, T.I. Gombosi, K.C. Hansen, M. Hässig, A. Jäckel, E. Kopp, A. Korth, L. Le Roy, U. Mall, R. Maggiolo, B. Marty, O. Mousis, T. Owen, H. Rème, M. Rubin, T. Sémon, C.-Y. Tzou, J.H. Waite, C. Walsh, P. Wurz, Abundant molecular oxygen in the coma of comet 67P/Churyumov-Gerasimenko. Nature 526, 678-681 (2015). doi:10.1038/nature15707

T. Birnstiel, M. Fang, A. Johansen, Dust Evolution and the Formation of Planetesimals. Space Sci. Rev. 205, 41-75 (2016). doi:10.1007/s11214-016-0256-1

T. Birnstiel, L. Ricci, F. Trotta, C.P. Dullemond, A. Natta, L. Testi, C. Dominik, T. Henning, C.W. Ormel, A. Zsom, Testing the theory of grain growth and fragmentation by millimeter observations of protoplanetary disks. Astron. Astrophys. 516, 14 (2010). doi:10.1051/0004-6361/201014893

J. Blum, Dust agglomeration. Advances in Physics 55, 881-947 (2006). doi:10.1080/00018730601095039

J. Blum, M. Münch, Experimental investigations on aggregate-aggregate collisions in the early solar nebula. Icarus 106, 151 (1993). doi:10.1006/icar.1993.1163

J. Blum, G. Wurm, Experiments on Sticking, Restructuring, and Fragmentation of Preplanetary Dust Aggregates. Icarus 143, 138-146 (2000). doi:10.1006/icar.1999.6234

J. Blum, G. Wurm, The Growth Mechanisms of Macroscopic Bodies in Protoplanetary Disks. ARA&A 46, 21-56 (2008). doi:10.1146/annurev.astro.46.060407.145152

J. Blum, G. Wurm, T. Poppe, L.-O. Heim, Aspects of Laboratory Dust Aggregation with Relevance to the Formation of Planetesimals. Earth Moon and Planets 80, 285-309 (1998). doi:10.1023/A:1006386417473

J. Blum, G. Wurm, S. Kempf, T. Poppe, H. Klahr, T. Kozasa, M. Rott, T. Henning, J. Dorschner, R. Schräpler, H.U. Keller, W.J. Markiewicz, I. Mann, B.A. Gustafson, F. Giovane, D. Neuhaus, H. Fechtig, E. Grün, B. Feuerbacher, H. Kochan, L. Ratke, A. El Goresy, G. Morfill, S.J. Weidenschilling, G. Schwehm, K. Metzler, W.-H. Ip, Growth and Form of Planetary Seedlings: Results from a Microgravity Aggregation Experiment. Physical Review Letters 85, 2426-2429 (2000). doi:10.1103/PhysRevLett.85.2426

J. Blum, R. Schräpler, B.J.R. Davidsson, J.M. Trigo-Rodríguez, The Physics of Protoplanetesimal Dust Agglomerates. I. Mechanical Properties and Relations to Primitive Bodies in the Solar System. Astrophys. J. 652, 1768-1781 (2006). doi:10.1086/508017

J. Blum, B. Gundlach, S. Mühle, J.M. Trigo-Rodriguez, Comets formed in solar-nebula instabilities! - An experimental and modeling attempt to relate the activity of comets to their formation process. Icarus 235, 156-169 (2014). doi:10.1016/j.icarus.2014.03.016

J. Blum, B. Gundlach, M. Krause, M. Fulle, A. Johansen, J. Agarwal, I. von Borstel, X. Shi, X. Hu, M.S. Bentley, F. Capaccioni, L. Colangeli, V. Della Corte, N. Fougere, S.F. Green, S. Ivanovski, T. Mannel, S. Merouane, A. Migliorini, A. Rotundi, R. Schmied, C. Snodgrass, Evidence for the formation of comet 67P/Churyumov-Gerasimenko through gravitational collapse of a bound clump of pebbles. MNRAS 469, 755-773 (2017). doi:10.1093/mnras/stx2741

R.A. Booth, F. Meru, M.H. Lee, C.J. Clarke, Breakthrough revisited: investigating the requirements for growth of dust beyond the bouncing barrier. MNRAS 475, 167-180 (2018). doi:10.1093/mnras/stx3084

J. Brisset, D. Heißelmann, S. Kothe, R. Weidling, J. Blum, Submillimetre-sized dust aggregate collision and growth properties. Experimental study of a multi-particle system on a suborbital rocket. Astron. Astrophys. 593, 3 (2016). doi:10.1051/0004-6361/201527288

J. Brisset, D. Heißelmann, S. Kothe, R. Weidling, J. Blum, Low-velocity collision behaviour of clusters composed of sub-millimetre sized dust aggregates. Astron. Astrophys. 603, 66 (2017). doi:10.1051/0004-6361/201630345

M. Bukhari Syed, J. Blum, K. Wahlberg Jansson, A. Johansen, The Role of Pebble Fragmentation in Planetesimal Formation. I. Experimental Study. Astrophys. J. 834, 145 (2017). doi:10.3847/1538-4357/834/2/145

P. D'Alessio, N. Calvet, L. Hartmann, R. Franco-Hernández, H. Servín, Effects of Dust Growth and Settling in T Tauri Disks. Astrophys. J. 638, 314-335 (2006). doi:10.1086/498861

J. Deckers, J. Teiser, Colliding Decimeter Dust. Astrophys. J. 769, 151 (2013). doi:10.1088/0004-637X/769/2/151

J. Deckers, J. Teiser, Macroscopic Dust in Protoplanetary Disks - from Growth to Destruction. Astrophys. J. 796, 99 (2014). doi:10.1088/0004-637X/796/2/99

C. Dominik, A.G.G.M. Tielens, The Physics of Dust Coagulation and the Structure of Dust Aggregates in Space. Astrophys. J. 480, 647-673 (1997). doi:10.1086/303996

C. Dominik, D. Paszun, H. Borel, The structure of dust aggregates in hierarchical coagulation. ArXiv e-prints (2016)

G.J. Flynn, S. Wirick, L.P. Keller, Organic grain coatings in primitive interplanetary dust particles: Implications for grain sticking in the Solar Nebula. Earth, Planets, and Space 65, 1159-1166 (2013). doi:10.5047/eps.2013.05.007

M. Fulle, J. Blum, Fractal dust constrains the collisional history of comets. MNRAS 469, 39-44 (2017). doi:10.1093/mnras/stx971

M. Fulle, L. Colangeli, J. Agarwal, A. Aronica, V. Della Corte, F. Esposito, E. Grün, M. Ishiguro, R. Ligustri, J.J. Lopez Moreno, E. Mazzotta Epifani, G. Milani, F. Moreno, P. Palumbo, J. Rodríguez Gómez, A. Rotundi, Comet 67P/Churyumov-Gerasimenko: the GIADA dust environment model of the Rosetta mission target. Astron. Astrophys. 522, 63 (2010). doi:10.1051/0004-6361/201014928

M. Fulle, V. Della Corte, A. Rotundi, P. Weissman, A. Juhasz, K. Szego, R. Sordini, M. Ferrari, S. Ivanovski, F. Lucarelli, M. Accolla, S. Merouane, V. Zakharov, E. Mazzotta Epifani, J.J. López-Moreno, J. Rodríguez, L. Colangeli, P. Palumbo, E. Grün, M. Hilchenbach, E. Bussoletti, F. Esposito, S.F. Green, P.L. Lamy, J.A.M. McDonnell, V. Mennella, A. Molina, R. Morales, F. Moreno, J.L. Ortiz, E. Palomba, R. Rodrigo, J.C. Zarnecki, M. Cosi, F. Giovane, B. Gustafson, M.L. Herranz, J.M. Jerónimo, M.R. Leese, A.C. López-Jiménez, N. Altobelli, Density and Charge of Pristine Fluffy Particles from Comet 67P/Churyumov-Gerasimenko. Astrophys. J. Lett. 802, 12 (2015). doi:10.1088/2041-8205/802/1/L12

M. Fulle, V. Della Corte, A. Rotundi, F.J.M. Rietmeijer, S.F. Green, P. Weissman, M. Accolla, L. Colangeli, M. Ferrari, S. Ivanovski, J.J. Lopez-Moreno, E.M. Epifani, R. Morales, J.L. Ortiz, E. Palomba, P. Palumbo, J. Rodriguez, R. Sordini, V. Zakharov, Comet 67P/Churyumov-Gerasimenko preserved the pebbles that formed planetesimals. MNRAS 462, 132-137 (2016a). doi:10.1093/mnras/stw2299

M. Fulle, N. Altobelli, B. Buratti, M. Choukroun, M. Fulchignoni, E. Grün, M.G.G.T. Taylor, P. Weissman, Unexpected and significant findings in comet 67P/Churyumov-Gerasimenko: an interdisciplinary view. MNRAS 462, 2-8 (2016b). doi:10.1093/mnras/stw1663

P. Garaud, F. Meru, M. Galvagni, C. Olczak, From Dust to Planetesimals: An Improved Model for Collisional Growth in Protoplanetary Disks. Astrophys. J. 764, 146 (2013). doi:10.1088/0004-637X/764/2/146

S. Gärtner, B. Gundlach, T.F. Headen, J. Ratte, J. Oesert, S.N. Gorb, T.G.A. Youngs, D.T. Bowron, J. Blum, H.J. Fraser, Micrometer-sized Water Ice Particles for Planetary Science Experiments: Influence of Surface Structure on Collisional Properties. Astrophys. J. 848, 96 (2017). doi:10.3847/1538-4357/aa8c7f

B. Gundlach, J. Blum, Outgassing of icy bodies in the Solar System - II: Heat transport in dry, porous surface dust layers. Icarus 219, 618-629 (2012). doi:10.1016/j.icarus.2012.03.013

B. Gundlach, J. Blum, The Stickiness of Micrometer-sized Water-ice Particles. Astrophys. J. 798, 34 (2015). doi:10.1088/0004-637X/798/1/34

B. Gundlach, J. Blum, Why are Jupiter-family comets active and asteroids in cometary-like orbits inactive? How hydrostatic compression leads to inactivity. Astron. Astrophys. 589, 111 (2016). doi:10.1051/0004-6361/201527260

B. Gundlach, Y.V. Skorov, J. Blum, Outgassing of icy bodies in the Solar System - I. The sublimation of hexagonal water ice through dust layers. Icarus 213, 710-719 (2011). doi:10.1016/j.icarus.2011.03.022

B. Gundlach, J. Blum, H.U. Keller, Y.V. Skorov, What drives the dust activity of comet 67P/Churyumov-Gerasimenko? Astron. Astrophys. 583, 12 (2015). doi:10.1051/0004-6361/201525828

C. Güttler, J. Blum, A. Zsom, C.W. Ormel, C.P. Dullemond, The outcome of protoplanetary dust growth: pebbles, boulders, or planetesimals? I. Mapping the zoo of laboratory collision experiments. Astron. Astrophys. 513, 56 (2010). doi:10.1051/0004-6361/200912852

D. Heißelmann, J. Blum, H.J. Fraser, Experimental Studies on the Aggregation Properties of Ice and Dust in Planet-Forming Regions, in Proc. 58th Int. Astronautical Congress 2007 (Paris: Int. Astronautical Federation), IAC-07-A2.1.02 (2007)

A. Johansen, J.S. Oishi, M.-M. Mac Low, H. Klahr, T. Henning, A. Youdin, Rapid planetesimal formation in turbulent circumstellar disks. Nature 448, 1022-1025 (2007). doi:10.1038/nature06086

A. Johansen, J. Blum, H. Tanaka, C. Ormel, M. Bizzarro, H. Rickman, The Multifaceted Planetesimal Formation Process. Protostars and Planets VI, 547-570 (2014)

A. Kataoka, H. Tanaka, S. Okuzumi, K. Wada, Fluffy dust forms icy planetesimals by static compression. Astron. Astrophys. 557, 4 (2013). doi:10.1051/0004-6361/201322151

S. Kempf, S. Pfalzner, T.K. Henning, N-Particle-Simulations of Dust Growth. I. Growth Driven by Brownian Motion. Icarus 141, 388-398 (1999). doi:10.1006/icar.1999.6171

W. Kofman, A. Herique, Y. Barbin, J.-P. Barriot, V. Ciarletti, S. Clifford, P. Edenhofer, C. Elachi, C. Eyraud, J.-P. Goutail, E. Heggy, L. Jorda, J. Lasue, A.-C. Levasseur-Regourd, E. Nielsen, P. Pasquero, F. Preusker, P. Puget, D. Plettemeier, Y. Rogez, H. Sierks, C. Statz, H. Svedhem, I. Williams, S. Zine, J. Van Zyl, Properties of the 67P/Churyumov-Gerasimenko interior revealed by CONSERT radar. Science 349, aab0639 (2015). doi:10.1126/science.aab0639

S. Kothe, Mikrogravitationsexperimente zur Entwicklung eines empirischen Stoßmodells für protoplanetare Staubagglomerate. Ph.D. thesis, TU Braunschweig (2016)

S. Kothe, C. Güttler, J. Blum, The Physics of Protoplanetesimal Dust Agglomerates. V. Multiple Impacts of Dusty Agglomerates at Velocities Above the Fragmentation Threshold. Astrophys. J. 725, 1242-1251 (2010). doi:10.1088/0004-637X/725/1/1242

S. Kothe, J. Blum, R. Weidling, C. Güttler, Free collisions in a microgravity many-particle experiment. III. The collision behavior of sub-millimeter-sized dust aggregates. Icarus 225, 75-85 (2013). doi:10.1016/j.icarus.2013.02.034

A. Kouchi, T. Kudo, H. Nakano, M. Arakawa, N. Watanabe, S.-i. Sirono, M. Higa, N. Maeno, Rapid Growth of Asteroids Owing to Very Sticky Interstellar Organic Grains. Astrophys. J. Lett. 566, 121-124 (2002). doi:10.1086/339618

M. Krause, J. Blum, Growth and Form of Planetary Seedlings: Results from a Sounding Rocket Microgravity Aggregation Experiment. Physical Review Letters 93, 021103 (2004). doi:10.1103/PhysRevLett.93.021103

K.A. Kretke, H.F. Levison, Evidence for pebbles in comets. Icarus 262, 9-13 (2015). doi:10.1016/j.icarus.2015.08.017

S. Krijt, C.W. Ormel, C. Dominik, A.G.G.M. Tielens, Erosion and the limits to planetesimal growth. Astron. Astrophys. 574, 83 (2015). doi:10.1051/0004-6361/201425222

A.V. Krivov, A. Ide, T. Löhne, A. Johansen, J. Blum, Debris disc constraints on planetesimal formation. MNRAS 474, 2564-2575 (2018). doi:10.1093/mnras/stx2932

T. Kudo, A. Kouchi, M. Arakawa, H. Nakano, The role of sticky interstellar organic material in the formation of asteroids. Meteoritics and Planetary Science 37, 1975-1983 (2002). doi:10.1111/j.1945-5100.2002.tb01178.x

E. Kührt, H.U. Keller, The formation of cometary surface crusts. Icarus 109, 121-132 (1994). doi:10.1006/icar.1994.1080

E.K. Kührt, H.U. Keller, On the Importance of Dust in Cometary Nuclei. Earth Moon and Planets 72, 79-89 (1996). doi:10.1007/BF00117506

A. Landeck, Stöße zwischen hochporösen Staubagglomeraten im Laborfallturm. Bachelor Thesis, TU Braunschweig (2016)

D. Langkowski, J. Teiser, J. Blum, The Physics of Protoplanetesimal Dust Agglomerates. II. Low-Velocity Collision Properties. Astrophys. J. 675, 764-776 (2008). doi:10.1086/525841

Y. Liu, T. Henning, C. Carrasco-González, C.J. Chandler, H. Linz, T. Birnstiel, R. van Boekel, L.M. Pérez, M. Flock, L. Testi, L.F. Rodríguez, R. Galván-Madrid, The properties of the inner disk around HL Tau: Multi-wavelength modeling of the dust emission. Astron. Astrophys. 607, 74 (2017). doi:10.1051/0004-6361/201629786

S. Lorek, P. Lacerda, J. Blum, Comet formation in collapsing pebble clouds. Formation of dust/ice-mixed pebbles in the solar nebula. Submitted to Astron. Astrophys. (2017)

T. Mannel, M.S. Bentley, R. Schmied, H. Jeszenszky, A.C. Levasseur-Regourd, J. Romstedt, K. Torkar, Fractal cometary dust - a window into the early Solar system. MNRAS 462, 304-311 (2016). doi:10.1093/mnras/stw2898
doi:10.1093/mnras/stw2898 Preplanetary scavengers: Growing tall in dust collisions. T Meisner, G Wurm, J Teiser, M Schywek, 10.1051/0004-6361/201322083Astron. Astrophys. 559123T. Meisner, G. Wurm, J. Teiser, M. Schywek, Preplanetary scavengers: Growing tall in dust collisions. Astron. Astrophys. 559, 123 (2013). doi:10.1051/0004-6361/201322083 Comets as collisional fragments of a primordial planetesimal disk. A Morbidelli, H Rickman, 10.1051/0004-6361/201526116Astron. Astrophys. 58343A. Morbidelli, H. Rickman, Comets as collisional fragments of a primordial planetesimal disk. Astron. Astrophys. 583, 43 (2015). doi:10.1051/0004-6361/201526116 Asteroids were born big. A Morbidelli, W F Bottke, D Nesvorný, H F Levison, 10.1016/j.icarus.2009.07.011Icarus. 204A. Morbidelli, W.F. Bottke, D. Nesvorný, H.F. Levison, Asteroids were born big. Icarus 204, 558-573 (2009). doi:10.1016/j.icarus.2009.07.011 Collisions of CO 2 Ice Grains in Planet Formation. G Musiolik, J Teiser, T Jankowski, G Wurm, 10.3847/0004-637X/818/1/16Astrophys. J. 81816G. Musiolik, J. Teiser, T. Jankowski, G. Wurm, Collisions of CO 2 Ice Grains in Planet Formation. Astrophys. J. 818, 16 (2016a). doi:10.3847/0004-637X/818/1/16 Ice Grain Collisions in Comparison: CO2, H2O, and Their Mixtures. G Musiolik, J Teiser, T Jankowski, G Wurm, 10.3847/0004-637X/827/1/63Astrophys. J. 82763G. Musiolik, J. Teiser, T. Jankowski, G. Wurm, Ice Grain Collisions in Comparison: CO2, H2O, and Their Mixtures. Astrophys. J. 827, 63 (2016b). doi:10.3847/0004- 637X/827/1/63 A Natta, L Testi, N Calvet, T Henning, R Waters, D Wilner, Dust in Protoplanetary Disks: Properties and Evolution. Protostars and Planets V. A. Natta, L. Testi, N. Calvet, T. Henning, R. Waters, D. Wilner, Dust in Protoplanetary Disks: Properties and Evolution. Protostars and Planets V, 767-781 (2007) Rapid Coagulation of Porous Dust Aggregates outside the Snow Line: A Pathway to Successful Icy Planetesimal Formation. 
S Okuzumi, H Tanaka, H Kobayashi, K Wada, 10.1088/0004-637X/752/2/106Astrophys. J. 752106S. Okuzumi, H. Tanaka, H. Kobayashi, K. Wada, Rapid Coagulation of Porous Dust Ag- gregates outside the Snow Line: A Pathway to Successful Icy Planetesimal Formation. Astrophys. J. 752, 106 (2012). doi:10.1088/0004-637X/752/2/106 Random loose packings of uniform spheres and the dilatancy onset. G Y Onoda, E G Liniger, 10.1103/PhysRevLett.64.2727Physical Review Letters. 64G.Y. Onoda, E.G. Liniger, Random loose packings of uniform spheres and the dilatancy onset. Physical Review Letters 64, 2727-2730 (1990). doi:10.1103/PhysRevLett.64.2727 Impacts into weak dust targets under microgravity and the formation of planetesimals. G B Paraskov, G Wurm, O Krauss, 10.1016/j.icarus.2007.05.008Icarus. 191G.B. Paraskov, G. Wurm, O. Krauss, Impacts into weak dust targets under mi- crogravity and the formation of planetesimals. Icarus 191, 779-789 (2007). doi:10.1016/j.icarus.2007.05.008 The influence of grain rotation on the structure of dust aggregates. D Paszun, C Dominik, 10.1016/j.icarus.2005.12.018Icarus. 182D. Paszun, C. Dominik, The influence of grain rotation on the structure of dust aggregates. Icarus 182, 274-280 (2006). doi:10.1016/j.icarus.2005.12.018 Collisional evolution of dust aggregates. From compaction to catastrophic destruction. D Paszun, C Dominik, 10.1051/0004-6361/200810682Astron. Astrophys. 507D. Paszun, C. Dominik, Collisional evolution of dust aggregates. From compaction to catastrophic destruction. Astron. Astrophys. 507, 1023-1040 (2009). doi:10.1051/0004- 6361/200810682 A homogeneous nucleus for comet 67P/Churyumov-Gerasimenko from its gravity field. M Pätzold, T Andert, M Hahn, S W Asmar, J.-P Barriot, M K Bird, B Häusler, K Peter, S Tellmann, E Grün, P R Weissman, H Sierks, L Jorda, R Gaskell, F Preusker, F Scholten, 10.1038/nature16535Nature. 530M. Pätzold, T. Andert, M. Hahn, S.W. Asmar, J.-P. Barriot, M.K. Bird, B. Häusler, K. Peter, S. Tellmann, E. 
Grün, P.R. Weissman, H. Sierks, L. Jorda, R. Gaskell, F. Preusker, F. Scholten, A homogeneous nucleus for comet 67P/Churyumov-Gerasimenko from its gravity field. Nature 530, 63-65 (2016). doi:10.1038/nature16535 Constraints on the Radial Variation of Grain Growth in the AS 209 Circumstellar Disk. L M Pérez, J M Carpenter, C J Chandler, A Isella, S M Andrews, L Ricci, N Calvet, S A Corder, A T Deller, C P Dullemond, J S Greaves, R J Harris, T Henning, W Kwon, J Lazio, H Linz, L G Mundy, A I Sargent, S Storm, L Testi, D J Wilner, 10.1088/2041-8205/760/1/L17Astrophys. J. Lett. 76017L.M. Pérez, J.M. Carpenter, C.J. Chandler, A. Isella, S.M. Andrews, L. Ricci, N. Calvet, S.A. Corder, A.T. Deller, C.P. Dullemond, J.S. Greaves, R.J. Harris, T. Henning, W. Kwon, J. Lazio, H. Linz, L.G. Mundy, A.I. Sargent, S. Storm, L. Testi, D.J. Wilner, Constraints on the Radial Variation of Grain Growth in the AS 209 Circumstellar Disk. Astrophys. J. Lett. 760, 17 (2012). doi:10.1088/2041-8205/760/1/L17 Grain Growth in the Circumstellar Disks of the Young Stars CY Tau and DoAr 25. L M Pérez, C J Chandler, A Isella, J M Carpenter, S M Andrews, N Calvet, S A Corder, A T Deller, C P Dullemond, J S Greaves, R J Harris, T Henning, W Kwon, J Lazio, H Linz, L G Mundy, L Ricci, A I Sargent, S Storm, M Tazzari, L Testi, D J Wilner, 10.1088/0004-637X/813/1/41Astrophys. J. 81341L.M. Pérez, C.J. Chandler, A. Isella, J.M. Carpenter, S.M. Andrews, N. Calvet, S.A. Corder, A.T. Deller, C.P. Dullemond, J.S. Greaves, R.J. Harris, T. Henning, W. Kwon, J. Lazio, H. Linz, L.G. Mundy, L. Ricci, A.I. Sargent, S. Storm, M. Tazzari, L. Testi, D.J. Wilner, Grain Growth in the Circumstellar Disks of the Young Stars CY Tau and DoAr 25. Astrophys. J. 813, 41 (2015). doi:10.1088/0004-637X/813/1/41 Dust grain growth in ρ-Ophiuchi protoplanetary disks. L Ricci, L Testi, A Natta, K J Brooks, 10.1051/0004-6361/201015039Astron. Astrophys. 52166L. Ricci, L. Testi, A. Natta, K.J. 
Brooks, Dust grain growth in ρ-Ophiuchi protoplanetary disks. Astron. Astrophys. 521, 66 (2010). doi:10.1051/0004-6361/201015039 Initial mass function of planetesimals formed by the streaming instability. U Schäfer, C.-C Yang, A Johansen, 10.1051/0004-6361/201629561Astron. Astrophys. 59769U. Schäfer, C.-C. Yang, A. Johansen, Initial mass function of planetesimals formed by the streaming instability. Astron. Astrophys. 597, 69 (2017). doi:10.1051/0004- 6361/201629561 The Physics of Protoplanetesimal Dust Agglomerates. VI. Erosion of Large Aggregates as a Source of Micrometer-sized Particles. R Schräpler, J Blum, 10.1088/0004-637X/734/2/108Astrophys. J. 734108R. Schräpler, J. Blum, The Physics of Protoplanetesimal Dust Agglomerates. VI. Erosion of Large Aggregates as a Source of Micrometer-sized Particles. Astrophys. J. 734, 108 (2011). doi:10.1088/0004-637X/734/2/108 The Physics of Protoplanetesimal Dust Agglomerates. VII. The Low-velocity Collision Behavior of Large Dust Agglomerates. Astrophys. R Schräpler, J Blum, A Seizinger, W Kley, 10.1088/0004-637X/758/1/35J. 75835R. Schräpler, J. Blum, A. Seizinger, W. Kley, The Physics of Protoplanetesimal Dust Ag- glomerates. VII. The Low-velocity Collision Behavior of Large Dust Agglomerates. As- trophys. J. 758, 35 (2012). doi:10.1088/0004-637X/758/1/35 The physics of protoplanetesimal dust agglomerates. X. High-velocity collisions between small and large dust agglomerates as growth barrier. R Schräpler, J Blum, S Krijt, J.-H Raabe, Astrophys. J. in pressR. Schräpler, J. Blum, S. Krijt, J.-H. Raabe, The physics of protoplanetesimal dust agglom- erates. X. High-velocity collisions between small and large dust agglomerates as growth barrier. Astrophys. J., in press (2018) Erosion of dust aggregates. A Seizinger, S Krijt, W Kley, 10.1051/0004-6361/201322773Astron. Astrophys. 56045A. Seizinger, S. Krijt, W. Kley, Erosion of dust aggregates. Astron. Astrophys. 560, 45 (2013). 
doi:10.1051/0004-6361/201322773 The Spherically Symmetric Gravitational Collapse of a Clump of Solids in a Gas. K Shariff, J N Cuzzi, 10.1088/0004-637X/805/1/42Astrophys. J. 80542K. Shariff, J.N. Cuzzi, The Spherically Symmetric Gravitational Collapse of a Clump of Solids in a Gas. Astrophys. J. 805, 42 (2015). doi:10.1088/0004-637X/805/1/42 The Mass and Size Distribution of Planetesimals Formed by the Streaming Instability. I. The Role of Self-gravity. J B Simon, P J Armitage, R Li, A N Youdin, 10.3847/0004-637X/822/1/55Astrophys. J. 82255J.B. Simon, P.J. Armitage, R. Li, A.N. Youdin, The Mass and Size Distribution of Plan- etesimals Formed by the Streaming Instability. I. The Role of Self-gravity. Astrophys. J. 822, 55 (2016). doi:10.3847/0004-637X/822/1/55 Evidence for Universality in the Initial Planetesimal Mass Function. J B Simon, P J Armitage, A N Youdin, R Li, 10.3847/2041-8213/aa8c79Astrophys. J. Lett. 84712J.B. Simon, P.J. Armitage, A.N. Youdin, R. Li, Evidence for Universality in the Initial Planetesimal Mass Function. Astrophys. J. Lett. 847, 12 (2017). doi:10.3847/2041- 8213/aa8c79 Dust release and tensile strength of the non-volatile layer of cometary nuclei. Y Skorov, J Blum, 10.1016/j.icarus.2012.01.012Icarus. 221Y. Skorov, J. Blum, Dust release and tensile strength of the non-volatile layer of cometary nuclei. Icarus 221, 1-11 (2012). doi:10.1016/j.icarus.2012.01.012 Characteristics of the dust trail of 67P/Churyumov-Gerasimenko: an application of the IMEX model. R H Soja, M Sommer, J Herzog, J Agarwal, J Rodmann, R Srama, J Vaubaillon, P Strub, A Hornig, L Bausch, E Grün, 10.1051/0004-6361/201526184Astron. Astrophys. 58318R.H. Soja, M. Sommer, J. Herzog, J. Agarwal, J. Rodmann, R. Srama, J. Vaubail- lon, P. Strub, A. Hornig, L. Bausch, E. Grün, Characteristics of the dust trail of 67P/Churyumov-Gerasimenko: an application of the IMEX model. Astron. Astrophys. 583, 18 (2015). 
doi:10.1051/0004-6361/201526184 Multiwavelength analysis for interferometric (sub-)mm observations of protoplanetary disks. Radial constraints on the dust properties and the disk structure. M Tazzari, L Testi, B Ercolano, A Natta, A Isella, C J Chandler, L M Pérez, S Andrews, D J Wilner, L Ricci, T Henning, H Linz, W Kwon, S A Corder, C P Dullemond, J M Carpenter, A I Sargent, L Mundy, S Storm, N Calvet, J A Greaves, J Lazio, A T Deller, 10.1051/0004-6361/201527423Astron. Astrophys. 58853M. Tazzari, L. Testi, B. Ercolano, A. Natta, A. Isella, C.J. Chandler, L.M. Pérez, S. Andrews, D.J. Wilner, L. Ricci, T. Henning, H. Linz, W. Kwon, S.A. Corder, C.P. Dullemond, J.M. Carpenter, A.I. Sargent, L. Mundy, S. Storm, N. Calvet, J.A. Greaves, J. Lazio, A.T. Deller, Multiwavelength analysis for interferometric (sub-)mm observations of pro- toplanetary disks. Radial constraints on the dust properties and the disk structure. Astron. Astrophys. 588, 53 (2016). doi:10.1051/0004-6361/201527423 Decimetre dust aggregates in protoplanetary discs. J Teiser, G Wurm, 10.1051/0004-6361/200912027Astron. Astrophys. 505J. Teiser, G. Wurm, Decimetre dust aggregates in protoplanetary discs. Astron. Astrophys. 505, 351-359 (2009a). doi:10.1051/0004-6361/200912027 High-velocity dust collisions: forming planetesimals in a fragmentation cascade with final accretion. J Teiser, G Wurm ; X, J Teiser, M Küpper, G Wurm, 10.1111/j.1365-2966.2008.14289doi:10.1016/j.icarus.2011.07.036Icarus. 393MNRASJ. Teiser, G. Wurm, High-velocity dust collisions: forming planetesimals in a fragmenta- tion cascade with final accretion. MNRAS 393, 1584-1594 (2009b). doi:10.1111/j.1365- 2966.2008.14289.x J. Teiser, M. Küpper, G. Wurm, Impact angle influence in high velocity dust collisions during planetesimal formation. Icarus 215, 596-598 (2011). 
doi:10.1016/j.icarus.2011.07.036 L Testi, T Birnstiel, L Ricci, S Andrews, J Blum, J Carpenter, C Dominik, A Isella, A Natta, J P Williams, D J Wilner, Dust Evolution in Protoplanetary Disks. Protostars and Planets VI. L. Testi, T. Birnstiel, L. Ricci, S. Andrews, J. Blum, J. Carpenter, C. Dominik, A. Isella, A. Natta, J.P. Williams, D.J. Wilner, Dust Evolution in Protoplanetary Disks. Protostars and Planets VI, 339-361 (2014) Constraints on the radial distribution of the dust properties in the CQ Tauri protoplanetary disk. F Trotta, L Testi, A Natta, A Isella, L Ricci, 10.1051/0004-6361/201321896Astron. Astrophys. 55864F. Trotta, L. Testi, A. Natta, A. Isella, L. Ricci, Constraints on the radial distribution of the dust properties in the CQ Tauri protoplanetary disk. Astron. Astrophys. 558, 64 (2013). doi:10.1051/0004-6361/201321896 The building blocks of planets within the 'terrestrial' region of protoplanetary disks. R Van Boekel, M Min, C Leinert, L B F M Waters, A Richichi, O Chesneau, C Dominik, W Jaffe, A Dutrey, U Graser, T Henning, J Jong, R Köhler, A Koter, B Lopez, F Malbet, S Morel, F Paresce, G Perrin, T Preibisch, F Przygodda, M Schöller, M Wittkowski, 10.1038/nature03088Nature. 432R. van Boekel, M. Min, C. Leinert, L.B.F.M. Waters, A. Richichi, O. Chesneau, C. Dominik, W. Jaffe, A. Dutrey, U. Graser, T. Henning, J. de Jong, R. Köhler, A. de Koter, B. Lopez, F. Malbet, S. Morel, F. Paresce, G. Perrin, T. Preibisch, F. Przygodda, M. Schöller, M. Wittkowski, The building blocks of planets within the 'terrestrial' region of protoplanetary disks. Nature 432, 479-482 (2004). doi:10.1038/nature03088 The chemical history of molecules in circumstellar disks. I. Ices. R Visser, E F Van Dishoeck, S D Doty, C P Dullemond, 10.1051/0004-6361/200810846Astron. Astrophys. 495R. Visser, E.F. van Dishoeck, S.D. Doty, C.P. Dullemond, The chemical history of molecules in circumstellar disks. I. Ices. Astron. Astrophys. 495, 881-897 (2009). 
doi:10.1051/0004- 6361/200810846 Numerical Simulation of Dust Aggregate Collisions. II. Compression and Disruption of Three-Dimensional Aggregates in Head-on Collisions. K Wada, H Tanaka, T Suyama, H Kimura, T Yamamoto, 10.1086/529511Astrophys. J. 677K. Wada, H. Tanaka, T. Suyama, H. Kimura, T. Yamamoto, Numerical Simulation of Dust Aggregate Collisions. II. Compression and Disruption of Three-Dimensional Aggregates in Head-on Collisions. Astrophys. J. 677, 1296-1308 (2008). doi:10.1086/529511 Collisional Growth Conditions for Dust Aggregates. K Wada, H Tanaka, T Suyama, H Kimura, T Yamamoto, 10.1088/0004-637X/702/2/1490Astrophys. J. 702K. Wada, H. Tanaka, T. Suyama, H. Kimura, T. Yamamoto, Collisional Growth Con- ditions for Dust Aggregates. Astrophys. J. 702, 1490-1501 (2009). doi:10.1088/0004- 637X/702/2/1490 Formation of pebble-pile planetesimals. K Wahlberg Jansson, A Johansen, 10.1051/0004-6361/201424369Astron. Astrophys. 57047K. Wahlberg Jansson, A. Johansen, Formation of pebble-pile planetesimals. Astron. Astro- phys. 570, 47 (2014). doi:10.1051/0004-6361/201424369 The Role of Pebble Fragmentation in Planetesimal Formation. II. Numerical Simulations. K Wahlberg Jansson, A Johansen, M Syed, J Blum, 10.3847/1538-4357/835/1/109Astrophys. J. 835109K. Wahlberg Jansson, A. Johansen, M. Bukhari Syed, J. Blum, The Role of Pebble Frag- mentation in Planetesimal Formation. II. Numerical Simulations. Astrophys. J. 835, 109 (2017). doi:10.3847/1538-4357/835/1/109 Aerodynamics of solid bodies in the solar nebula. S J Weidenschilling, 10.1093/mnras/180.1.57MNRAS. 180S.J. Weidenschilling, Aerodynamics of solid bodies in the solar nebula. MNRAS 180, 57-70 (1977). doi:10.1093/mnras/180.1.57 Initial sizes of planetesimals and accretion of the asteroids. S J Weidenschilling, 10.1016/j.icarus.2011.05.024Icarus. 214S.J. Weidenschilling, Initial sizes of planetesimals and accretion of the asteroids. Icarus 214, 671-684 (2011). 
doi:10.1016/j.icarus.2011.05.024 Formation of planetesimals in the solar nebula. S J Weidenschilling, J N Cuzzi, Protostars and Planets III. E.H. Levy, J.I. LunineS.J. Weidenschilling, J.N. Cuzzi, Formation of planetesimals in the solar nebula, in Proto- stars and Planets III, ed. by E.H. Levy, J.I. Lunine, 1993, pp. 1031-1060 Free collisions in a microgravity many-particle experiment. IV. -Three-dimensional analysis of collision properties. R Weidling, J Blum, 10.1016/j.icarus.2014.12.010Icarus. 253R. Weidling, J. Blum, Free collisions in a microgravity many-particle experiment. IV. -Three-dimensional analysis of collision properties. Icarus 253, 31-39 (2015). doi:10.1016/j.icarus.2014.12.010 Free collisions in a microgravity many-particle experiment. I. Dust aggregate sticking at low velocities. R Weidling, C Güttler, J Blum, 10.1016/j.icarus.2011.10.002Icarus. 218R. Weidling, C. Güttler, J. Blum, Free collisions in a microgravity many-particle ex- periment. I. Dust aggregate sticking at low velocities. Icarus 218, 688-700 (2012). doi:10.1016/j.icarus.2011.10.002 The Physics of Protoplanetesimal Dust Agglomerates. III. Compaction in Multiple Collisions. R Weidling, C Güttler, J Blum, F Brauer, 10.1088/0004-637X/696/2/2036Astrophys. J. 696R. Weidling, C. Güttler, J. Blum, F. Brauer, The Physics of Protoplanetesimal Dust Ag- glomerates. III. Compaction in Multiple Collisions. Astrophys. J. 696, 2036-2043 (2009). doi:10.1088/0004-637X/696/2/2036 The Physics of Protoplanetesimal Dust Agglomerates. VIII. Microgravity Collisions between Porous SiO 2 Aggregates and Loosely Bound Agglomerates. A D Whizin, J Blum, J E Colwell, 10.3847/1538-4357/836/1/94Astrophys. J. 83694A.D. Whizin, J. Blum, J.E. Colwell, The Physics of Protoplanetesimal Dust Agglomerates. VIII. Microgravity Collisions between Porous SiO 2 Aggregates and Loosely Bound Agglomerates. Astrophys. J. 836, 94 (2017). 
doi:10.3847/1538-4357/836/1/94 Breaking through: The effects of a velocity distribution on barriers to dust growth. F Windmark, T Birnstiel, C W Ormel, C P Dullemond, 10.1051/0004-6361/201220004Astron. Astrophys. 54416F. Windmark, T. Birnstiel, C.W. Ormel, C.P. Dullemond, Breaking through: The effects of a velocity distribution on barriers to dust growth. Astron. Astrophys. 544, 16 (2012a). doi:10.1051/0004-6361/201220004 Planetesimal formation by sweep-up: how the bouncing barrier can be beneficial to growth. F Windmark, T Birnstiel, C Güttler, J Blum, C P Dullemond, T Henning, 10.1051/0004-6361/201118475Astron. Astrophys. 54073F. Windmark, T. Birnstiel, C. Güttler, J. Blum, C.P. Dullemond, T. Henning, Planetesimal formation by sweep-up: how the bouncing barrier can be beneficial to growth. Astron. Astrophys. 540, 73 (2012b). doi:10.1051/0004-6361/201118475 Experiments on Preplanetary Dust Aggregation. G Wurm, J Blum, 10.1006/icar.1998.5891Icarus. 132G. Wurm, J. Blum, Experiments on Preplanetary Dust Aggregation. Icarus 132, 125-136 (1998). doi:10.1006/icar.1998.5891 Ejection of dust by elastic waves in collisions between millimeter-and centimeter-sized dust aggregates at 16.5 to 37.5 m/s impact velocities. G Wurm, G Paraskov, O Krauss, 10.1103/PhysRevE.71.021304Phys. Rev. E. 71221304G. Wurm, G. Paraskov, O. Krauss, Ejection of dust by elastic waves in collisions between millimeter-and centimeter-sized dust aggregates at 16.5 to 37.5 m/s impact velocities. Phys. Rev. E 71(2), 021304 (2005a). doi:10.1103/PhysRevE.71.021304 Growth of planetesimals by impacts at ∼25 m/s. G Wurm, G Paraskov, O Krauss, 10.1016/j.icarus.2005.04.002Icarus. 178G. Wurm, G. Paraskov, O. Krauss, Growth of planetesimals by impacts at ∼25 m/s. Icarus 178, 253-263 (2005b). doi:10.1016/j.icarus.2005.04.002 Concentrating small particles in protoplanetary disks through the streaming instability. C.-C Yang, A Johansen, D Carrera, 10.1051/0004-6361/201630106Astron. Astrophys. 60680C.-C. Yang, A. 
Johansen, D. Carrera, Concentrating small particles in protoplanetary disks through the streaming instability. Astron. Astrophys. 606, 80 (2017). doi:10.1051/0004- 6361/201630106 Streaming Instabilities in Protoplanetary Disks. A N Youdin, J Goodman, 10.1086/426895Astrophys. J. 620A.N. Youdin, J. Goodman, Streaming Instabilities in Protoplanetary Disks. Astrophys. J. 620, 459-469 (2005). doi:10.1086/426895 The outcome of protoplanetary dust growth: pebbles, boulders, or planetesimals? II. Introducing the bouncing barrier. A Zsom, C W Ormel, C Güttler, J Blum, C P Dullemond, 10.1051/0004-6361/200912976Astron. Astrophys. 51357A. Zsom, C.W. Ormel, C. Güttler, J. Blum, C.P. Dullemond, The outcome of protoplanetary dust growth: pebbles, boulders, or planetesimals? II. Introducing the bouncing barrier. Astron. Astrophys. 513, 57 (2010). doi:10.1051/0004-6361/200912976
The Mittag-Leffler function in the thinning theory for renewal processes

Rudolf Gorenflo
First Mathematical Institute, Free University of Berlin, Arnimallee 3, D-14195 Berlin, Germany

Francesco Mainardi
Department of Physics and Astronomy, University of Bologna, Via Irnerio 46, I-40126 Bologna, Italy

Published in Theory of Probability and Mathematical Statistics, No. 98 (2018). doi:10.1090/tpms/1065. Preprint: arXiv:1808.06563, Aug 2018 (https://arxiv.org/pdf/1808.06563v1.pdf).

This note is devoted to the memory of the late Professor Rudolf Gorenflo, who passed away on 20 October 2017 at the age of 87. The work of F.M. has been carried out in the framework of the activities of the National Group of Mathematical Physics (GNFM, INdAM).

Keywords: Mittag-Leffler functions; thinning (rarefaction); renewal processes; queuing theory; Poisson process.
MSC 2000: 26A33, 33E12, 44A10, 60K05, 60K25.

Abstract. The main purpose of this note is to point out the relevance of the Mittag-Leffler probability distribution in the so-called thinning theory for a renewal process with a queue of power-law type. This theory, formerly considered by Gnedenko and Kovalenko in 1968 without explicit reference to the Mittag-Leffler function, was used by the authors in the theory of continuous-time random walks, and consequently of fractional diffusion, in a plenary lecture by the late Prof. Gorenflo at a Seminar on Anomalous Transport held in Bad Honnef in July 2006, published in a 2008 book. After recalling the basic theory of renewal processes, including the standard and the fractional Poisson processes, we here revise the original approach by Gnedenko and Kovalenko for the convenience of experts in stochastic processes who may not be aware of the relevance of the Mittag-Leffler function.
1 Introduction

In this paper we outline the relevance of the functions of the Mittag-Leffler type in renewal processes and, in particular, in the thinning theory for the long-time behaviour of a generic power-law waiting time distribution. In Section 2 we first recall the definition of a generic renewal process together with the related probability distribution functions. In Section 3 we discuss the most celebrated renewal process, known as the Poisson process, defined by an exponential probability density function. Its natural fractional generalization is then discussed in Section 4 by introducing the so-called renewal process of the Mittag-Leffler type, commonly known as the fractional Poisson process. In Section 5 we consider the thinning theory for a renewal process with a queue of power-law type, following the 1968 presentation of Gnedenko and Kovalenko and pointing out the key role of the Mittag-Leffler function. Finally, conclusions are drawn in Section 6.

2 Essentials of renewal theory

The concept of a renewal process has been developed as a stochastic model for describing the class of counting processes for which the times between successive events are independent identically distributed (iid) non-negative random variables, obeying a given probability law. These times are referred to as waiting times or inter-arrival times. For more details see e.g. the classical treatises by Khintchine [16], Cox [6], Gnedenko & Kovalenko [11], Feller [9], and the more recent books by Ross [24], Beichelt [2], and Mitov and Omey [20], just to cite the treatises that have most attracted our attention.

For a renewal process having waiting times T_1, T_2, \dots, let

    t_0 = 0 , \quad t_k = \sum_{j=1}^{k} T_j , \quad k \ge 1 .    (2.1)

That is, t_1 = T_1 is the time of the first renewal, t_2 = T_1 + T_2 is the time of the second renewal, and so on; in general, t_k denotes the k-th renewal. The process is specified if we know the probability law for the waiting times.
In this respect we introduce the probability density function (pdf) \phi(t) and the (cumulative) distribution function \Phi(t), so defined:

    \phi(t) := \frac{d}{dt}\,\Phi(t) , \qquad \Phi(t) := P(T \le t) = \int_0^t \phi(t')\, dt' .    (2.2)

When the non-negative random variable represents the lifetime of technical systems, it is common to refer to \Phi(t) as the failure probability and to

    \Psi(t) := P(T > t) = \int_t^{\infty} \phi(t')\, dt' = 1 - \Phi(t) ,    (2.3)

as the survival probability, because \Phi(t) and \Psi(t) are the respective probabilities that the system does or does not fail in (0, T]. A relevant quantity is the counting function N(t), which indeed defines the renewal process as

    N(t) := \max \{ k \,|\, t_k \le t , \; k = 0, 1, 2, \dots \} ,    (2.4)

and which represents the effective number of events before or at the instant t. In particular, we have \Psi(t) = P(N(t) = 0). Continuing in the general theory, we set F_1(t) = \Phi(t), f_1(t) = \phi(t), and in general

    F_k(t) := P(t_k = T_1 + \dots + T_k \le t) , \qquad f_k(t) = \frac{d}{dt}\,F_k(t) , \quad k \ge 1 ,    (2.5)

so that F_k(t) represents the probability that the sum of the first k waiting times is less than or equal to t, and f_k(t) is its density. Then, for any fixed k \ge 1, the normalization condition for F_k(t) is fulfilled because

    \lim_{t \to \infty} F_k(t) = P(t_k = T_1 + \dots + T_k < \infty) = 1 .    (2.6)

In fact, the sum of k random variables, each of which is finite with probability 1, is itself finite with probability 1. By setting for consistency F_0(t) \equiv 1 and f_0(t) = \delta(t), the Dirac delta function in the sense of Gel'fand and Shilov [10] (which admits in R^+ the formal representation \delta(t) := t^{-1}/\Gamma(0), t \ge 0), we also note that for k \ge 0 we have

    P(N(t) = k) := P(t_k \le t , \; t_{k+1} > t) = \int_0^t f_k(t')\, \Psi(t - t')\, dt' .    (2.7)

We now find it convenient to introduce the simplified notation * for the convolution between two causal, well-behaved (generalized) functions f(t) and g(t):

    \int_0^t f(t')\, g(t - t')\, dt' = (f * g)(t) = (g * f)(t) = \int_0^t f(t - t')\, g(t')\, dt' .
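As a minimal numerical sketch (ours, not part of the original note), the counting function N(t) of Eq. (2.4) can be realized by accumulating the waiting times until the running sum t_k exceeds t. The function name `renewal_count` and all parameter choices below are illustrative assumptions.

```python
import random

def renewal_count(waiting_times, t):
    """Counting function N(t) = max{k : t_k = T_1 + ... + T_k <= t}, Eq. (2.4)."""
    total, count = 0.0, 0
    for T in waiting_times:
        total += T
        if total > t:
            break
        count += 1
    return count

# Deterministic check: unit waiting times give N(t) = floor(t).
assert renewal_count([1.0] * 10, 5.5) == 5

# Exponential waiting times (rate lam) realize the Poisson process of Section 3,
# for which E[N(t)] = lam * t; the simulated count should be close to 2000.
random.seed(1)
lam, t = 2.0, 1000.0
n = renewal_count((random.expovariate(lam) for _ in range(10**6)), t)
assert abs(n - lam * t) < 200  # within a few standard deviations (sigma ~ 45)
```

For exponential waiting times this realization is exactly the memoryless Poisson process treated in the next section; any other waiting-time law plugged into the same loop gives a non-Poissonian renewal process.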
Being f_k(t) the pdf of the sum of the k iid random variables T_1, \dots, T_k with pdf \phi(t), we easily recognize that f_k(t) turns out to be the k-fold convolution of \phi(t) with itself,

    f_k(t) = \left( \phi^{*k} \right)(t) ,    (2.8)

so that Eq. (2.7) simply reads

    P(N(t) = k) = \left( \phi^{*k} * \Psi \right)(t) .    (2.9)

Because of the presence of convolutions, a renewal process is well suited to the Laplace transform method. Throughout this paper we will denote by \widetilde{f}(s) the Laplace transform of a sufficiently well-behaved (generalized) function f(t), according to

    \mathcal{L}\{ f(t) ; s \} = \widetilde{f}(s) = \int_0^{+\infty} e^{-st}\, f(t)\, dt , \quad s > s_0 ,

and for \delta(t) we will consistently have \widetilde{\delta}(s) \equiv 1. Note that for our purposes we agree to take s real. We recognize that (2.9) reads in the Laplace domain

    \mathcal{L}\{ P(N(t) = k) ; s \} = \left[ \widetilde{\phi}(s) \right]^k \widetilde{\Psi}(s) ,    (2.10)

where, using (2.3),

    \widetilde{\Psi}(s) = \frac{1 - \widetilde{\phi}(s)}{s} .    (2.11)

3 The Poisson process as a renewal process

The most celebrated renewal process is the Poisson process, characterized by a waiting time pdf of exponential type,

    \phi(t) = \lambda\, e^{-\lambda t} , \quad \lambda > 0 , \quad t \ge 0 .    (3.1)

The process has no memory, being a Lévy process. The moments of the waiting times of order 1, 2, \dots, n turn out to be

    \langle T \rangle = \frac{1}{\lambda} , \quad \langle T^2 \rangle = \frac{2}{\lambda^2} , \quad \dots , \quad \langle T^n \rangle = \frac{n!}{\lambda^n} , \quad \dots ,    (3.2)

and the survival probability is

    \Psi(t) := P(T > t) = e^{-\lambda t} , \quad t \ge 0 .    (3.3)

We know that the probability that k events occur in an interval of length t is

    P(N(t) = k) = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t} , \quad t \ge 0 , \quad k = 0, 1, 2, \dots .    (3.4)

The probability distribution related to the sum of k iid exponential random variables is known to be the so-called Erlang distribution (of order k). The corresponding density (the Erlang pdf) is thus

    f_k(t) = \lambda\, \frac{(\lambda t)^{k-1}}{(k-1)!}\, e^{-\lambda t} , \quad t \ge 0 , \quad k = 1, 2, \dots ,    (3.5)

so that the Erlang distribution function of order k turns out to be

    F_k(t) = \int_0^t f_k(t')\, dt' = 1 - \sum_{n=0}^{k-1} \frac{(\lambda t)^n}{n!}\, e^{-\lambda t} = \sum_{n=k}^{\infty} \frac{(\lambda t)^n}{n!}\, e^{-\lambda t} , \quad t \ge 0 .    (3.6)

In the limiting case k = 0 we recover f_0(t) = \delta(t), F_0(t) \equiv 1, t \ge 0.
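As a cross-check of Eqs. (3.5)-(3.6) (again a sketch of ours, not from the original note), one can verify numerically that integrating the Erlang pdf reproduces the closed-form distribution function; the helper names and the crude trapezoid integrator below are illustrative choices.

```python
import math

def erlang_pdf(t, k, lam=1.0):
    """Erlang pdf f_k(t) of order k, Eq. (3.5)."""
    return lam * (lam * t) ** (k - 1) / math.factorial(k - 1) * math.exp(-lam * t)

def erlang_cdf(t, k, lam=1.0):
    """Closed form F_k(t) = 1 - sum_{n<k} (lam t)^n / n! * e^{-lam t}, Eq. (3.6)."""
    return 1.0 - sum((lam * t) ** n / math.factorial(n) for n in range(k)) * math.exp(-lam * t)

def trapezoid(f, a, b, n=4000):
    """Composite trapezoid rule, accurate enough for this smooth integrand."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# Numerically integrating the pdf from 0 to t reproduces the closed-form CDF.
t, k = 2.5, 3
assert abs(trapezoid(lambda s: erlang_pdf(s, k), 0.0, t) - erlang_cdf(t, k)) < 1e-6

# Order k = 1 is the exponential distribution itself: F_1(t) = 1 - e^{-lam t}.
assert abs(erlang_cdf(2.5, 1) - (1.0 - math.exp(-2.5))) < 1e-12
```

The k = 1 check confirms that the Erlang family contains the exponential law as its first member, in agreement with F_1(t) = \Phi(t).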
The formulas (3.4)-(3.6) can easily be obtained with the Laplace transform technique sketched in the previous section, noting that for the Poisson process we have

φ̃(s) = λ/(λ + s) , Ψ̃(s) = 1/(λ + s) , (3.7)

and for the Erlang distribution

f̃_k(s) = [φ̃(s)]^k = λ^k/(λ + s)^k , F̃_k(s) = [φ̃(s)]^k / s = λ^k/[s(λ + s)^k] . (3.8)

We also recall that the survival probability of the Poisson renewal process obeys the ordinary differential equation (of relaxation type)

dΨ(t)/dt = −λΨ(t) , t ≥ 0 ; Ψ(0⁺) = 1 . (3.9)

4 The renewal process of the Mittag-Leffler type

A "fractional" generalization of the Poisson renewal process is simply obtained by generalizing the differential equation (3.9): the first derivative is replaced by the integro-differential operator ₜD^β_*, interpreted as the fractional derivative of order β in Caputo's sense. For a sufficiently well-behaved function f(t) (t ≥ 0) we define the Caputo time-fractional derivative of order β, 0 < β < 1, through

L{ₜD^β_* f(t); s} = s^β f̃(s) − s^{β−1} f(0⁺) , f(0⁺) := lim_{t→0⁺} f(t) , (4.1)

so that

ₜD^β_* f(t) := [1/Γ(1−β)] ∫_0^t f′(τ)/(t−τ)^β dτ , 0 < β < 1 . (4.2)

This operator has been referred to as the Caputo fractional derivative since it was introduced by Caputo in the late 1960s for modelling energy dissipation in the rheology of the Earth, see [3, 4]. Soon afterwards, this derivative was adopted by Caputo and Mainardi in the framework of the linear theory of viscoelasticity, see [5]. The reader should observe that the Caputo fractional derivative differs from the usual Riemann-Liouville (R-L) fractional derivative

ₜD^β f(t) := d/dt { [1/Γ(1−β)] ∫_0^t f(τ)/(t−τ)^β dτ } , 0 < β < 1 . (4.3)

Following the approach of Mainardi et al. [19], we write, taking for simplicity λ = 1,

ₜD^β_* Ψ(t) = −Ψ(t) , t > 0 , 0 < β ≤ 1 ; Ψ(0⁺) = 1 . (4.4)
We also allow the limiting case β = 1, where all the results of the previous section (with λ = 1) are expected to be recovered; in fact, taking λ = 1 simply amounts to a normalized scaling of the variable t. For our purpose we need to recall the Mittag-Leffler function, the natural "fractional" generalization of the exponential function that characterizes the Poisson process. The Mittag-Leffler function of parameter β is defined in the complex plane by the power series

E_β(z) := Σ_{n=0}^∞ z^n / Γ(βn + 1) , β > 0 , z ∈ C . (4.5)

It turns out to be an entire function of order 1/β, which reduces for β = 1 to exp(z). For detailed information on Mittag-Leffler-type functions and their Laplace transforms the reader may consult e.g. [8, 13, 22] and the recent monograph by Gorenflo et al. [12]. The solution of Eq. (4.4) is known to be, see e.g. [5, 18],

Ψ(t) = E_β(−t^β) , t ≥ 0 , 0 < β ≤ 1 , (4.6)

so that

φ(t) := −dΨ(t)/dt = −d/dt E_β(−t^β) , t ≥ 0 , 0 < β ≤ 1 . (4.7)

The corresponding Laplace transforms read

Ψ̃(s) = s^{β−1}/(1 + s^β) , φ̃(s) = 1/(1 + s^β) , 0 < β ≤ 1 . (4.8)

Hereafter we find it convenient to summarize the most relevant features of the functions Ψ(t) and φ(t) for 0 < β < 1. We begin by quoting their series expansions for t → 0⁺ and asymptotics for t → ∞:

Ψ(t) = Σ_{n=0}^∞ (−1)^n t^{βn}/Γ(βn + 1) ∼ [sin(βπ)/π] Γ(β) t^{−β} , t → ∞ , 0 < β < 1 , (4.9)

and

φ(t) = t^{β−1} Σ_{n=0}^∞ (−1)^n t^{βn}/Γ(βn + β) ∼ [sin(βπ)/π] Γ(β + 1) t^{−(β+1)} , t → ∞ , 0 < β < 1 . (4.10)

In contrast to the Poissonian case β = 1, for 0 < β < 1 the functions Ψ(t) and φ(t) no longer decay exponentially at large t, but algebraically. As a consequence of this power-law asymptotics, the fractional Poisson process with β < 1 is no longer Markovian, as it is for β = 1, but is of long-memory type. However, we recognize that for 0 < β < 1 both Ψ(t) and φ(t) keep the "completely monotonic" character of the Poissonian case.
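The series (4.5) is directly computable numerically for moderate arguments; a small sketch (the truncation length is an arbitrary choice, kept short enough that Γ(βn + 1) stays within floating-point range, and the β = 1 case must reduce to the exponential):

```python
import math

def mittag_leffler(z, beta, n_terms=150):
    """Truncated power series of Eq. (4.5): E_beta(z) = sum_n z^n / Gamma(beta*n + 1).
    Adequate for moderate |z|; severe round-off appears for large |z|."""
    return sum(z ** n / math.gamma(beta * n + 1) for n in range(n_terms))

def survival(t, beta):
    """Psi(t) = E_beta(-t^beta), the solution (4.6) of the fractional
    relaxation equation (4.4)."""
    return mittag_leffler(-(t ** beta), beta)

# beta = 1 must recover the exponential survival of the Poisson process
for t in (0.1, 0.5, 1.0, 2.0):
    assert abs(survival(t, 1.0) - math.exp(-t)) < 1e-12
```

For 0 < β < 1 the same routine exhibits the slow algebraic decay of (4.9) at large t only qualitatively; an asymptotic formula should be used there instead of the series.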
Complete monotonicity of the functions Ψ(t) and φ(t) means

(−1)^n dⁿΨ(t)/dtⁿ ≥ 0 , (−1)^n dⁿφ(t)/dtⁿ ≥ 0 , n = 0, 1, 2, . . . , t ≥ 0 , (4.11)

or, equivalently, their representability as real Laplace transforms of non-negative generalized functions (or measures), see e.g. [9]. For the generalizations of Eqs (3.4) and (3.5)-(3.6), characteristic of the Poisson and Erlang distributions respectively, we must point out the Laplace transform pair

L{t^{βk} E_β^{(k)}(−t^β); s} = k! s^{β−1}/(1 + s^β)^{k+1} , β > 0 , k = 0, 1, 2, . . . , (4.12)

with E_β^{(k)}(z) := d^k E_β(z)/dz^k, which can be deduced from the book by Podlubny, see (4.80) in [22]. Then, using the Laplace transforms (4.8) and Eqs (4.6), (4.7), (4.12) in Eqs (2.8) and (2.9), we obtain the generalized Poisson distribution

P(N(t) = k) = t^{kβ}/k! · E_β^{(k)}(−t^β) , k = 0, 1, 2, . . . , (4.13)

and the generalized Erlang pdf (of order k ≥ 1)

f_k(t) = β t^{kβ−1}/(k−1)! · E_β^{(k)}(−t^β) . (4.14)

The generalized Erlang distribution function turns out to be

F_k(t) = ∫_0^t f_k(t′) dt′ = 1 − Σ_{n=0}^{k−1} t^{nβ}/n! · E_β^{(n)}(−t^β) = Σ_{n=k}^∞ t^{nβ}/n! · E_β^{(n)}(−t^β) . (4.15)

For the reader's convenience we conclude this section by citing other works dealing with the so-called fractional Poisson process from different points of view, see e.g. Laskin [17] and Beghin and Orsingher [1].

5 The Mittag-Leffler distribution as limit for thinned renewal processes

We now provide, in our notation, an outline of the thinning theory for renewal processes, essentially following the 1968 approach of Gnedenko and Kovalenko [11]. Examples of thinning processes are given in the 2006 book by Beichelt [2], where we read: "For instance, a cosmic particle counter registers only α-particles and ignores other types of particles. Or, a reinsurance company is only interested in claims, the size of which exceeds, say, one million dollars." We must note that other authors, like Szántai [25, 26], speak of rarefaction in place of thinning.
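The counting probabilities (4.13) can be evaluated by differentiating the series (4.5) term by term. A sketch (the truncation lengths are illustrative; the normalization check uses the Taylor-shift identity Σ_k x^k/k! · E_β^{(k)}(z) = E_β(z + x) with x = t^β and z = −t^β, so the probabilities sum to E_β(0) = 1):

```python
import math

def ml_deriv(z, beta, k, n_terms=120):
    """k-th derivative of the Mittag-Leffler series, term by term:
    E_beta^{(k)}(z) = sum_{n >= k} [n!/(n-k)!] z^{n-k} / Gamma(beta*n + 1)."""
    total = 0.0
    for n in range(k, k + n_terms):
        coeff = math.factorial(n) // math.factorial(n - k)  # exact integer ratio
        total += coeff * z ** (n - k) / math.gamma(beta * n + 1)
    return total

def frac_poisson_pmf(k, t, beta):
    """Eq. (4.13): P(N(t) = k) = t^{beta k} / k! * E_beta^{(k)}(-t^beta)."""
    return t ** (beta * k) / math.factorial(k) * ml_deriv(-(t ** beta), beta, k)

# beta = 1 recovers the Poisson counting law (3.4)
assert abs(frac_poisson_pmf(2, 1.0, 1.0) - 0.5 * math.exp(-1.0)) < 1e-9
# the fractional probabilities still sum to 1
total = sum(frac_poisson_pmf(k, 0.8, 0.7) for k in range(25))
assert abs(total - 1.0) < 1e-6
```

The term-by-term derivative is numerically safe here because the alternating terms stay of comparable magnitude for moderate t; for large t a different representation would be needed.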
Let us sketch here the essentials of this theory. Denote by t_n, n = 1, 2, 3, . . ., the instants of the events of a renewal process, with 0 = t_0 < t_1 < t_2 < t_3 < . . ., and with iid waiting times T_1 = t_1, T_k = t_k − t_{k−1} for k ≥ 2 (generically denoted by T). Thinning (or rarefaction) means that for each positive index k a decision is made: the event occurring at the instant t_k is deleted with probability p, or maintained with probability q = 1 − p, 0 < q < 1. This procedure produces a thinned (or rarefied) renewal process with fewer events (very few if q is near zero, the case of particular interest) in a moderate span of time. To compensate for this loss we change the unit of time so that we again observe a moderate number of events per moderate span of time. Such a change of the time unit is equivalent to rescaling the waiting times by a positive factor τ, so that the rescaled process has waiting times τT_1, τT_2, τT_3, . . . and instants τt_1, τt_2, τt_3, . . . . In other words, to bring the distant future into near sight we change the unit of time from 1 to 1/τ, with 0 < τ ≪ 1; for the random waiting time T this means replacing T by τT. Our intention is, loosely speaking, to choose τ in relation to the rarefaction parameter q in such a way that, for q near zero, the "average" number of events per unit of time remains unchanged. We will make these considerations precise in an asymptotic sense.
Denote by F(t) = P(T ≤ t) the probability distribution function of the (original) waiting time T, and by f(t) its density (a generalized function generating a probability measure), so that F(t) = ∫_0^t f(t′) dt′; denote analogously by F_k(t) and f_k(t) the distribution and density of the sum of k waiting times. We have recursively

f_1(t) = f(t) , f_k(t) = ∫_0^t f_{k−1}(t − t′) dF(t′) , k ≥ 2 . (5.1)

Observing that after a maintained event the next one of the original process is kept with probability q, but dropped in favour of the second-next with probability pq and, generally, n − 1 events are dropped in favour of the n-th next with probability p^{n−1} q, we get for the waiting-time density of the thinned process

g_q(t) = Σ_{n=1}^∞ q p^{n−1} f_n(t) . (5.2)

With the modified waiting time τT we have P(τT ≤ t) = P(T ≤ t/τ) = F(t/τ), hence the density f(t/τ)/τ, and analogously f_n(t/τ)/τ for the density of the sum of n waiting times. The waiting-time density of the rescaled (and thinned) process now turns out to be

g_{q,τ}(t) = Σ_{n=1}^∞ q p^{n−1} f_n(t/τ)/τ . (5.3)

In the Laplace domain we have f̃_n(s) = [f̃(s)]^n, hence (using p = 1 − q)

g̃_q(s) = Σ_{n=1}^∞ q p^{n−1} [f̃(s)]^n = q f̃(s) / [1 − (1 − q) f̃(s)] , (5.4)

from which, by Laplace inversion, we can in principle construct the waiting-time density of the thinned process. By rescaling we get

g̃_{q,τ}(s) = Σ_{n=1}^∞ q p^{n−1} [f̃(τs)]^n = q f̃(τs) / [1 − (1 − q) f̃(τs)] . (5.5)

Being interested in stronger and stronger thinning (infinite thinning), let us now consider a scale of processes with the parameters τ (of rescaling) and q (of thinning), with q tending to zero under a scaling relation q = q(τ) yet to be specified. There are essentially two cases for the waiting-time distribution: its expectation value (the first moment) is finite, or it is infinite.
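The thinning-plus-rescaling limit can be illustrated by Monte Carlo in the finite-mean case β = 1, where the limit is Rényi's exponential. The uniform waiting-time law, the value q = 0.01 and the sample size below are arbitrary illustrative choices:

```python
import math
import random

def thinned_waiting_time(q, rng):
    """One waiting time of the thinned process: each renewal epoch of the
    original process (uniform(0, 2) waiting times, mean mu = 1) is kept with
    probability q, so a thinned waiting time is a geometric(q) number of
    original waiting times."""
    t = 0.0
    while True:
        t += rng.uniform(0.0, 2.0)
        if rng.random() < q:
            return t

rng = random.Random(1)
q = 0.01                 # strong thinning
tau = q                  # scaling q = mu * tau^beta with mu = 1, beta = 1
samples = [tau * thinned_waiting_time(q, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)
surv_at_1 = sum(s > 1.0 for s in samples) / len(samples)
# Renyi's limit: the rescaled waiting time is approximately Exp(1)
assert abs(mean - 1.0) < 0.05
assert abs(surv_at_1 - math.exp(-1.0)) < 0.025
```

For 0 < β < 1 the same experiment with a power-law-tailed waiting time would instead approach the Mittag-Leffler density (5.11), though the convergence is slower.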
In the first case we put

μ = ∫_0^∞ t′ f(t′) dt′ < ∞ , β = 1 . (5.6)

In the second case we assume a tail of power-law type,

Ψ(t) := ∫_t^∞ f(t′) dt′ ∼ (c/β) t^{−β} , t → ∞ , 0 < β < 1 . (5.7)

Then, by Tauberian theory (see e.g. [9, 27]) the above conditions mean, in the Laplace domain,

f̃(s) = 1 − μ s^β + o(s^β) , s → 0⁺ , (5.8)

with a positive coefficient μ and 0 < β ≤ 1. The case β = 1 obviously corresponds to the situation with finite first moment, whereas the case 0 < β < 1 is related to a power-law tail with c = μ Γ(β + 1) sin(βπ)/π. Now, passing to the limit q → 0 of infinite thinning under the scaling relation

q = μ τ^β , 0 < β ≤ 1 , (5.9)

between the positive parameters q and τ, the Laplace transform g̃_{q,τ}(s) in (5.5) of the rescaled, thinned process tends, for fixed s, to

g̃(s) = 1/(1 + s^β) , (5.10)

which corresponds to the Mittag-Leffler density

g(t) = −d/dt E_β(−t^β) = φ_ML(t) . (5.11)

We note that in the literature the distribution (5.10) is known as the positive Linnik distribution, see e.g. [7], and was used by Pillai [21] to define the Mittag-Leffler distribution. Let us remark that Gnedenko and Kovalenko obtained (5.10) in 1968 as the Laplace transform of the limiting density, but did not identify it as the Laplace transform of a Mittag-Leffler-type function, even though this Laplace transform had been known since the late 1950s from the Bateman Handbook [8]. Observe once again that in the special case β = 1 we recover the Poisson process as the limiting process, as formerly shown in 1956 by Rényi [23].

6 Conclusions

We have revised the basic theory of renewal processes, including the standard and fractional Poisson processes, and in particular the thinning theory with a power-law tail, in order to point out the relevance of the Mittag-Leffler functions. These processes are essential in the theory of continuous-time random walks and, under rescaling of time and space coordinates, in space-time fractional diffusion processes, as shown e.g.
in papers by Gorenflo and Mainardi; see the survey [15]. For the thinning theory we have followed the original Laplace-transform approach of Gnedenko and Kovalenko, who, however, did not recognize the Mittag-Leffler functions. In practice this note points out a further, not so well known, application of functions of Mittag-Leffler type in stochastic processes.

References

[1] L. Beghin and E. Orsingher, Iterated elastic Brownian motions and fractional diffusion equations, Stochastic Processes and their Applications 119 (2009), 1975-2003.
[2] F. Beichelt, Stochastic Processes in Science, Engineering and Finance, Chapman & Hall/CRC, Boca Raton (2006).
[3] M. Caputo, Linear models of dissipation whose Q is almost frequency independent, Part II, Geophys. J. R. Astr. Soc. 13 (1967), 529-539.
[4] M. Caputo, Elasticità e Dissipazione, Zanichelli, Bologna (1969).
[5] M. Caputo and F. Mainardi, Linear models of dissipation in anelastic solids, Riv. Nuovo Cimento (Ser. II) 1 (1971), 161-198.
[6] D.R. Cox, Renewal Theory, 2nd edn, Methuen, London (1967).
[7] L. Devroye, A note on Linnik's distribution, Statistics & Probability Letters 9 (1990), 305-306.
[8] A. Erdélyi, W. Magnus, F. Oberhettinger and F.G. Tricomi, Higher Transcendental Functions, Bateman Project, Vol. 3, McGraw-Hill, New York (1955). [Ch. 18: Miscellaneous Functions, pp. 206-227]
[9] W. Feller, An Introduction to Probability Theory and its Applications, Vol. 2, 2nd edn, Wiley, New York (1971). [1st edn 1966]
[10] I.M. Gel'fand and G.E. Shilov, Generalized Functions, Vol. 1, Academic Press, New York (1964). [English translation from the Russian (Nauka, Moscow, 1959)]
[11] B.V. Gnedenko and I.N. Kovalenko, Introduction to Queueing Theory, Israel Program for Scientific Translations, Jerusalem (1968). [Translated from the Russian]
[12] R. Gorenflo, A. Kilbas, F. Mainardi and S. Rogosin, Mittag-Leffler Functions, Related Topics and Applications, Springer, Berlin (2014).
[13] R. Gorenflo and F. Mainardi, Fractional calculus: integral and differential equations of fractional order, in: A. Carpinteri and F. Mainardi (eds), Fractals and Fractional Calculus in Continuum Mechanics, Springer Verlag, Wien (1997), pp. 223-276.
[14] R. Gorenflo and F. Mainardi, Continuous time random walk, Mittag-Leffler waiting time and fractional diffusion: mathematical aspects, in: R. Klages, G. Radons, I.M. Sokolov (eds), Anomalous Transport: Foundations and Applications, Wiley-VCH, Weinheim, Germany (2008), Ch. 4, pp. 93-127. [E-print: http://arxiv.org/abs/0705.0797]
[15] R. Gorenflo and F. Mainardi, Parametric subordination in fractional diffusion processes, in: J. Klafter, S.C. Lim and R. Metzler (eds), Fractional Dynamics, World Scientific, Singapore (2012), Ch. 10, pp. 229-263. [E-print: http://arxiv.org/abs/1210.8414]
[16] A.Ya. Khintchine, Mathematical Methods in the Theory of Queueing, Charles Griffin, London (1960).
[17] N. Laskin, Fractional Poisson processes, Comm. Nonlinear Sci. Num. Sim. 8 (2003), 201-213.
[18] F. Mainardi and R. Gorenflo, Time-fractional derivatives in relaxation processes: a tutorial survey, Fract. Calc. Appl. Anal. 10 (2007), 269-308.
[19] F. Mainardi, R. Gorenflo and E. Scalas, A fractional generalization of the Poisson processes, Vietnam Journal of Mathematics 32 SI (2004), 53-64.
[20] K.V. Mitov and E. Omey, Renewal Processes, Springer, Heidelberg (2014).
[21] R.N. Pillai, On Mittag-Leffler functions and related distributions, Ann. Inst. Statist. Math. 42, No 1 (1990), 157-161.
[22] I. Podlubny, Fractional Differential Equations, Academic Press, New York (1999).
[23] A. Rényi, A characteristic of the Poisson stream, Proc. Math. Inst. Hungarica Acad. Sci. 1 (1956), No 4, 563-570. [In Hungarian]
[24] S.M. Ross, Introduction to Probability Models, 6th edn, Academic Press, New York (1997).
[25] T. Szántai, Limiting distribution for the sums of random number of random variables concerning the rarefaction of recurrent events, Studia Scientiarum Mathematicarum Hungarica 6 (1971), 443-452.
[26] T. Szántai, On an invariance problem related to different rarefactions of recurrent events, Studia Scientiarum Mathematicarum Hungarica 6 (1971), 453-456.
[27] D.V. Widder, The Laplace Transform, Princeton University Press, Princeton (1946).
Nonlinear magnetization of graphene

Sergey Slizovskiy and Joseph J. Betouras
Department of Physics, Loughborough University, Loughborough LE11 3TU, UK

Abstract. We compute the magnetization of graphene in a magnetic field, taking into account for generality the possibility of a mass gap. We concentrate on the physical regime where quantum oscillations are not observed due to the effect of the temperature or disorder and show that the magnetization exhibits non-linear behaviour as a function of the applied field, reflecting the strong non-analyticity of the two-dimensional effective action of Dirac electrons. The necessary values of the magnetic field to observe this non-linearity vary from a few tesla for very clean suspended samples to 20-30 T for good samples on substrate. In the light of these calculations, we discuss the effects of disorder and interactions as well as the experimental conditions under which the predictions can be observed.

DOI: 10.1103/PhysRevB.86.125440; arXiv: 1203.5044
28 Sep 2012

PACS numbers: 75.70.Ak, 73.22.Pr

I. INTRODUCTION

The physics of graphene has attracted a huge amount of interest since its discovery [1], due to its unique physical properties as well as its potential technological importance. The electronic properties and the behaviour in a strong magnetic field have been the focus of wide activity, as recent reviews summarize [2, 3]. The Dirac-like spectrum of clean graphene has been established, see e.g. Ref. [4] and references therein. Moreover, the partition functions of two-dimensional massless systems are known to exhibit strong non-analytic behaviour as a function of external fields [5, 6]. This kind of behaviour can lead, for example, to non-Ohmic conductivity due to Schwinger pair production under certain conditions [7, 8], or to signatures of quantum criticality [9].
In the case of conventional metals, the magnetism is the result of two contributions, coming from the spin (Pauli paramagnetism) and from the orbitals (Landau diamagnetism). In this work we examine in detail the orbital magnetization of graphene in a magnetic field, including for generality the possibility of a mass gap, and show the appearance of a non-linear dependence on the applied field, as a consequence of the non-quadratic field dependence of the partition function. The Pauli magnetization of graphene is linear as a function of doping and much smaller than the orbital magnetization, especially for low carrier densities (chemical potential close to 0) [10]. A possible nonlinearity of the Pauli contribution, a known phenomenon arising from the saturation of localized moments at higher fields, is discussed in the part of this work where the experimental verification is proposed. The nonlinearity appears when the linear-response approximation fails; that approximation is applicable when there is a mass scale larger than the applied perturbation, since this scale controls the perturbative expansion. For graphene, such a scale could be, e.g., a mass gap, the temperature, or the impurity scattering rate. Since the magnetic energy scale in graphene is exceptionally large due to its linear dispersion, even at moderate magnetic fields the magnetic energy scale becomes dominant and violates the basic assumptions of the linear-response regime. The non-linearity survives even at relatively small magnetic fields for sufficiently clean samples, so it is not sufficient to compute only the zero-field magnetic susceptibility. Typically, the observed non-linear effect of a strong magnetic field is the magnetic oscillations, but these oscillations average out at elevated temperatures or due to impurities, producing a linear magnetization.
Here we focus on the regime where the magnetic oscillations average out but, nevertheless, a smooth non-linearity remains. In the following sections we discuss the effects of temperature, mass gap and disorder, as well as interaction effects at a phenomenological level, and propose an experiment which can demonstrate these predictions.

II. CLEAN GRAPHENE WITH A POSSIBLE MASS GAP

At zero temperature and zero chemical potential, when Landau levels (LLs) are formed with energies E_n = sign(n) √(α|n B|), where α = 2eℏv_F² and n is an integer, the grand canonical potential of clean graphene in a magnetic field reads

Ω_vac = g_s g_v √α |B|^{3/2} C ζ(3/2)/(4π) − 0.1654 a |B| C ,

with a = 0.142 nm the distance between the carbon atoms, the degeneracy factor C = e/(2πℏ), and g_v = 2, g_s = 2 accounting for valley and spin degeneracy; ζ is the zeta function, and ζ(3/2)/(4π) = −ζ(−1/2) ≈ 0.2079. The subleading terms are lattice corrections, discussed elsewhere [10, 11]; the numerical factor corresponds to a nearest-neighbour tight-binding calculation and describes orbital paramagnetic contributions due to higher energy levels having non-Dirac dispersion. Corrections due to deviations of the dispersion from the Dirac-like form appear either through interaction effects, which we discuss below, or through lattice effects when B ≫ 100 T, and are not discussed further here. The leading term, though, is non-analytic in the magnetic field and naturally leads to a divergent diamagnetic susceptibility at the Dirac point. Such non-analyticities are typical not only of the effective action of massless systems but also of quantum critical points [12-15] and even of corrections to Fermi-liquid theory [16]. When temperature-averaged over many Landau levels, this non-analytic contribution cancels out, as first shown by McClure [17]. When the magnetic field overcomes the energy scale set by the temperature (or by impurity scattering), this nonlinearity can be observed.
For generality, and to compare energy scales, we take into account a mass gap ∆ in the dispersion relation, which can be created experimentally, e.g., through A-B sublattice asymmetry caused by a SiC substrate or by regular deposition of impurities [20, 21]. At zero magnetic field it was demonstrated that the pseudo-spin degree of freedom (due to the valleys) produces a diamagnetic susceptibility which is [22-24]

χ(ǫ) = −[g_s g_v α C/(12|∆|)] θ(|∆| − ǫ) . (1)

It is evident that the mass gap resolves the formal δ-function singularity in the susceptibility, but when the magnetic energy scale exceeds that of the gap, the non-analyticities of the free energy again become dominant. In a magnetic field B and in the presence of the mass gap, the spectrum at low energies becomes

E_{n≠0} = sign(n) √(α|n B| + ∆²) ; E_0 = (−1)^v ∆ , (2)

where v = 0, 1 enumerates the two Dirac valleys. In the absence of the gap, or when its size is smaller than the distance between the first LLs, ∆ < √(α|B|), the linear expression (1) is not applicable and, as we will show, the correct result leads to nonlinear magnetization. The regularized free energy reads (at chemical potential µ = 0, temperature T = 0)

Ω(∆, 0, 0) = −g_s C|B| [ Σ_{n=0}^∞ + Σ_{n=1}^∞ ] √(αn|B| + ∆²)
           =_reg −g_s C α^{1/2} |B|^{3/2} [ −(∆²/(α|B|))^{1/2} + 2ζ(−1/2, ∆²/(α|B|)) + (4/3)(∆²/(α|B|))^{3/2} ] , (3)

where we have regularized by subtracting the B = 0 expression, and ζ(s, a) is the generalized Hurwitz ζ-function. This also agrees with the related formula in Ref. [22].
The above expression is exact in the limit of free electrons and has a strong-field expansion valid for δ² ≡ ∆²/(α|B|) ≪ 1 (when the gap is smaller than or comparable to the distance between the first LLs), leading to the strong-field expansion of the magnetization (µ = 0, T = 0):

M(∆, 0, 0) = g_s C √(α|B|) [ 3ζ(−1/2) + δ + (ζ(1/2)/2) δ² + (ζ(3/2)/8) δ⁴ − (3ζ(5/2)/16) δ⁶ + O(δ⁸) ] , (4)

where δ ≡ ∆/√(α|B|) = 27.5 ∆(eV)/√(|B|(T)); for example, for ∆ = 0.1 eV this formula works starting from approximately 7.5 T, indicating the scale at which the nonlinearities occur. At T = 0 and as ∆ → 0, the range of values of B where the magnetization is linear shrinks to zero; the nonlinearity of the magnetization therefore gets stronger as the gap and the temperature are reduced. The susceptibility following from Eq. (3) reads

χ(∆, 0, 0) = [g_s C √α/(2√|B|)] { 3ζ(−1/2, ∆²/(α|B|)) − 2(∆²/(α|B|)) ζ(1/2, ∆²/(α|B|)) − (∆⁴/(α²|B|²)) ζ(3/2, ∆²/(α|B|)) } . (5)

At finite temperature and chemical potential, after performing the same subtraction as in the regularization of Ω(∆, µ = 0, T = 0), and additionally subtracting µN(µ = 0, T = 0), which is independent of B, we obtain [25]

Ω(∆, µ, T) = Ω(∆, 0, 0) − g_s C B k_B T { log[1 + e^{(−∆+µ)/(k_B T)}] + 2 Σ_{n=1}^∞ log[1 + e^{(−E_n+µ)/(k_B T)}] } − [µ → −µ] , (6)

where −[µ → −µ] denotes the preceding braced term repeated with µ replaced by −µ. At zero chemical potential the temperature plays a role similar to that of the gap. It is instructive to compare McClure's result for the magnetic susceptibility at finite temperature,

χ = −(g_s g_v C/24) [α/(k_B T)] sech²[µ/(2k_B T)] ,

with the zero-temperature result in the presence of a gap: at 2k_B T = ∆ the linear magnetization is the same (with µ = 0), while the non-linear parts are different, as shown in Fig. 1. For non-zero chemical potential and temperature there are two regimes: the well-known low-field regime and the high-field regime, which sets in when E_1 = √(α|B|) ≳ 2µ.
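The strong-field expansion Eq. (4) is easy to evaluate numerically. The sketch below hardcodes the standard numerical values of the zeta constants and absorbs the dimensional prefactor C√α into the units, so only the field dependence and the sign are meaningful:

```python
import math

# Riemann zeta values entering the expansion of Eq. (4) (standard constants)
ZETA_M12 = -0.2078862249773546   # zeta(-1/2)
ZETA_12 = -1.4603545088095868    # zeta(1/2)
ZETA_32 = 2.612375348685488      # zeta(3/2)
ZETA_52 = 1.3414872572509171     # zeta(5/2)

def magnetization_bracket(delta):
    """Dimensionless bracket of Eq. (4), i.e. M / (g_s C sqrt(alpha |B|)),
    with delta = Delta / sqrt(alpha |B|); valid for delta << 1."""
    return (3.0 * ZETA_M12 + delta + 0.5 * ZETA_12 * delta ** 2
            + ZETA_32 / 8.0 * delta ** 4 - 3.0 * ZETA_52 / 16.0 * delta ** 6)

def magnetization(B_tesla, gap_eV, gs=2.0):
    """Strong-field magnetization with the prefactor C sqrt(alpha) absorbed
    (arbitrary units); uses delta = 27.5 * Delta(eV) / sqrt(B(T)) as quoted
    in the text."""
    delta = 27.5 * gap_eV / math.sqrt(B_tesla)
    return gs * math.sqrt(B_tesla) * magnetization_bracket(delta)

# gapless graphene at mu = 0 is diamagnetic, with |M| growing as sqrt(B)
assert magnetization(10.0, 0.0) < 0.0
assert abs(magnetization(40.0, 0.0) / magnetization(10.0, 0.0) - 2.0) < 1e-12
# a small gap reduces the diamagnetic response (the +delta term in Eq. (4))
assert magnetization_bracket(0.3) > magnetization_bracket(0.0)
```

The two asserted properties, the √B scaling at zero gap and the gap-induced reduction of diamagnetism, are exactly the qualitative features discussed around Eq. (4).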
[Fig. 1 caption, continued: In addition, the magnetization at fixed chemical potential 100 meV and at fixed electron concentration 10¹² cm⁻² at T = 300 K is plotted; low- and high-field asymptotics are shown.]

When the chemical potential exceeds the temperature scale, the magnetization rapidly decreases, as expected, but it starts to grow when the separation between the LLs becomes comparable to the chemical potential; this behaviour is shown by the upper curve in Fig. 1. At such fields the zeroth LL gives the leading constant paramagnetic contribution to the magnetization, while the same non-linear vacuum-energy contribution remains: for √(α|B|) ≫ 2(µ + ∆) ≫ k_B T,

M(∆, µ, T) ≈ g_s C (µ − ∆) + M(∆, 0, 0) ; (7)

this asymptotic regime is shown by the upper green dot-dashed line in Fig. 1. When the temperature decreases we obtain de Haas-van Alphen (dHvA) oscillations for non-zero chemical potential, as expected [22]. The non-linearity we discuss can alternatively be interpreted as a remnant of these dHvA oscillations. When the number of particles is fixed instead of the chemical potential (both situations are experimentally realizable, for graphene on a substrate or for suspended flakes), µ is determined from the relation N = −∂Ω/∂µ. At low temperature the de Haas-van Alphen oscillations are observed. At high fields the chemical potential inevitably tends to zero, since almost all the electrons (or holes, if µ < 0) can be hosted by the zeroth LL. This leads to the |B|^{1/2} asymptotic behaviour of the magnetization corresponding to Eq. (3), shown by the black solid curve in Fig. 1. For completeness, we briefly comment on the case of bilayer graphene, where the zero-field susceptibility [26, 27] diverges logarithmically as the Fermi energy ǫ_F → 0, the divergence being cut off by the greater of the trigonal-warping scale ǫ_trig and ǫ_F.
When we increase the magnetic field, the magnetic energy scale √(α|B|) eventually becomes the largest scale, leading to a weak logarithmic non-linearity of the magnetization. The asymptotic form of the magnetic thermodynamic potential is [28]

Ω = (g_s g_v/8π) (e²v_F²/γ_1) log[γ_1/√(α|B|)] B² ,

where γ_1 ≈ 0.4 eV is the interlayer hopping energy and the magnetic scale √(α|B|) is assumed to be larger than the trigonal-warping energy ǫ_trig and ǫ_F; otherwise one replaces √(α|B|) → ǫ_F. Then one gets

M ≈ −|B| [g_s g_v e²v_F²/(8πγ_1)] log[γ_1²/(|B|α)] . (8)

At even larger magnetic fields, B ≳ 100 T, the magnetic energy becomes larger than the interlayer hopping γ_1, effectively reducing the bilayer to two monolayers. Since the non-linearity of the magnetization of bilayer graphene is significantly weaker than for the monolayer, we expect impurities to make this effect hard to observe; we therefore concentrate on the monolayer in what follows. N-layer graphene was shown to have [N/2] bilayer bands and N mod 2 monolayer bands [26].

III. EFFECT OF IMPURITIES

Besides the mass gap and the temperature, the nonlinearity of the magnetization is influenced by impurities and interactions. We first consider short-range scattering impurities and then the effect of charge inhomogeneities: the electron and hole puddles.

A. Short-range impurities

Consider for simplicity short-range impurities with momentum-independent scattering. It is sufficient to adopt the self-consistent Born approximation (SCBA) [29, 31]. The treatment of vacancy-type impurities [32], or the commonly used phenomenological Lorentzian broadening, leads to qualitatively similar conclusions, as we have checked. In the SCBA [29], the self-energy reads

Σ(ǫ) = (W α|B|/2) Σ_n g(ǫ_n)/[ǫ − ǫ_n − Σ(ǫ)] , (9)

where ǫ_n is the full spectrum and g(ǫ) is a smooth cut-off function.
This is valid for small disorder strength W = n_i u_i² / (4π ℏ² v_F²) ≪ 1, where n_i is the impurity concentration and u_i a measure of the strength of the random on-site impurity potential. For a discrete LL spectrum and small W we obtain the solution iteratively. The density of states is

ρ(ε) = −(g_v g_s / (2π² ℏ² v_F² W)) Im Σ(ε + i0).   (10)

In the strong-field regime the single-level approximation for the level width works well, as we have checked numerically. Solving Eq. (9) with a single energy level gives two roots, of which the relevant solution is Σ(ε) = ½[ε − ε_n − sign(ε − ε_n) √((ε − ε_n)² − 2αW|B|)]; therefore the density of states can be approximated by the semicircle form

ρ(ε) = (g_v g_s C / (παW)) √(2αW|B| − ε²) θ(2αW|B| − ε²),   (11)

where θ is a step function and ε denotes the deviation from the LL of the clean system. In the single-level approximation we neglect the shift of the level centre due to the real part of the self-energy. A similar form for the density of states is supported by more elaborate computations, see e.g. 30. Note that the level width scales as √|B|. The above approximation can be applied for levels ε_n up to n ≲ 1/(8W); for higher levels their overlap becomes essential. Since at medium or strong fields the higher levels are typically far from the Fermi surface, we may use the same level broadening for all levels, since this does not alter the result for the magnetization. Under the assumption that all levels are broadened with the same profile ρ(ε), the partition function can be computed by integrating Eq. (6) with the broadened chemical potential:

Ω_imp = ∫ dε [ρ(ε − µ) / ∫ ρ(ε′) dε′] Ω(∆, ε, T).   (12)

We note that this equation automatically inherits the regularization from Ω(∆, ε, T). For the magnetization one obtains a similar formula, but with an extra contribution coming from the B-derivative of the broadening profile. At high fields this contribution is paramagnetic, due to the broadening of the zeroth LL.
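The single-level SCBA above has a closed-form root; the sketch below checks it against a damped fixed-point iteration of Eq. (9) restricted to one level at ε_n = 0, and also inverts the semicircle width √(2αW|B|) to recover W ≈ 0.003 from the measured line width δ ≈ 3√(B[T]) meV quoted below. The scale α = (36 meV)² per Tesla is an assumed graphene LL constant of this sketch, not a number from the paper.

```python
import math

ALPHA = 36.0 ** 2        # meV^2 per Tesla (assumed graphene LL scale)
W, B = 0.003, 10.0
C2 = 0.5 * W * ALPHA * B  # W*alpha*|B|/2

def sigma_iterative(eps, eta=1e-9, steps=4000):
    """Damped fixed-point iteration of Sigma = C2/(eps - Sigma)."""
    s = -1e-3j
    for _ in range(steps):
        s = 0.5 * s + 0.5 * C2 / (eps + 1j * eta - s)  # mixing stabilizes
    return s

def sigma_closed(eps):
    """Relevant (retarded) root of Sigma^2 - eps*Sigma + C2 = 0."""
    if eps * eps < 4.0 * C2:              # inside the broadened level
        return 0.5 * (eps - 1j * math.sqrt(4.0 * C2 - eps * eps))
    return 0.5 * (eps - math.copysign(math.sqrt(eps * eps - 4.0 * C2), eps))

# Semicircle half-width sqrt(2*alpha*W*B) and the line width delta ~
# 3*sqrt(B) meV together imply W = 9/(2*alpha) ~ 0.003, as in the text.
W_from_delta = 3.0 ** 2 / (2.0 * ALPHA)
```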
As mentioned above, there is no significant dependence on the actual width of the higher LLs. The effect of temperature may alternatively be taken into account by convolving the impurity broadening with the derivative of the Fermi function with respect to energy, f′ 33. The typical line width of a high-quality sample 34,37 is estimated to be δ ≈ 3√(B(T)) meV, so W ≈ 0.003, and the Fermi energy is µ ≈ 14 meV. Such a high-quality sample is close to the ideal case and exhibits strongly non-linear ∼ √|B| magnetization already at 1 Tesla, as illustrated in Fig. 2. From this subsection we conclude that low concentrations of short-range impurities do not significantly alter the magnetization. For larger impurity concentrations the above analysis is not applicable.

B. Charge inhomogeneities

It was shown experimentally that for many graphene samples charge inhomogeneities play the dominant role 38. The effect of charge inhomogeneities can be modelled as a long-range smooth variation of the carrier density N (electron and hole puddles), with a range comparable to the size of the cyclotron orbits. A simple model proposed in Ref. [38] fits the experiment well. It assumes a Gaussian variation of N with a standard deviation of order δN ≈ 4·10¹¹ cm⁻². Since N ∼ µ², the variation δN leads to a large δµ close to the tip of the Dirac cone and results in a large broadening of the zeroth LL. This contrasts with the equal-profile broadening coming from scattering on short-range impurities. Apart from the variation of the charge density, the charged impurities cause the usual broadening of the levels. We use a simple model of constant Lorentzian broadening, independent of the magnetic field, as compared to the short-range impurities, where we obtained a √B dependence. This does not significantly change the result and is partly justified by computations in Ref.
[35], where the B dependence was shown to be shallower than √B, due to the fact that the screening increases with B. To perform the calculations, we broaden the levels with a Lorentzian profile, obtaining the density of states ρ_ε(E); we then integrate it to find N(E_F) and its inverse E_F(N), and finally contract with the Gaussian density-fluctuation profile

P(N, N̄, δN) = (1/(√(2π) δN)) exp[−(N − N̄)²/(2δN²)].   (13)

For example, the density of states is given by

ρ(N̄) = ∫ dN ρ_ε(E_F(N)) P(N, N̄, δN).   (14)

The consideration of graphene with a fixed total number of electrons corresponds to the situation of graphene on a substrate, with the particle number proportional to the gate voltage. For suspended graphene (or an exfoliated flake) one may imagine the situation of a fixed local chemical potential at the spots where it touches a contact. Assuming that these spots are far from charged impurities (neutral spots), the local electron density at such spots coincides with the average value N̄, so we may use the same function N(E_F) as above to convert ρ(N̄) to ρ(N̄(µ)). The resulting density of states (DOS) as a function of the chemical potential is plotted in Fig. 3 for magnetic fields of 10 and 16 Tesla, δN ≈ 4·10¹¹ cm⁻² and Lorentzian broadening with half-width ε = 15 meV. The same figure, but plotted against the electron density N (or gate voltage), can be found in Ref. [38]. For high temperatures the level broadening is dominated by temperature, and for low temperatures by impurities. For low temperatures and Lorentzian impurity broadening ε, Eq. (6) for the grand canonical potential, combined with Eq. (12), gives at T = 0:

Ω(∆, µ, ε) = Ω(∆, 0, 0) + g_s C B [F(µ + ∆) + 2 ∑_{n=1}^{∞} F(µ + E_n)] + [µ → −µ],   (15)

where E_n = √(αn|B| + ∆²),

F(e) = (e/π) [arctan(max(Λ, e)/ε) − arctan(max(−Λ, e)/ε)] + (ε/2π) log[(max(e, −Λ)² + ε²) / (max(e, Λ)² + ε²)],

and Λ is a large cut-off for the Lorentzian broadening.
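A quick numerical check of the level-weight function F(e) as reconstructed in Eq. (15): in the clean limit ε → 0 it must reduce to the T = 0 rule F(e) → e for e < 0 and F(e) → 0 for e > 0, i.e. only levels below the chemical potential contribute. The cut-off value below is arbitrary.

```python
import math

LAMBDA = 1.0e4   # large Lorentzian cut-off (illustrative value)

def F(e, eps):
    # F(e) of Eq. (15), as reconstructed in the text above
    hi = max(LAMBDA, e)
    lo = max(-LAMBDA, e)
    out = (e / math.pi) * (math.atan(hi / eps) - math.atan(lo / eps))
    out += (eps / (2.0 * math.pi)) * math.log((lo * lo + eps * eps) /
                                              (hi * hi + eps * eps))
    return out

# clean limit: F(-5) -> -5 (occupied level), F(+5) -> 0 (empty level)
```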
To see the effect of the charge puddles on the magnetization, we compute the magnetization as a function of the electron density N and then convolve it with the Gaussian profile:

M(N̄, δN, T) = (1/(√(2π) δN)) ∫ dN M(N, T) e^{−(N − N̄)²/(2δN²)}.   (17)

This prescription follows from summation over separate puddles; N̄ denotes the average electron density. The results for fixed average carrier number N̄ and for fixed chemical potential µ of a neutral spot are shown in Figs. 4 and 5. From these plots we see that the charge disorder observed in experiments plays an important role in smearing the magnetic oscillations (the plot for lower charge disorder shows clear magnetic oscillations), as does the temperature, but the resulting magnetization is still non-linear. The non-linearity of the magnetization gets weaker with increase of both types of disorder, but in different ways; see the insets to Figs. 4 and 5.

IV. EFFECT OF INTERACTIONS

It is worth noting that the most drastic effect of interactions is the splitting of the zeroth LL into two levels separated by a new gap 2∆̃ (different from the initial ∆) when µ ≈ 0 and the zeroth LL is not completely filled 36,37,39. ∆̃ grows with magnetic field. The precise form of its dependence on the magnetic field is an open question and depends on the sample. Generally, the following kinds of magnetic-field-induced energy gaps arise: (i) contributions linear in B, coming from the Zeeman spin splitting ∆_Z = 2µ_B B ≈ 0.11 B(T) meV and from the potential pseudospin splitting (from a Kekulé-type distortion of the lattice) ∆_Kekulé ≈ 0.2 B(T) meV; and (ii) interaction contributions, scaling as √B, as explained e.g. in Ref. [3] and references therein. The experimental data in Ref. [37] contain significant uncertainty, allowing for various fits. For samples with high mobility, 17000 cm²/(V s), a fit by √B looks reasonable: ∆̃ ≈ k_{1/2}√|B|, with k_{1/2} ≈ 3 meV/√T.
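The Gaussian puddle average of Eq. (17) above is a one-dimensional convolution and can be sketched with a simple quadrature; M_toy below is a placeholder, not the paper's M(N, T). For a linear M(N) the average must return M(N̄) exactly, which makes a convenient sanity check.

```python
import math

def puddle_average(M, Nbar, dN, ngrid=4001, span=8.0):
    """Gaussian average of Eq. (17): int dN M(N) P(N, Nbar, dN)."""
    lo = Nbar - span * dN
    h = 2.0 * span * dN / (ngrid - 1)     # uniform grid over +/- span sigma
    total = 0.0
    for i in range(ngrid):
        N = lo + i * h
        w = math.exp(-(N - Nbar) ** 2 / (2.0 * dN ** 2))
        total += w * M(N)
    return total * h / (math.sqrt(2.0 * math.pi) * dN)

M_toy = lambda N: 2.0 + 0.5 * N    # placeholder magnetization curve
```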
For samples with lower mobility a fit linear in B is within the error bars: ∆̃ ≈ k_linear|B| with k_linear ≈ 0.8 meV/T at half-filling. This is 7 times larger than the Zeeman splitting, indicating a different mechanism. The splitting of the zeroth LL leads to an extra paramagnetic contribution, as compared to Eq. (6) with ∆̃ = 0, µ = 0. The correction to the thermodynamic potential is easily computed by replacing log 2 → ½ log(1 + e^{−∆̃/(k_B T)}) + ½ log(1 + e^{+∆̃/(k_B T)}). For the linear dependence on B, defining x = k_linear|B|/(k_B T), we have in the single-particle approximation

δM = (C g_s k_B T / 2) [2 log cosh(x/2) + x tanh(x/2)].

This is shown by the dot-dashed curve in Fig. 2. The paramagnetic nature of this contribution can be understood as due to the reduction in energy of the filled half of the zeroth LL by interactions. The correction is quadratic for small x and linear at large x. For the √B splitting, the correction to the magnetization is

δM = (C g_s k_B T / 2) [2 log cosh(x/2) + ½ x tanh(x/2)], with x = k_{1/2}√|B|/(k_B T).

The different fittings of the splitting of the zeroth LL as a function of B lead to different B-dependences of the magnetization, which, in turn, can provide one more method to distinguish between the main possibilities. We note that there is still no fully satisfactory theoretical explanation of the level-splitting effect 37, and its microscopic explanation is beyond the scope of this work.

V. EXPERIMENTAL PROPOSAL AND DISCUSSION

Measurements of the non-linear magnetization can be used to study various properties of a graphene sample: the magnetization is sensitive to the number of carriers, the mass gap and the disorder, as well as to the number of layers. In particular, one can extract the magnetic-field dependence of the interaction-induced splitting of the zeroth LL. One possible way to measure the non-linear magnetization is to measure the magnetization of a suspended graphene flake with scanning SQUID microscopy. Alternatively, one may go to higher magnetic fields (up to 50 Tesla) with a larger amount of lower-quality graphene samples, e.g. the graphene laminate used in Ref.
[40]. At high fields the cyclotron orbits fit better into the small-sized flakes and one can neglect boundary effects. In real samples there will be a significant non-linear part of the magnetization coming from localized impurity spins (the Pauli contribution), but this effect can be fitted at lower magnetic fields, or at fields parallel to the surface, and consistently subtracted 41. Moreover, at magnetic fields greater than or of the order of 10 Tesla and at low temperatures T < 4 K, all the localized moments are expected to reach saturation, so that the detected non-linearity will be a consequence of the orbital contribution. In conclusion, the two-dimensional nature and the linear spectrum of graphene are the necessary conditions to observe non-linear magnetization at accessible magnetic fields. The underlying reason is the breakdown of linear response theory, due to the fact that the magnetic energy is the dominant scale in the system. The linear dispersion relation gives relatively large distances between the LLs, and the two-dimensionality leads to the absence of k_z dispersion of the LLs. We have found that even at room temperature and with a moderate concentration of impurities the non-linearity should be revealed at about 10-20 Tesla, while for very clean suspended samples at low temperatures a lower magnetic field is sufficient. Two types of non-linear behaviour are present: near half-filling, the magnetization scales as √B at higher fields, due to the non-analyticity of the effective action; at higher values of the chemical potential, the magnetization is small and linear at small fields, its slope increases after the first LL crosses the Fermi energy, and at even higher magnetic fields one can again observe the √B behaviour. We acknowledge inspiring discussions with Feo Kusmartsev, Marat Gaifullin and Roberto Soldati, and discussions of experimental results with Irina Grigorieva and Paulina Plochocka.
This work was supported by the Engineering and Physical Sciences Research Council under EP/H049797/1.

FIG. 1. The magnetization at T = 300 K, µ = 0 and at gap values ∆ = 0 and 52 meV, compared to the linear behaviour. In addition, the magnetization at fixed chemical potential 100 meV and at fixed electron concentration 10¹² cm⁻² at T = 300 K is plotted. Low- and high-field asymptotics are shown.

FIG. 2. Non-linear magnetization with short-range impurities for fixed µ or for fixed N̄. The green dashed curve takes into account the linear-in-B splitting of the zeroth LL due to interactions, while the green dot-dashed curve is for a ∼√B splitting of the zeroth LL; see Section IV.

FIG. 3. Density of states for 16 T (solid) and 10 T (dotted) as a function of the chemical potential of a neutral spot, with Gaussian carrier-density fluctuation δN ≈ 4·10¹¹ cm⁻² and Lorentzian broadening of the levels with half-width ε = 15 meV.

FIG. 4. Non-linear magnetization with charged impurities for fixed average electron density. Bottom to top: N̄ = 0; 5·10¹¹; 10¹²; 1.5·10¹² cm⁻²; temperature T = 300 K (black solid) and T = 0 (red dashed); Lorentz broadening ε = 15 meV, density-fluctuation dispersion δN = 4·10¹¹ cm⁻². A plot for smaller δN = 4·10¹⁰ cm⁻² with N̄ = 10¹² cm⁻² and T = 0 (blue dotted) is shown for comparison. Inset: dependence on the impurity strength. The red dashed curve is the same as the bottom one on the main plot: N̄ = 0, T = 0, ε = 15 meV, δN = 4·10¹¹ cm⁻²; green dotted: Lorentz broadening increased to ε = 30 meV; blue dot-dashed: density fluctuation increased to δN = 6·10¹¹ cm⁻².

FIG. 5. Non-linear magnetization with charged impurities for fixed chemical potential of a neutral spot. Bottom to top: µ = 0; 50; 100; 150 meV; temperature T = 300 K (black solid) and T = 0 (red dashed); Lorentz broadening ε = 15 meV, density-fluctuation dispersion δN = 4·10¹¹ cm⁻². A plot for smaller δN = 4·10¹⁰ cm⁻², µ = 100 meV, T = 0 (blue dotted) is shown for comparison. Inset: dependence on the impurity strength.
The red dashed curve is the same as on the main plot: µ = 150 meV, T = 0, ε = 15 meV, δN = 4·10¹¹ cm⁻²; green dotted: Lorentz broadening increased to ε = 30 meV; blue dot-dashed: density fluctuation increased to δN = 6·10¹¹ cm⁻².

* On leave from PNPI; [email protected]
† [email protected]

[1] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004).
[2] A. H. Castro Neto, F. Guinea, N. M. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
[3] M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
[4] B. Partoens and F. M. Peeters, Phys. Rev. B 74, 075404 (2006).
[5] A. N. Redlich, Phys. Rev. D 29, 2366 (1984).
[6] D. Cangemi and G. Dunne, Ann. Phys. 249, 582 (1996).
[7] D. Allor, T. D. Cohen, and D. A. McGady, Phys. Rev. D 78, 096009 (2008).
[8] B. Dóra and R. Moessner, Phys. Rev. B 81, 165431 (2010).
[9] M. Mueller, J. Schmalian, and L. Fritz, Phys. Rev. Lett. 103, 025301 (2009).
[10] G. Gómez-Santos and T. Stauber, Phys. Rev. Lett. 106, 045504 (2011).
[11] S. Slizovskiy and J. J. Betouras, in preparation.
[12] D. Belitz, T. R. Kirkpatrick, and T. Vojta, Phys. Rev. B 55, 9452 (1997).
[13] D. Belitz, T. R. Kirkpatrick, A. J. Millis, and T. Vojta, Phys. Rev. B 58, 14155 (1998).
[14] A. V. Chubukov and D. L. Maslov, Phys. Rev. B 68, 155113 (2003).
[15] D. V. Efremov, J. J. Betouras, and A. Chubukov, Phys. Rev. B 77, 220401 (2008).
[16] J. Betouras, D. Efremov, and A. Chubukov, Phys. Rev. B 72, 115112 (2005).
[17] J. W. McClure, Phys. Rev. 104, 666 (1956).
[18] S. Y. Zhou, G.-H. Gweon, A. V. Fedorov, P. N. First, W. A. de Heer, D.-H. Lee, F. Guinea, A. H. Castro Neto, and A. Lanzara, Nature Materials 6, 916 (2007).
[19] S. Y. Zhou, D. A. Siegel, A. V. Fedorov, F. El Gabaly, A. K. Schmid, A. H. Castro Neto, D.-H. Lee, and A. Lanzara, Nature Materials 7, 259 (2008).
[20] H. Sahin and S. Ciraci, Phys. Rev. B 84, 035452 (2011).
[21] A band gap of order 0.1 eV has been observed in graphene epitaxially grown on a SiC substrate 18,19, but a word of caution is needed here, since the substrate may induce high impurity concentrations.
[22] S. G. Sharapov, V. P. Gusynin, and H. Beck, Phys. Rev. B 69, 075104 (2004).
[23] M. Nakamura, Phys. Rev. B 76, 113301 (2007).
[24] M. Koshino and T. Ando, Phys. Rev. B 81, 195431 (2010).
[25] In the context of dynamical flavor symmetry breaking in 2+1-dimensional relativistic field theory models, related work can be found in V. P. Gusynin, V. A. Miransky, and I. A. Shovkovy, Phys. Rev. D 52, 4718 (1995); K. G. Klimenko, Z. Phys. C 54, 323 (1992); K. G. Klimenko, Theor. Math. Phys. 90, 1 (1992).
[26] M. Koshino and T. Ando, Phys. Rev. B 76, 085425 (2007).
[27] S. A. Safran, Phys. Rev. B 30, 421 (1984).
[28] M. I. Katsnelson and G. E. Volovik, arXiv:1203.1578.
[29] M. Koshino and T. Ando, Phys. Rev. B 75, 235333 (2007).
[30] B. Dóra, Low Temp. Phys. 34, 801 (2008).
[31] N. Shon and T. Ando, J. Phys. Soc. Jpn. 67, 2421 (1998).
[32] N. M. R. Peres, F. Guinea, and A. H. Castro Neto, Phys. Rev. B 73, 125411 (2006).
[33] Eq. (12) can be straightforwardly generalized to the case when the zeroth LL has a different broadening profile ρ₀, by adding ∫_{−∞}^{µ} dε ε [ρ₀(ε) − ρ(ε)], where the profiles need to be convoluted with the derivative of the Fermi function f′ to account for temperature.
[34] M. L. Sadowski, G. Martinez, M. Potemski, C. Berger, and W. A. de Heer, Phys. Rev. Lett. 97, 266405 (2006).
[35] C. H. Yang, F. M. Peeters, and W. Xu, Phys. Rev. B 82, 205428 (2010).
[36] Y. Zhang, Z. Jiang, J. P. Small, M. S. Purewal, Y.-W. Tan, M. Fazlollahi, J. D. Chudow, J. A. Jaszczak, H. L. Stormer, and P. Kim, Phys. Rev. Lett. 96, 136806 (2006).
[37] E. A. Henriksen, P. Cadden-Zimansky, Z. Jiang, Z. Q. Li, L.-C. Tung, M. E. Schwartz, M. Takita, Y.-J. Wang, P. Kim, and H. L. Stormer, Phys. Rev. Lett. 104, 067404 (2010).
[38] L. A. Ponomarenko et al., Phys. Rev. Lett. 105, 136801 (2010).
[39] A. J. M. Giesbers, L. A. Ponomarenko, K. S. Novoselov, A. K. Geim, M. I. Katsnelson, J. C. Maan, and U. Zeitler, Phys. Rev. B 80, 201403 (2009).
[40] M. Sepioni, R. R. Nair, S. Rablen, J. Narayanan, F. Tuna, R. Winpenny, A. K. Geim, and I. V. Grigorieva, Phys. Rev. Lett. 105, 207205 (2010).
[41] R. R. Nair, M. Sepioni, I-Ling Tsai, O. Lehtinen, J. Keinonen, A. V. Krasheninnikov, T. Thomson, A. K. Geim, and I. V. Grigorieva, Nature Physics 8, 199 (2012).
Phonon Control of Magnetic Relaxation in the Pyrochlore Slab Compounds SCGO and BSZCGO

M. Zbiri, H. Mutka, M. R. Johnson, H. Schober, C. Payen

Institut Max von Laue-Paul Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9, France
Institut des Matériaux Jean Rouxel, Université de Nantes-CNRS, B.P. 32229, 44322 Nantes Cedex 3, France

17 Dec 2009 (arXiv:0810.3941)
PACS numbers: 74.25.Kc, 78.70.Nx, 78.30.-j, 71.15.Mb

We are interested in the phonon response in the frustrated magnets SrCr9xGa12−9xO19 (SCGO) and Ba2Sn2ZnCr7xGa10−7xO22 (BSZCGO). The motivation of the study is the recently discovered, phonon-driven, magnetic relaxation in the SCGO compound [Mutka et al., PRL 97, 047203 (2006)], pointing out the importance of a low-energy (ℏω ∼ 7 meV) phonon mode. In neutron scattering experiments on these compounds the phonon signal is partly masked by the magnetic signal from the Cr moments, and we have therefore examined in detail the non-magnetic isostructural counterparts SrGa12O19 (SGO) and Ba2Sn2ZnGa10O22 (BSZGO). Our ab-initio lattice dynamics calculations on SGO reveal a peak in the vibrational density of states matching the neutron observations on SGO and SCGO. A strong contribution to the vibrational density of states comes from the partial contribution of the Ga atoms on the 2b and 12k sites, involving modes at the M-point of the hexagonal system. These modes comprise dynamics of the kagomé planes of the pyrochlore-slab magnetic sub-lattice, 12k sites, and can therefore drive magnetic relaxation via spin-phonon coupling. Both BSZCGO and BSZGO show a similar low-energy Raman peak, but no corresponding peak is seen in the neutron-determined density of states of BSZGO. However, a strong non-Debye enhancement of the low-energy phonon response is observed. We attribute this particular feature to the Zn/Ga disorder on the 2d site, already evoked earlier as affecting the magnetic properties of BSZCGO.
We propose that this disorder-induced phonon response explains the absence of a characteristic energy scale and the much faster magnetic relaxation observed in BSZCGO.

I. INTRODUCTION

The interplay between structural, electronic and dynamic degrees of freedom in geometrically frustrated magnetic materials has the consequence of creating highly degenerate ground states, which have generated considerable interest 1,2,3. Short-range correlations in frustrated magnets lead to the formation of weakly coupled, fluctuating clusters, and consequently a macroscopic, collective degeneracy can prevail 1,4,5,6. At low temperature, frustrated magnets are expected to be sensitive to weak perturbations, which can lift the ground-state degeneracy, produce a hierarchy of closely spaced energy levels and allow the possibility of slow dynamics at low temperature between the corresponding states. When the associated energy scales of these states fall well below that of the magnetic interactions 4,7, various perturbations, including static 8 or dynamic 9,10 lattice effects, may play a key role due to the coupling between lattice and magnetic degrees of freedom. A recent neutron spin-echo (NSE) examination of the pyrochlore slab antiferromagnet (kagomé bilayer) SrCr9xGa12−9xO19 (SCGO) 11 suggested that phonons affect the slow relaxation of this highly frustrated spin system. The SCGO compound has a particular ground state in which dimer correlations within the pyrochlore slabs show a partial freezing of about 1/3 of the total possible ordered magnetic moment, in spite of the high value of the average intra-slab superexchange interaction (∼ 13 meV) 12. The frozen ground state does not fit into a spin-glass picture; for example, it has been shown that both the freezing transition temperature and the frozen moment decrease with increased magnetic dilution due to the Ga/Cr substitution 13.
The extremely broad relaxation seen using NSE spectroscopy was fitted with a phenomenological stretched-exponential time dependence, from which an activated temperature dependence with an activation energy of E_a ∼ 7 meV was inferred 11. A vibrational mode was observed close to this same position by Raman spectroscopy and inelastic neutron scattering (INS), leading to the conclusion that phonons drive the relaxation. However, why phonons at 7 meV, and not at other frequencies, drive the magnetic relaxation is not clear, and further investigation of the phonons and of the related spin-phonon mechanisms is required. In this context we have studied the isostructural non-magnetic material SrGa12O19 (SGO) using ab-initio lattice dynamics calculations that provide a full picture of the phonon dispersion relations, the total and partial density of states, the energies of the Raman-active modes and the neutron scattering cross-section. The choice of the non-magnetic counterpart SGO is motivated by experimental considerations (see below) and by the fact that, practically, handling the chemical disorder associated with the Cr/Ga substitution in SCGO is not feasible in the lattice dynamics calculation. The calculated neutron scattering cross-section is used for the evaluation of the powder-averaged Q-dependent intensity, which can be compared with experiment. The calculations are accompanied by new measurements. In BSZCGO the magnetic relaxation rate at low T was observed to be some two orders of magnitude faster than in SCGO, but without a well-defined energy scale 11. Nevertheless, in both BSZCGO and BSZGO the Raman data reveal a phonon peak at an energy close to the one seen in SCGO and SGO, while no peak is seen in the INS response. As we shall see below, the absence of a characteristic energy in the relaxation of BSZCGO makes more sense when we consider the rather strong non-Debye enhancement of the phonon DOS that we attribute to the substitutional disorder, the 50/50 mix of Zn and Ga on the 2d site in this compound 16.
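The NSE analysis described above (stretched-exponential relaxation with an activated time scale) can be sketched as follows; all numbers are synthetic, chosen only to show that an Arrhenius plot of τ(T) returns the activation energy E_a ≈ 7 meV.

```python
import math

KB = 0.08617  # Boltzmann constant in meV/K

def tau(T, tau0=1.0, Ea=7.0):
    """Activated relaxation time tau0*exp(Ea/(kB*T)) (arbitrary units)."""
    return tau0 * math.exp(Ea / (KB * T))

def stretched(t, T, beta=0.5):
    """Phenomenological stretched-exponential relaxation function."""
    return math.exp(-(t / tau(T)) ** beta)

# two-point Arrhenius slope of log(tau) vs 1/T recovers Ea
T1, T2 = 10.0, 20.0
Ea_fit = KB * (math.log(tau(T1)) - math.log(tau(T2))) / (1.0 / T1 - 1.0 / T2)
```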
Accordingly, we conclude that the localized phonon modes associated with the substitutional disorder can also affect the magnetic interactions and induce relaxational dynamics of the magnetic system. These observations also point out a particularity of frustrated magnets, namely the system dependence of the low-energy properties. As for SCGO, the aims of the present work are (i) to identify from ab-initio calculations the experimentally observed phonon modes (neutron and Raman spectroscopy); these calculations were used to obtain the generalized neutron-weighted vibrational density of states (GDOS) and the Raman-active modes of SGO; (ii) to understand the importance of non-Γ-point modes; and (iii) to investigate the normal modes having an effect on the dynamics of the magnetic sites and hence the possibility to drive magnetic relaxation. This paper is organized as follows: the experimental and computational details are provided in Section II and Section III, respectively. Section IV is dedicated to the presentation of the results, which are discussed in Section V together with the conclusions.

II. EXPERIMENTAL DETAILS

Powder samples of non-magnetic SGO and BSZGO were prepared using a standard solid-state high-temperature ceramic synthesis method and characterized by x-ray and neutron diffraction. As for the magnetic SCGO (x = 0.95) and BSZCGO (x = 0.97) samples, we analyzed those that were already used in our previous studies 11,13,16,17. The INS measurements were performed at the Institut Laue Langevin (ILL) 18 (Grenoble, France) using two instruments: the cold-neutron time-of-flight spectrometer IN6 and the thermal-neutron time-of-flight spectrometer IN4. IN6, operating with an incident wavelength of λ_i = 4.1 Å, provides very good resolution (0.2 meV FWHM) in the lower energy-transfer range (|ℏω| ≤ 10 meV) for the anti-Stokes spectrum.
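The instrument choices above are governed by standard time-of-flight kinematics; the sketch below, using the usual neutron conversions E(meV) = 81.81/λ(Å)² and E = 2.072 k², gives the momentum transfer available for a given wavelength, energy transfer and scattering angle.

```python
import math

def k_from_E(E):
    """Neutron wavevector (1/A) from energy (meV), using E = 2.072*k^2."""
    return math.sqrt(E / 2.072)

def Q(lam, hw, two_theta_deg):
    """Momentum transfer (1/A) for incident wavelength lam (A) and
    neutron energy loss hw (meV) at scattering angle two_theta."""
    Ei = 81.81 / lam ** 2
    ki, kf = k_from_E(Ei), k_from_E(Ei - hw)
    c = math.cos(math.radians(two_theta_deg))
    return math.sqrt(ki * ki + kf * kf - 2.0 * ki * kf * c)

# elastic backscattering limit: Q = 2*ki = 4*pi/lam
```

At λ_i = 4.1 Å the elastic Q-range tops out near 3 Å⁻¹, which is why the shorter IN4 wavelengths are needed to reach the high-Q region discussed below.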
On IN4, using incident wavelengths of λ_i = 2.6 or 1.8 Å, one has an extended Q-range, which allows the Stokes spectrum to be measured at low temperature over a broader energy-transfer range with a resolution of 0.5 or 1 meV (FWHM), respectively. Experiments were performed on both the SCGO/SGO and BSZCGO/BSZGO systems, mainly at T = 300 K, and also at low temperatures 2 K < T < 12 K for SCGO/SGO. The data analysis was done using ILL software tools, and the GDOS was evaluated using standard procedures 19 without applying multiphonon corrections. The experimental GDOS was normalised to 3N modes, where N is the number of atoms in the formula unit. Raman measurements at T = 300 K were performed at the Institut des Matériaux Jean Rouxel (IMN) 20. We can note that a peaked phonon response comparable to the one seen in SGO at ℏω ≈ -5.5 meV, with Q-dependent intensity, is observed in SCGO at the highest Q-range, but at a slightly shifted energy transfer ℏω ≈ -7 meV, as shown by the arrows. The bottom panels show the constant-energy-transfer scans at ℏω = -7 meV for SCGO (bottom left) and at ℏω = -5.5 meV for SGO (bottom right). For SCGO the magnetic form-factor Q-dependence is shown (dashed line) in order to indicate the trend of the magnetic scattering. The phonons in SCGO appear less structured and somewhat blurred, due to the weaker coherent scattering power of the Cr atoms as well as to the overlap with the magnetic signal.

III. COMPUTATIONAL DETAILS

Relaxed geometries and total energies were obtained using the projector-augmented-wave (PAW) formalism 21 of the Kohn-Sham formulation of density functional theory (KS-DFT) 22,23 at the generalized gradient approximation (GGA) level, as implemented in the Vienna ab-initio simulation package (VASP) 24,25. The GGA was formulated by the Perdew-Burke-Ernzerhof (PBE) 26,27 density functional. All results are well converged with respect to the k-mesh and the energy cutoff for the plane-wave expansion.
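The normalisation of the experimental GDOS to 3N modes mentioned above amounts to a single rescaling of the measured histogram. The grid and the Debye-like raw curve below are synthetic; N = 32 is the atom count of the SrGa12O19 formula unit (1 Sr + 12 Ga + 19 O).

```python
import math

def normalise_gdos(energies, gdos, n_atoms):
    """Rescale a DOS histogram so that it integrates to 3*n_atoms modes."""
    h = energies[1] - energies[0]            # uniform grid assumed
    integral = sum(g * h for g in gdos)
    scale = 3.0 * n_atoms / integral
    return [g * scale for g in gdos]

E = [0.5 * i for i in range(1, 201)]             # energy grid (meV)
raw = [e * e * math.exp(-e / 10.0) for e in E]   # synthetic raw GDOS
norm = normalise_gdos(E, raw, 32)                # SGO: 32 atoms per formula unit
```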
The break conditions for the self-consistent field (SCF) and for the ionic relaxation loops were set to 10 −6 eV and 10 −4 eV Å −1 , respectively. Hellmann-Feynman forces following geometry optimisation were less than 10 −4 eV Å −1 . Full geometry optimization, including cell parameters, was carried out on the experimentally refined SGO structure 14 containing eleven crystallographically inequivalent atoms (5 O, 5 Ga and 1 Sr). A comparison of the ab-initio optimized and experimental structural parameters is given in Table I. The space group is P6 3 /mmc with 2 formula units per unit cell (64 atoms). In order to determine all force constants, the supercell approach was used for the lattice dynamics calculations. An orthorhombic supercell containing 8 formula units (256 atoms) was constructed from the relaxed structure. A second, partial geometry optimization (fixed lattice parameters) was performed on the supercell in order to further minimize the residual forces. Total energies and Hellmann-Feynman forces were calculated for 90 structures resulting from individual displacements of the symmetry-inequivalent atoms in the supercell, along the inequivalent cartesian directions (±x, ±y and ±z). The 192 phonon branches corresponding to the 64 atoms in the primitive cell were extracted from subsequent calculations using the direct method 28 as implemented in the Phonon software 29 . The coherent dynamic structure factor S(Q, ω) was calculated by the numerical procedure for evaluation of powder-averaged lattice dynamics, PALD 30,31 . For a list of wave vectors K with randomly chosen directions, but lengths corresponding to the Q-range of interest, the dynamical matrix is diagonalised for each wave vector, and the corresponding eigenfrequencies and eigenvectors are used to calculate S(Q, ω) for one-phonon creation.
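The powder-averaging step just described (random directions at fixed |Q|, diagonalisation of the dynamical matrix, accumulation of one-phonon intensities) can be sketched in a toy model. The simple-cubic nearest-neighbour spring lattice below is an illustrative stand-in, not the SGO dynamical matrix; all parameters are arbitrary.

```python
import math, random

# Toy powder-averaged one-phonon S(Q, w): sample random directions on the
# sphere at fixed |Q|, fold Q back into the first Brillouin zone, obtain the
# phonon frequencies, and histogram the one-phonon weights |Q.e|^2 / omega.
# Illustrative simple-cubic NN-spring model (the dynamical matrix is diagonal,
# omega_alpha(q)^2 = (2K/m)(1 - cos(q_alpha a))); not SGO parameters.

a, K, m = 1.0, 1.0, 1.0          # lattice constant, spring constant, mass
random.seed(0)

def random_unit_vector():
    """Uniform random direction on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def powder_S(Qmod, ndir=2000, nbins=40):
    """Histogram of one-phonon intensity versus omega at fixed |Q| = Qmod."""
    wmax = math.sqrt(4.0 * K / m)        # top of the toy phonon band
    hist = [0.0] * nbins
    for _ in range(ndir):
        n = random_unit_vector()
        Q = tuple(Qmod * c for c in n)
        for alpha in range(3):           # three (diagonal) branches
            # fold Q_alpha back into the first Brillouin zone
            q = Q[alpha] - round(Q[alpha] * a / (2 * math.pi)) * 2 * math.pi / a
            w2 = (2.0 * K / m) * (1.0 - math.cos(q * a))
            if w2 <= 1e-12:
                continue
            w = math.sqrt(w2)
            weight = Q[alpha] ** 2 / w   # |Q.e|^2 / omega, e = cartesian axis
            b = min(int(w / wmax * nbins), nbins - 1)
            hist[b] += weight / ndir
    return hist

spectrum = powder_S(Qmod=3.0)
print(sum(spectrum))                     # total one-phonon weight at |Q| = 3
```

Averaging over many random directions at each |Q| is what turns the single-crystal S(Q, ω) into the powder map compared with the IN4 data below.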
The wave vectors, spectral frequencies and intensities are then used to construct the 2-D S(Q, ω) map, which can be compared with the measured map, as will be shown below.

IV. RESULTS

A. SGO/SCGO phonon response from experiments

Figure 1 shows the S(Q, ω) intensity maps measured at T = 300 K using the IN4 spectrometer, for both the magnetic system SCGO and its non-magnetic counterpart SGO. The strong intensity at low Q centered at the elastic position in SCGO is due to the quasi-elastic magnetic response resulting from the fluctuating S = 3/2 moments of the Cr 3+ ions. Note that the Q-dependence of the magnetic response is modulated, with a main maximum at Q = 1.4 Å −1 , and the paramagnetic form factor gives just an overall trend. In spite of the decay of the magnetic form factor with increasing Q, this signal dominates and swamps the phonon signal. The S(Q, ω) map of SGO shows the phonon signal clearly visible in the absence of the magnetic response, and it is possible to correlate the 7 meV features at the highest Q-range, Q ≥ 3.5 Å −1 , in SCGO with the 5.5 meV features in the non-magnetic sample SGO. As we can see, phonons in SCGO are masked by the presence of the magnetic signal and are therefore difficult to measure. In addition, the chemical disorder arising from the Cr substitution on the Ga sites, which can in principle be modelled, leads in practice to phonon instabilities in the calculations. For these reasons, this study is focused on the non-magnetic material SGO. These results are then used with proper care to explain phonon-related phenomena in the magnetic system SCGO.

The two systems are isostructural, and accordingly the magnetic Cr sub-lattice sites can be identified crystallographically using the corresponding positions of the Ga atoms. Even though it is clear that the phonon response of the two systems cannot be identical, both Raman spectroscopy and INS provide justification for our approach, since the spectra of SCGO and SGO display similar features in the low-lying frequency range. This is reported in Fig. 2 for the Raman data. Both samples show a low-energy mode, at ∼ 53 cm −1 (∼ 6.6 meV) for SGO and at ∼ 60 cm −1 (∼ 7.4 meV) for SCGO. Also the measured neutron density of states of the two systems follows quite similar trends, see Fig. 3. No attempt was made to eliminate the magnetic contribution that, in spite of being small, influences the GDOS evaluated for SCGO in the low-energy range; the weight of the paramagnetic contribution has been minimized by using only the high-Q data (Q el > 3.5 Å −1 ). Due to the remaining magnetic contribution, the GDOS at low energy shows a faster-than-Debye growth as compared to SGO. There are distinct differences in the phonon bands in the range 20 meV < ω < 70 meV, but we can note a similar form at low energy: the peaked phonon response seen in SGO at ω ≈ 5.5 meV and in SCGO at ω ≈ 7 meV (see Fig. 1) produces a characteristic kink at that same position. A rather similar response of the two systems is observed in the high-energy range dominated by the oxygen modes.

Supposing that the low-energy Raman modes are characteristic of Ga/Cr vibrations, we can argue that the shift of the main low-frequency features is partly connected to the different masses of the Cr and Ga atoms, 52 and 69.72, respectively. The ratio of the observed Raman frequencies (∼ 0.9) is close to the square root of the ratio of the atomic masses (∼ 0.86), as would be expected for a vibrational mode in a similar potential involving essentially the Cr or Ga atoms. Comparing the INS response we note a somewhat bigger difference: the peak seen at 5.5 meV in SGO appears at 7 meV in SCGO. Of course, for modes with participation from many atomic species and different interatomic potentials, a simple comparison based on atomic mass alone cannot be quantitatively correct.

B. SGO/SCGO phonon response compared with calculation

Fig. 4 compares the ab-initio determined and measured GDOS for SGO.
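The atomic-mass scaling argument above (ω ∝ 1/√M for a vibration in a similar potential) can be checked with a short calculation, using the masses and Raman mode positions quoted in the text:

```python
import math

# For a mode involving essentially the Ga (or Cr) atom in a similar
# potential, omega is proportional to 1/sqrt(M), so the SGO/SCGO frequency
# ratio should be close to sqrt(M_Cr / M_Ga).
M_Cr, M_Ga = 52.0, 69.72          # atomic masses
nu_SGO, nu_SCGO = 53.0, 60.0      # observed Raman modes, cm^-1

mass_ratio = math.sqrt(M_Cr / M_Ga)
freq_ratio = nu_SGO / nu_SCGO
print(round(mass_ratio, 2), round(freq_ratio, 2))   # 0.86 0.88
```

The two ratios agree to within a few percent, as stated in the text.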
In order to compare with the experimental data, the calculated GDOS was determined as the sum of the partial vibrational densities of states g i (ω) weighted by the atomic scattering cross sections and masses: GDOS(ω) = Σ i (σ i /M i ) g i (ω), with σ i /M i = 0.265 (O), 0.096 (Ga), 0.071 (Sr); i = {O, Ga, Sr}. A more detailed look at the neutron data on the SGO compound is shown in Fig. 5, where one can see that the low-energy phonon response is well described by a superposition of a Debye term and a broadened peak at ω = 5.5 meV. The observed broadening of the peak, δ ≈ 2.3 meV (FWHM), does not depend on the instrumental resolution, which is less than the apparent line-width both on IN4 and on IN6. Disregarding the broadening, the calculated spectrum is in very good agreement with the observed one; in particular, the peak at ∼ 5.5 meV is very well reproduced, and also the relative weight of the peak is in reasonable agreement with the calculation. The width of the peak is not resolution limited, and we conclude that the mode seen in the Raman spectrum reported in Fig. 2 is included in the peak of the GDOS. Due to the magnetic contribution present in SCGO, fitting with the expected ∝ ω 2 term and a gaussian peak is not applicable. However, mimicking the magnetic contribution with an additional linear term, a reasonable fit can be obtained with a gaussian peak at ω = 7 meV with a width of δ ≈ 2.4 meV (FWHM), similar to the SGO case (not shown). Based on the above, we can consider that the experimental data validate the computational approach in terms of the generalized density of states and the Raman response. Accordingly, we can go further in analyzing the calculated partial GDOS. Fig. 6 shows the total GDOS and the partials g i (ω) of SGO. The major contribution of the Ga atoms to the peaked low-energy response is clear. The Ga partial g Ga (ω) can be further resolved into the contributions from the different crystallographic sites, g Ga,j (ω), see Fig. 7.
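The neutron-weighting formula GDOS(ω) = Σ i (σ i /M i ) g i (ω) can be sketched as follows; the σ i /M i values are the ones quoted above, while the partial densities of states used here are placeholder arrays, not the calculated SGO partials:

```python
# Neutron-weighted GDOS from partial densities of states:
# GDOS(w) = sum_i (sigma_i / M_i) * g_i(w).
# Only the weights below are taken from the text; the g_i are toy data.
weights = {"O": 0.265, "Ga": 0.096, "Sr": 0.071}   # sigma_i / M_i

def neutron_gdos(partials):
    """partials: dict species -> list g_i(w) on a common omega grid."""
    ngrid = len(next(iter(partials.values())))
    gdos = [0.0] * ngrid
    for species, g in partials.items():
        w = weights[species]
        for k in range(ngrid):
            gdos[k] += w * g[k]
    return gdos

# toy partials on a 4-point energy grid (placeholder values)
g = {"O": [0, 1, 2, 1], "Ga": [2, 1, 0, 0], "Sr": [1, 0, 0, 0]}
total = neutron_gdos(g)
print(total)   # ≈ [0.263, 0.361, 0.53, 0.265]
```

Because σ/M is largest for oxygen, the high-energy oxygen modes are emphasized in the neutron-weighted GDOS relative to the Ga and Sr partials.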
This is of interest for the case of SCGO, since the magnetic Cr atoms occupy almost fully the sites 2a, 12k and 4f vi that contain only Ga in SGO, see Table I. It appears that the 2b and 12k site partials, g Ga,2b (ω) and g Ga,12k (ω), give the strongest contribution to the total density of states. The 2b site is fully occupied by non-magnetic Ga also in SCGO, while the Cr occupation of the 12k and 2a sites, forming the pyrochlore slab, is beyond 90 % in the least diluted samples 13 (see Table I and Fig. 10). Structural refinements of the isostructural magnetoplumbite compounds, including SGO 14 , usually indicate a large Debye-Waller factor for the cation located in the trigonal bipyramid on the 2b site, and a split occupation on the 4e site can be considered (see Table I). The high spectral weight of the g Ga,2b (ω) partial is consistent with dynamical disorder being the reason for the large and anisotropic thermal displacement of this Ga atom. The 12k and 2a sites constitute the pyrochlore slab magnetic sub-lattice, kagome and triangular planes respectively, in the SCGO compound, and we begin to understand how phonons at 7 meV can drive magnetic relaxation in SCGO. Calculated phonon dispersion relations are shown in Fig. 8, in which the color-coded intensities correspond to the coherent dynamic structure factor for one-phonon creation. Close to the GDOS low-energy peak (∼ 6 meV) the dispersion curves consist of flat branches, with two modes around the M-point having maximal intensity. These modes make the strongest contribution to the DOS, and we claim that their signature is seen in the form of the gaussian peak in the experimental GDOS, too. At the Γ-point there are three Raman-active modes close to the GDOS peak, but with a weaker response in the INS.
Because the nuclei in SGO are coherent neutron scatterers, the inelastic scattering cross-section has characteristic features even in the powder-averaged phonon response, which can also be resolved in terms of its wave-vector dependence. In order to gain further confidence in the dispersion relations and the overall phonon response, we have calculated the coherent structure factor over the whole Brillouin zone. From these data we have generated the S(Q, ω) map for a powder sample, using the recently developed powder-averaged lattice dynamics approach 30,31 . The calculated result can be compared with the experimental data from IN4, see Fig. 9. Comparison by eye of the low-energy spectrum is hampered by the elastic intensity in the experimental data, which is not calculated. Measured and calculated S(Q, ω) maps are, however, in reasonably good agreement, as shown by the constant energy and constant Q cuts. One can also note that the modulations superposed on the overall Q 2 -dependence of the inelastic signal at an energy transfer of 5.5 meV appear periodically at positions close to the successive M-point values at Q ≈ 1.9 Å −1 , 3.1 Å −1 , 4.3 Å −1 . Note that the measured data shown in Fig. 9 were taken at low temperature, T = 2 K, and therefore a shorter incident wavelength (λ i = 1.8 Å) was necessary, compared to the setting used to obtain the data shown in Fig. 1 (λ i = 2.6 Å), in order to reach high enough momentum transfer on the Stokes (neutron energy loss, phonon creation) side of the spectrum. The quality of the data at low temperature is worse due to the reduced thermal population of phonon states. Also the energy resolution of the instrument is comparably worse. A check of the temperature dependence of the phonon response in SCGO in the vicinity of the spin-freezing transition did not indicate any significant change within the limits of experimental precision.
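The quoted M-point positions can be reproduced from the in-plane lattice parameter alone. For a hexagonal lattice, the reciprocal basis vectors have length |b1| = 4π/(√3 a), and the M-points along such a vector sit at |Q| = (n + 1/2)|b1|:

```python
import math

# Successive M-point equivalents along Gamma-M for a hexagonal lattice
# with in-plane parameter a = 5.80 Å (SGO, Table I).
a = 5.80                                    # Å, in-plane lattice parameter
b = 4.0 * math.pi / (math.sqrt(3.0) * a)    # |b1| in Å^-1
Q_M = [(n + 0.5) * b for n in range(4)]
print([round(q, 2) for q in Q_M])           # [0.63, 1.88, 3.13, 4.38]
```

The last three values match the modulation positions Q ≈ 1.9, 3.1, 4.3 Å −1 seen in the powder data.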
In view of the importance of the two M-point modes (5.6 meV and 6.2 meV), we show schematically their displacement vectors in Fig. 10 for the case of the SCGO magnetic compound. Therein the displacements of the Cr and Ga atoms are highlighted. The sites 12k and 2a belong to the magnetically connected pyrochlore slab network, representing the kagome planes and the triangular planes, respectively, and the vibrational characteristics there should be the most important in the context of the magnetic relaxation. As depicted in Fig. 10, the M-point phonons involve large-amplitude displacements of the Ga atoms on the 2b sites, and the associated response of the 2a and 12k sites will modulate the magnetic interactions between these sites. In this context, the observed phonon-driven magnetic relaxation could be explained as a consequence of the strong sensitivity of the magnetic interactions to subtle variations in the magnetic exchange pathways between adjacent Cr magnetic cations. It was shown that the AFM coupling J between nearest-neighbour Cr ions within the pyrochlore slab is very sensitive to the Cr-Cr distance 12 . The J value follows a phenomenological law ∆J/∆d = 450 K/Å. Because phonon calculations could be done only for SGO, it is however not possible to give an accurate value for the J modulation associated with the M mode in SCGO.

C. The case of BSZGO/BSZCGO

We first examine the Raman response, Fig. 11. Here again we can see that both the magnetic and non-magnetic compounds display a low-energy Raman peak, resembling the one observed in the SGO/SCGO systems. Also, as before, one has a slightly lower energy for the peak in the Cr-containing compound, and this situation can be qualitatively ascribed to the difference in atomic mass. The experimental GDOS has a dynamic range comparable to that of SGO/SCGO, and especially the high-energy part shows a similar form, Fig. 12.
However, there is a qualitative difference in the experimental GDOS as compared with SGO: the one measured for BSZGO does not show any distinct peak in the range where the Raman peaks are visible. The low-energy phonon response is overall rather different; it does not show the characteristic ω 2 -dependence typical of the acoustic phonon response associated with the Debye regime. In fact we observe an almost linear dependence GDOS ∝ ω, with a progressive upwards curvature that never reaches a quadratic law, see Fig. 13. Such a behavior is quite unusual: for crystalline solids the acoustic phonons with linear dispersion should give a quadratic density of states. To explain the unconventional trend, we propose that the substitutional disorder in the form of the 50/50 occupation by Zn or Ga on one of the 2d sites creates a situation that affects the lattice dynamics and gives rise to a disorder-induced dynamic response. We can argue that the damping of phonon modes due to the lattice disorder leads to the observed non-Debye response of the low-energy modes, involving not necessarily just the long-wavelength acoustic modes but also the ones further out in the Brillouin zone or even at the zone boundary.

V. DISCUSSION AND CONCLUSIONS

In the previous section we have provided evidence of the possibility that particular normal modes are responsible for the magnetic relaxation in SCGO. We suggested that the activated behavior of the relaxation rate is due to the capacity of the phonons at that energy to influence the magnetic system. In the following we attempt to examine more generally the role of phonons in the temperature dependence of the magnetic relaxation. We survey a possible minimal model in which the important issue is the capability of the phonon bath to provide energy to the magnetic system.

A. Magnetic relaxation controlled by phonon population

Let us assume that the phonon-dominated relaxation rate τ^-1 is determined by the number of occupied phonon states, so that the temperature dependence is related to the thermal population n_ph(T):

τ(T)^-1 ∝ n_ph(T) = ∫ g(ω) dω / (e^{ħω/k_B T} − 1).   (1)

In the case of a single mode of energy ħω_m we have g(ω) ∝ δ(ω − ω_m), and the phonon population and the relaxation rate have the activated dependence

τ^-1 ∝ n_ph(T) = 1 / (e^{ħω_m/k_B T} − 1) ≈ e^{−ħω_m/k_B T},   (2)

where the approximation is valid at low temperatures, when ħω_m/k_B T ≫ 1. This dependence matches the experimental observation concerning the magnetic relaxation in SCGO. However, if any low-energy phonon can induce relaxation, one can expect that the low-energy density of states is the important quantity, and then the expected T-dependence of the relaxation rate will depend on the functional form of the density of states g(ω). At low temperatures the integration in eq. 1 can be taken from zero to infinity, and after the change of variable x = ħω/k_B T one finds that a power-law form g(ω) ∝ ω^p leads to a power law in T,

n_ph(T) ∝ (k_B T/ħ)^{p+1} ∫_0^∞ x^p dx / (e^x − 1),   (3)

where the definite integral can be evaluated analytically 32 . Another possible situation might be such that there is a low-energy cut-off ω_c due to, e.g., the Q-dependence of the effective interaction. In this case, for ħω_c/k_B T ≫ 1, the number of phonons affecting the relaxation rate is evaluated to be

n_ph(T) ∝ (k_B T/ħ)^{p+1} ∫_{x_c}^∞ x^p e^{−x} dx ∝ (k_B T/ħ)^{p+1} e^{−ħω_c/k_B T},  with x_c = ħω_c/k_B T.   (4)

We have seen above that the experimental GDOS is a superposition of the Debye ω 2 term and a single-mode contribution in the case of SGO, while in BSZGO a linear ω-dependence appears. In the case of independent relaxation processes one can expect that the relaxation rates of the individual processes are additive,

τ^-1 = Σ_i τ_i^-1.   (5)

It is straightforward to calculate n_ph(T) from eqs. 1 to 4.
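A numerical sketch of eqs. (1)-(4), with ħ = k_B = 1 and arbitrary energy units; the integration grid parameters below are arbitrary choices:

```python
import math

# Phonon-population models of eqs. (1)-(4), hbar = k_B = 1:
# a single mode (activated), a power-law density of states, and a
# power law with a low-energy cut-off.

def n_single(T, w_m):
    """Eq. (2): Bose occupation of a single mode of energy w_m."""
    return 1.0 / (math.exp(w_m / T) - 1.0)

def n_powerlaw(T, p, wmax=200.0, nstep=20000):
    """Eq. (3): n_ph ∝ T^(p+1) * ∫_0^∞ x^p/(e^x - 1) dx (Riemann sum)."""
    h = wmax / nstep
    s = 0.0
    for k in range(1, nstep + 1):        # skip x = 0 (integrable endpoint)
        x = k * h
        s += x ** p / (math.exp(x) - 1.0) * h
    return T ** (p + 1) * s

def n_cutoff(T, p, w_c, wmax=200.0, nstep=20000):
    """Eq. (4): same scaling, but integrating from x_c = w_c / T."""
    xc = w_c / T
    h = (wmax - xc) / nstep
    s = 0.0
    for k in range(nstep + 1):
        x = xc + k * h
        s += x ** p * math.exp(-x) * h
    return T ** (p + 1) * s

# single mode: activated behavior, ~ exp(-w_m/T) at low T
print(n_single(0.1, 1.0))                        # ~ exp(-10)
# Debye-like DOS (p = 2): n_ph grows as T^3
print(n_powerlaw(2.0, 2) / n_powerlaw(1.0, 2))   # 8.0
```

The ratio 8 = 2³ illustrates the T^(p+1) scaling of eq. (3) for a Debye density of states, while the single-mode population reproduces the activated law of eq. (2).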
Note, however, that to determine a quantitative estimate of the relaxation rate it would be necessary to know the prefactor describing the efficiency of the coupling between the spin system and the phonon bath. Experimentally we find for the magnetic relaxation in SCGO that τ^-1 = τ_0^-1 exp(−E_A/k_B T) with τ_0^-1 = 7 × 10 17 s −1 . This means that a Debye-phonon-controlled relaxation of the type τ^-1 = τ_D^-1 T^3 would dominate at low temperatures for values of τ_D^-1 ≥ 10 12 s −1 , see Fig. 14, which also shows that both the single-mode and the cut-off picture could match the experimental data on SCGO equally well in the activated regime. It is not possible to discern any power-law dependence either for SCGO or for BSZCGO, suggesting that long-wavelength acoustic phonons are not effective for the magnetic relaxation process. We can also argue that both in SCGO and in BSZCGO a cross-over to a quantum relaxation regime takes place at the lowest temperatures 11,17 , and we know that at higher temperatures a quasi-elastic response with a width proportional to temperature prevails 33 ; meanwhile, it is clear that the limited temperature range of the observations does not allow a full analysis of the relaxational dynamics. We suggest that the most effective modes are the ones in the vicinity of the M-point of the reciprocal lattice; indeed, these zone-boundary modes have a spatial structure on the scale of the in-plane lattice parameter, and therefore match the nearest-neighbor distance between the localized magnetic moments. With respect to the situation in BSZCGO, we have observed no specific activation energy, and it is not possible to see any well-defined power-law dependence in the temperature dependence of the relaxation rate. We evoked earlier (Sect. IV C) the possibility that a disorder-induced modification of the lattice dynamics is the origin of the faster relaxation, due to heavily damped phonon modes. Note that here again the relevant length scale is typical of the pyrochlore slab in-plane unit, due to the half/half Zn/Ga substitutional disorder.
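For orientation, the activated rate with the fitted prefactor can be compared against a hypothetical Debye T³ channel at the threshold value quoted above. Taking E_A = 7 meV (the GDOS peak position) is an assumption consistent with the text; τ_D^-1 = 10¹² s⁻¹ is the quoted threshold.

```python
import math

# Activated relaxation rate (experimental fit) versus a hypothetical
# Debye-phonon channel tau_D^-1 * T^3 at the threshold strength.
kB_meV = 0.08617           # Boltzmann constant in meV/K
E_A = 7.0                  # meV (assumed: GDOS peak position)
tau0_inv = 7e17            # s^-1, fitted prefactor from the text
tauD_inv = 1e12            # s^-1 K^-3, threshold value from the text

def rate_activated(T):
    """Activated rate tau0^-1 * exp(-E_A / k_B T)."""
    return tau0_inv * math.exp(-E_A / (kB_meV * T))

def rate_debye(T):
    """Hypothetical Debye-phonon rate tau_D^-1 * T^3."""
    return tauD_inv * T ** 3

for T in (3.0, 5.0, 7.0):
    print(T, rate_activated(T), rate_debye(T))
```

With these numbers the T³ channel exceeds the activated rate throughout the 3-7 K range, illustrating why a Debye channel of that strength would mask the observed activated behavior; its absence in the data bounds τ_D^-1 from above.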
It is conceivable that this disorder has a strong influence especially on the M-point zone-boundary modes, which, in the case of an overdamped response, can acquire a quasi-elastic character and give rise to the linear GDOS as observed, as well as to the faster relaxation seen in BSZCGO. Anyhow, it is clear that the T 2 dependence that one might expect with reference to the model calculation is not observed.

B. Other aspects of the relaxation mechanism

Our results suggest an important role of lattice vibrations in the low-temperature spin dynamics of the frustrated magnets SCGO and BSZCGO. However, one cannot claim that spin-phonon coupling alone could be the origin of the complex relaxational behavior. The systems studied are disordered due to the non-magnetic dilution, which has been widely studied and is a known factor in the pyrochlore slab compounds 12,13,34 . The single-frequency activated model of eq. 2 follows the experimental data on SCGO over the range 3 K ≲ T ≲ 7 K, as reported earlier 11 , and is in that temperature range indiscernible from the model with a cut-off at the same frequency, eq. 4. The trend of the power-law models of eq. 3 matches the data points of neither SCGO (dots in Fig. 14) nor BSZCGO (squares); note that in Fig. 14 the two models have been plotted with arbitrarily chosen prefactors. It is even surprising that, even though the phonon properties of the SGO/SCGO lattices appear somewhat better defined when compared to BSZGO, it is in SCGO that the very strongly stretched time decay occurs. A complete microscopic picture of the relaxation process is still to be constructed, and one can imagine that the fine details of the magnetic states of lowest energy are of major importance in such a pursuit. Pursuing the analysis might be extremely difficult for systems like the pyrochlore slabs, but in simpler cases details of the spin-lattice coupling have already been examined and could pave the way for further work 9,10 .
In the general context of frustrated magnetism, our results point out a trend that has already been evoked: the low-temperature and ground-state properties are highly sensitive to system-dependent perturbations. Nevertheless, the present work pinpoints the importance of lattice vibrations and their microscopic character in the control of the low-energy spin dynamics.

VI. SUMMARY

We have presented a detailed investigation of the phonon spectra in the pyrochlore slab compounds SrCr 9x Ga 12−9x O 19 (SCGO) and Ba 2 Sn 2 ZnCr 7x Ga 10−7x O 22 (BSZCGO), based on ab-initio lattice dynamics simulations, inelastic neutron scattering and Raman measurements, in order to investigate the origin and mechanisms of the recently observed 11 phonon-driven magnetic relaxation in SCGO. Since the magnetic signal dominates the neutron scattering response at low energy transfer, we have performed new experiments on the isostructural non-magnetic material SrGa 12 O 19 (SGO). Moreover, the chemical disorder is difficult to include in the ab-initio lattice dynamics calculations, so these have also focused on SGO. The results of the calculations for the GDOS and S(Q, ω) of SGO are in good agreement with the experimental data. Neutron and Raman experiments show the similarity of the phonon response of the magnetic system with respect to its non-magnetic counterpart. The calculated partial density of states for SGO indicates that the strongest contribution to the total density of states stems from the vibrations of the Ga atoms on the 2b and 12k sites of the lattice, associated with flat dispersion branches centered at the zone-boundary M-point in k-space. Thus we conclude that the phonon response in SCGO corresponding to that at the M-point in SGO comprises the partial contribution of the atoms residing on the 12k sites of the magnetic sub-lattice of SCGO.
The most relevant M-point modes have displacement vectors that modulate the distances between Cr sites in the kagome layers of the pyrochlore slab and can therefore couple effectively with the magnetic moments. The activation energy for magnetic relaxation is equal to the position of the characteristic peak in the experimental GDOS. The calculated energies of two of the Raman-active Γ-point modes fall in the energy range of this peak, but they appear with less weight in the calculated GDOS. In comparison, BSZCGO, the other pyrochlore slab antiferromagnet, shows a qualitatively different low-energy phonon response, with an enhanced non-Debye low-energy contribution. We suggest that the faster relaxation without a particular energy scale is a consequence of this circumstance, which we attribute to the particular Ga/Zn disorder present in this compound. Future computational and theoretical work is called for to examine the interplay of the magnetic interactions and the microscopic mechanisms associated with specific phonon displacements, for a better understanding of the reported phenomena. In addition to the SCGO/SGO case, we have examined the experimental situation in the related pyrochlore slab compound Ba 2 Sn 2 ZnCr 7x Ga 10−7x O 22 (BSZCGO) and its non-magnetic counterpart Ba 2 Sn 2 ZnGa 10 O 22 (BSZGO).

FIG. 1: (Color online) Measured S(Q, ω) maps at T = 300 K for SCGO (top left) and SGO (top right). Strong paramagnetic scattering dominates the low-Q response of SCGO.

FIG. 2: (Color online) Raman spectra at T = 300 K for SGO (red), with calculated Raman-active mode energies shown by the ticks at the bottom of the graph, and SCGO (blue, higher intensity at low energy). Both of these compounds have a Raman-active low-energy mode in the vicinity of the kink seen in the GDOS, compare with Fig. 3. The inset shows in detail the low-energy range, highlighting the good match with the position calculated for SGO.

FIG. 3: (Color online) Generalized density of states (GDOS) at T = 300 K for SCGO (blue) and SGO (red), obtained using the data shown over a limited (Q, ω)-range in Fig. 1.

FIG. 4: (Color online) Calculated and neutron-determined generalized density of states (GDOS) for SGO show very similar overall features. The results using both IN4 (red dots) and IN6 (blue circles) data are in good agreement with the calculation (green solid line) and display a similar broadened peak at ω ≈ 5.5 meV, see also Fig. 5. The calculated data have been convoluted with a gaussian to account for the experimental resolution.

FIG. 5: (Color online) The low-energy part of the SGO GDOS can be fitted (full line) with a superposition of a Debye term (∝ ω 2 , ω D = 46 meV) (dashed line) and a single-mode peak with a gaussian lineshape. The results using both IN4 and IN6 (not shown) data are in good agreement and display a similar broadened peak at ω ≈ 5.5 meV, δ = 2.3 meV (FWHM), indicating that this broadening is not due to the instrumental resolution.

FIG. 6: (Color online) Calculated total and partial neutron-weighted GDOS for SGO. Arrows indicate the peak in the Ga partial corresponding to the phonon mode of interest at ω = 5.5 meV.

FIG. 7: (Color online) Ab-initio determined partial DOS of the Ga atoms, g Ga,j , in SGO. The strongest contributions to the low-energy peak are from the 2b and 12k sites, respectively, as indicated by arrows. See Table I for details of site occupations.

FIG. 8: (Color online) Calculated dispersion relations along the high-symmetry directions of the Brillouin zone in SGO, with color-coded neutron scattering intensity; blue color corresponds to the maximum intensity. The Bradley-Cracknell notation is used for the high-symmetry points.

FIG. 9: (Color online) Simulated S(Q, ω) map (a) for SGO, calculated for the Stokes (one-phonon creation) process at T = 0; Fig. 1 shows the measured anti-Stokes (phonon annihilation) response weighted by the thermal population, which is why the relative intensity at higher absolute values of energy transfer appears weaker in Fig. 1. Constant Q cut (b) of measured and calculated S(Q, ω) (solid and dotted lines, respectively). Constant energy cut (c) of measured and calculated S(Q, ω) (solid and dashed lines, respectively). The measured cuts shown here are the phonon creation response ( ω > 0) at T = 2 K; note the similarity of the constant energy cut (c) with the one shown in the lower right panel of Fig. 1 at T = 300 K.

FIG. 10: (Color online) Schematic representations of the normal modes in SGO around the M-point at ∼ 5.6 meV (left) and at ∼ 6.2 meV (right), with arrows indicating the cation displacements. The 2a, 12k and 4f vi sites are occupied in major part by Cr in SCGO. The 2b and 4f iv sites are occupied by Ga only, both in SGO and in SCGO.

FIG. 11: (Color online) Raman spectra of BSZGO (red, lower) and BSZCGO (blue, upper), measured at T = 300 K. Note the low-energy mode similar to the SGO/SCGO case shown in Fig. 2.

FIG. 12: (Color online) GDOS of BSZGO (blue) compared with that of SGO (red), normalised as explained in the text. For the latter, note the Debye-like dependence at low energy (dashed line with ω D = 46 meV), see also Fig. 5.

FIG. 13: (Color online) The low-energy part of the GDOS of BSZGO, the solid line showing the linear non-Debye dependence for ω → 0. Data with blue and red symbols from IN6 and IN4, respectively.

FIG. 14: (Color online) Modeling the relaxation, see details in the text.

TABLE I: Lattice parameters and the relevant Ga/Cr position parameters from ab-initio calculations on SGO, compared with experimental data on SGO and SCGO.
Ab-initio lattice dynamics calculations could not be done for BSZCGO and BSZGO due to their complex crystal structures.

parameter          SGO (calc)      SGO (exp) 14                           SCGO (exp) 15
                                   T = 473 K          T = 298 K           T = 16 K            T = 4.2 K
a (Å)              5.86            5.80               5.80                5.79                5.80
c (Å)              22.87           22.86              22.82               22.78               22.66
2a: x, y, z        0, 0, 0         0, 0, 0            0, 0, 0             0, 0, 0             0, 0, 0 b
2b(4e): x, y, z ac 0, 0, 1/4       0, 0, 1/4 (0.258)  0, 0, 1/4 (0.257)   0, 0, 1/4 (0.256)   0, 0, 1/4 (0.256)
12k: x, z          0.1678, 0.891   0.1684, 0.891      0.1683, 0.891       0.1682, 0.891       0.1681, 0.892 b
4f iv: z c         0.0278          0.0273             0.0273              0.0274              0.0283
4f vi: z           0.190           0.190              0.190               0.190               0.192 b

a For refinement of experimental data it is possible to split the 2b occupation over two 4e sites 14,15 .
b Cr occupation 90 % in the least diluted samples 13 . The kagome (12k) and triangular (2a) planes form the pyrochlore slab.
c Ga occupation 100 %.

Experiments used INS and Raman spectroscopy; the experimental techniques are combined because Raman spectroscopy gives a partial view of the vibrational density of states (Γ-point only), which is complemented by INS, which can inform on the Q-dependence and the characteristic energies throughout the Brillouin zone.

Acknowledgments

C.P. thanks J.Y. Mevellec for the Raman measurements.

K. Buschow (Elsevier, Amsterdam, 2001), vol. 13, p. 423.
J. Stewart, ed., Proceedings of the Highly Frustrated Magnetism 2003 Conference, vol. 16 of J. Phys.: Condens. Matter (Institute of Physics, Bristol, 2004).
Z. Hiroi and H. Tsunetsugu, eds., Proceedings of the International Conference on Highly Frustrated Magnetism 2006, vol. 19 of J. Phys.: Condens. Matter (Institute of Physics, Bristol, 2007).
R. Moessner and A. Ramirez, Physics Today 59, 24 (2006).
R. Moessner and J. T. Chalker, Phys. Rev. Lett. 80, 2929 (1998).
R. Moessner and J. T. Chalker, Phys. Rev. B 58, 12049 (1998).
R. Moessner, Can. J. Phys. 79, 1283 (2001).
O. Tchernyshyov, R. Moessner, and S. L. Sondhi, Phys. Rev. B 66, 064403 (2002).
A. B. Sushkov, O. Tchernyshyov, W. Ratcliff II, S. Cheong, and H. D. Drew, Phys. Rev. Lett. 94, 137202 (2005).
C. Fennie and K. Rabe, Phys. Rev. Lett. 96, 205505 (2006).
H. Mutka, G. Ehlers, C. Payen, D. Bono, J. R. Stewart, P. Fouquet, P. Mendels, J. Y. Mevellec, N. Blanchard, and G. Collin, Phys. Rev. Lett. 97, 047203 (2006).
L. Limot, P. Mendels, G. Collin, C. Mondelli, B. Ouladdiaf, H. Mutka, N. Blanchard, and M. Mekata, Phys. Rev. B 65, 144447 (2002).
C. Mondelli, H. Mutka, and C. Payen, Can. J. Phys. 79, 1401 (2001).
H. Graetsch and W. Gebert, Z. für Krist. 209, 338 (1994).
X. Obradors, A. Labarta, A. Isalgué, J. Tejada, J. Rodriguez, and M. Pernet, Solid State Communications 65, 189 (1988), ISSN 0038-1098, URL http://www.sciencedirect.com/science/article/B6TVW-46TY55H-1X7/2/ad8f3771755292c91f45fa44dbe8c769.
P. Bonnet, C. Payen, H. Mutka, M. Danot, P. Fabritchnyi, J. Stewart, A. Mellergård, and C. Ritter, J. Phys.: Condens. Matter 16, S835 (2004).
H. Mutka, C. Payen, G. Ehlers, J. R. Stewart, D. Bono, and P. Mendels, J. Phys.: Condens. Matter 19, 145254 (2007).
H. Mutka, M. Koza, M. Johnson, Z. Hiroi, J.-I. Yamaura, and Y. Nagao, Phys. Rev. B 78, 104307 (2008).
P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
G. Kresse and J. Furthmüller, Comput. Mater. Sci. 6, 15 (1996).
G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 78, 1396 (1997).
K. Parlinski, Z.-Q. Li, and Y. Kawazoe, Phys. Rev. Lett. 78, 4063 (1997).
K. Parlinski, software Phonon, 2003.
M. Johnson, M. Koza, L. Capogna, and H. Mutka, Nucl. Instr. and Meth. A 600, 226 (2009).
M. M. Koza, M. R. Johnson, R. Viennois, H. Mutka, L. Girard, and D. Ravot, Nature Mater. 7, 805 (2008).
J. Blundell and K. Blundell, Concepts in Thermal Physics (Oxford University Press, Oxford New York, 2006).
C. Mondelli, H. Mutka, C. Payen, B. Frick, and K. Andersen, Physica B 284-288, 1371 (2000).
D. Bono, L. Limot, P. Mendels, G. Collin, and N. Blanchard, Low Temperature Physics 31, 704 (2005), URL http://link.aip.org/link/?LTP/31/704/1.
[]
[ "Phases of holographic superconductors with broken translational symmetry", "Phases of holographic superconductors with broken translational symmetry" ]
[ "Matteo Baggioli \nDepartament de Física and IFAE\nUniversitat Autònoma de Barcelona\n08193Bellaterra, BarcelonaSpain\n", "Mikhail Goykhman \nEnrico Fermi Institute\nUniversity of Chicago\n5620 S. Ellis Av60637ChicagoILUSA\n" ]
[ "Departament de Física and IFAE\nUniversitat Autònoma de Barcelona\n08193Bellaterra, BarcelonaSpain", "Enrico Fermi Institute\nUniversity of Chicago\n5620 S. Ellis Av60637ChicagoILUSA" ]
[]
We consider holographic superconductors in a broad class of massive gravity backgrounds. These theories provide a holographic description of a superconductor with broken translational symmetry. Such models exhibit a rich phase structure: depending on the values of the temperature and the doping, the boundary system can be in a superconducting, normal metallic or normal pseudo-insulating phase. Furthermore, the system supports an interesting collective excitation of the charge carriers, which appears in the normal phase, persists in the superconducting phase, but eventually gets destroyed by the superconducting condensate. We also show the possibility of building a phase diagram of a system with the superconducting phase occupying a dome-shaped region, therefore resembling more closely a real-world doped high-Tc superconductor.
10.1007/jhep07(2015)035
[ "https://arxiv.org/pdf/1504.05561v2.pdf" ]
53,557,958
1504.05561
b29824af84b687ec896f68fa19fc239d6961d3d5
Phases of holographic superconductors with broken translational symmetry

Matteo Baggioli, Departament de Física and IFAE, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain
Mikhail Goykhman, Enrico Fermi Institute, University of Chicago, 5620 S. Ellis Av., Chicago, IL 60637, USA

Prepared for submission to JHEP

We consider holographic superconductors in a broad class of massive gravity backgrounds. These theories provide a holographic description of a superconductor with broken translational symmetry. Such models exhibit a rich phase structure: depending on the values of the temperature and the doping, the boundary system can be in a superconducting, normal metallic or normal pseudo-insulating phase. Furthermore, the system supports an interesting collective excitation of the charge carriers, which appears in the normal phase, persists in the superconducting phase, but eventually gets destroyed by the superconducting condensate. We also show the possibility of building a phase diagram of a system with the superconducting phase occupying a dome-shaped region, therefore resembling more closely a real-world doped high-Tc superconductor.

Introduction

Holographic superconductors have recently received a new wave of attention. It originated from several attempts [1-6] to provide a holographic description of systems which resemble more closely real-world superconductors. One of the essential features of the original holographic superconductor proposal of [7, 8] is that it describes a system which exists in two states: a superconducting state, which has a non-vanishing charge condensate, and a normal state, which is a perfect conductor. As a direct consequence, already in the normal phase the static electric response, namely the DC conductivity (ω = 0), is infinite.
This is a straightforward consequence of the translational invariance of the boundary field theory, which leads to the fact that the charge carriers do not dissipate their momentum, and accelerate freely under an applied external electric field. Therefore one is motivated to introduce momentum dissipation into the holographic framework, breaking the translational invariance of the dual field theory. It is definitely interesting to construct a holographic superconductor on top of such dissipative backgrounds which is indeed going to have a finite DC conductivity in the normal phase, clearly distinguishable from the infinite one in the superconducting phase. One efficient method to implement such a feature relies on the possibility of breaking diffeomorphism invariance in the bulk via giving the graviton a mass, as it has been proposed in [9]. It is very convenient to recast these Lorentz symmetry violating massive gravity theories into a covariant form introducing the Stueckelberg fields, namely the extra degrees of freedom appearing as a consequence of breaking of the diffeomorphism symmetry (see [10] for more details). In the context of applied holography this construction was analyzed for the first time in [11] where momentum dissipation in the field theory was achieved by switching on neutral scalar operators depending linearly on the spatial coordinates of the boundary. These scalar fields on the boundary source the neutral scalar fields in the bulk. The resulting bulk system describes a holographic dual of the field theory with broken translational symmetry. Such a system possesses a finite DC conductivity [11] 1 . The original idea of [11] has been put in a broader context in [16], where the most general form for the Lagrangian of the neutral scalars has been introduced 2 . This Lagrangian is weakly constrained by the consistency conditions in the bulk, which avoid ghost excitations and gradient instabilities [16]. 
It turns out that imposing physical consistency of the theory still leaves enough freedom to construct models which exhibit new non-trivial features. To be more specific, one can build models which possess the following attractive properties. The first one is an increase of the conductivity as a function of temperature, for temperatures lower than a certain critical value T_0:

\[ \frac{d\sigma_{DC}(T)}{dT} > 0\,, \qquad T < T_0\,. \quad (1.1) \]

This property bears a resemblance to an insulating behavior, with the population of the conducting energy band depleting upon lowering the temperature. Still, it awaits a better understanding, because of an essentially non-vanishing value of the DC conductivity at zero temperature. We refer to the state (1.1) as pseudo-insulating. The second new feature of the model is the appearance of an extra structure in the optical conductivity. For temperatures lower than a certain critical value T, there appears a peak in the optical conductivity, signaling a new long-lived collective propagating excitation of the charge carriers. This paper is based on the idea of generalizing the construction of [4, 5] to the more generic effective models for momentum-dissipative systems proposed in [16]. The main questions which we aim to answer are the following:

1. Can one construct a model of a holographic superconductor which is separated by lines of second order phase transitions from the normal metallic phase and the normal pseudo-insulating phase (1.1)?

2. Does the peak in the optical conductivity of [16] continue to exist in the superconducting phase?

We have found that the answers are:

1. Yes, by combining the idea of [16] with the setting of a holographic superconductor one can obtain a system with a rich phase diagram where three different phases are present: superconductor, metal, and pseudo-insulator.

2.
The peak in optical conductivity continues to exist in the superconducting phase, as the temperature is lowered below a critical temperature T c of the superconducting phase transition. However, at a certain temperature T = T the peak disappears. Furthermore, we attempt to construct a holographic dual to a real doped superconductor (see, e.g., [20]). The main feature which we will attempt to implement holographically is an enclosing of a superconducting phase on the doping-temperature phase plane by a dome-shaped line of a phase transition (see figure 1). The most successful result would be to have a superconducting dome, separated from an insulating normal state at smaller values of the doping parameter, and a metal normal phase at larger values of the doping parameter. In figure 1 we provide a schematic sketch of what we would like to approach, the phase diagram for High-Tc superconductors. We will demonstrate that implementing the momentum dissipation models of [16] in the holographic superconductor framework can indeed lead to a superconducting dome, located between pseudo-insulating and metallic phases. However, it appears that such models are too restricted to describe superconducting dome with realistic critical temperature of the superconducting phase transition. We have found that the critical temperature of the dome T c (α), where α is the magnitude of the translational symmetry breaking, is bounded from above by a small number (in units of charge density), of the order of 10 −8 . This makes the numerical calculation at finite temperature hopeless. Nevertheless, at zero temperature it is possible to have analytical control of the SC instability through the BF bound reasonings and show the existence of a superconducting dome. In the Discussion section 8 we provide a few ideas to generalize our model, which might be useful to obtain a superconducting dome with a reasonably higher values of the critical temperature. 
We will be considering charged black brane backgrounds with the neutral scalar fields having vacuum profiles depending linearly on the spatial coordinates:

\[ \phi_x = \alpha\, x\,, \qquad \phi_y = \alpha\, y\,. \quad (1.2) \]

This configuration (1.2) breaks the translational symmetry (and the Lorentz invariance) of the boundary field theory, but leaves energy conservation untouched. Within this choice we retain homogeneity and rotational invariance. It would be interesting to reproduce the same sort of computations in an anisotropic setup as in [6]. Besides the parameter α, describing the magnitude of the translational symmetry breaking, we will also introduce another parameter m, which will be primarily important in the models with a non-linear action for the neutral scalars (1.2). We consider the system at a finite charge density, which corresponds holographically to the time-like component of the U(1) gauge field having a non-trivial radial profile in the bulk, A_t(u). The charged scalar ψ is dual to the condensate O of the charge carriers. When the v.e.v. of the condensate is non-vanishing, ⟨O⟩ ≠ 0, the system is in a superconducting phase. This corresponds holographically to a non-trivial configuration ψ(u) in the bulk, with a vanishing source coefficient in the near-boundary expansion of ψ(u) [7, 8]. We will study various superconducting systems, distinguished by the choice of the Lagrangian V(X) for the neutral scalar fields, where

\[ X = \frac{1}{2} L^2 g^{\mu\nu} \partial_\mu \phi^I \partial_\nu \phi^I\,, \quad (1.3) \]

and L is the radius of AdS. In this paper we will be mostly interested in the following models:

\[ \text{model 1:} \quad V(X) = \frac{X}{2 m^2}\,, \quad (1.4) \]
\[ \text{model 2:} \quad V(X) = X + X^5\,, \quad (1.5) \]
\[ \text{model 3:} \quad V(X) = \frac{X^N}{2 m^2}\,, \quad N \neq 1\,. \quad (1.6) \]

The model (1.4) gives the simplest way to describe the fields φ^I and has been proposed in [11]. We will argue that already in the simple case of (1.4) it is possible to have a superconducting dome. We will demonstrate this analytically at zero temperature.
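For later reference, the three potentials and the derivatives V̇, V̈ that enter the conductivity and instability formulas below can be collected in a few lines. This is only an illustrative sketch of the definitions (1.4)-(1.6); the function and parameter names are ours, and the 1/(2m²) normalization follows models 1 and 3.

```python
# Sketch of the three neutral-scalar potentials V(X) considered in the text,
# together with their first and second derivatives with respect to X,
# which enter the DC conductivity (3.1) and the instability function (4.7).

def model1(m):
    """Model 1: V(X) = X / (2 m^2), the linear (massive-gravity) case."""
    V     = lambda X: X / (2 * m**2)
    Vdot  = lambda X: 1.0 / (2 * m**2)
    Vddot = lambda X: 0.0
    return V, Vdot, Vddot

def model2():
    """Model 2: V(X) = X + X^5, the non-linear case."""
    V     = lambda X: X + X**5
    Vdot  = lambda X: 1.0 + 5.0 * X**4
    Vddot = lambda X: 20.0 * X**3
    return V, Vdot, Vddot

def model3(m, N):
    """Model 3: V(X) = X^N / (2 m^2), with N != 1."""
    V     = lambda X: X**N / (2 * m**2)
    Vdot  = lambda X: N * X**(N - 1) / (2 * m**2)
    Vddot = lambda X: N * (N - 1) * X**(N - 2) / (2 * m**2)
    return V, Vdot, Vddot
```

Note that for model 1 the second derivative vanishes identically, which is why, as discussed below, this model cannot produce a pseudo-insulating regime.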
Interestingly, the dome is achieved for the scaling dimension ∆ and the charge q of the scalar ψ, restricted to the small vicinity of the "dome" point, which we have found to be (∆ d , q d ) = (2.74, 0.6) . (1.7) In this case the superconducting dome exists in the middle of a normal metallic phase (the model (1.4) does not allow an insulating phase). We will show that the model with the non-linear Lagrangian (1.5) also possesses the superconducting dome near the point (1.7). In this case it is possible to engineer a model where the dome is separated from metallic phase at larger values of the translational symmetry breaking parameter m, and from a pseudo-insulating phase at smaller values of m. This situation is the closest one to the actual real phase diagram for High-Tc superconductors. We are not aware of other holographic constructions giving this result in this kind of set-up. It is of course pending to improve the model to get a real insulating phase; we leave this issue for future work. To support our statement about the superconducting dome with such a small critical temperature T c , we will calculate numerically the dependence of the critical temperature for the models (1.4), (1.5), on the scaling dimension ∆ and the charge q. We will show that as the (∆, q) approach the dome point (1.7), the critical temperature quickly declines. The rest of this paper is organized as follows. In the next Section 2 we set up the model which we will be studying in this paper. We consider the general Lagrangian V (X) for the massless neutral scalar fields. In Section 3 we review the properties of the normal phase solution. In Section 4 we study the conditions for its instability towards formation of a non-trivial profile of scalar hair. From the field theory point of view this corresponds to a superconducting phase transition. 
In Section 5 we focus on the features of the broken phase, namely the condensate and the grand potential, demonstrating explicitly the second order phase transition at T = T_c. In Section 6 we study the optical conductivity in the normal and superconducting phases. In Section 7 we describe the way to construct a superconducting dome in the middle of a metallic phase, for the model (1.4), and between pseudo-insulating and metallic phases, for the model (1.5). We discuss our results in Section 8. Appendix A contains further details about the calculations of the condensate and the grand potential. Appendix B is dedicated to the derivation of the on-shell action for bulk fluctuations, which are holographically dual to the current and momentum operators on the boundary.

Action and equations of motion

The total action of our model is

\[ I = I_1 + I_2 + I_3\,, \quad (2.1) \]

where we have denoted the Einstein-Maxwell terms I_1, the neutral scalar terms I_2, and the charged scalar terms I_3:

\[ I_1 = \int d^{d+1}x\, \sqrt{-g}\, \Big( R - 2\Lambda - \frac{L^2}{4} F_{\mu\nu} F^{\mu\nu} \Big)\,, \]
\[ I_2 = -2 m^2 \int d^{d+1}x\, \sqrt{-g}\, V(X)\,, \quad (2.2) \]
\[ I_3 = -\int d^{d+1}x\, \sqrt{-g}\, \Big( |D\psi|^2 + M^2 |\psi|^2 + \kappa\, H(X)\, |\psi|^2 \Big)\,. \]

We have introduced an extra coupling κ between the charged scalar ψ and the neutral scalars φ^I. In this paper we will mostly consider κ = 0, and comment on the models with non-vanishing κ in the discussion section 8. We have defined

\[ X = \frac{1}{2} L^2 g^{\mu\nu} \partial_\mu \phi^I \partial_\nu \phi^I\,. \quad (2.3) \]

We denote by D_μψ = (∂_μ − i q A_μ)ψ the standard covariant derivative of the scalar ψ with charge q. We fix the cosmological constant to Λ = −3/L². In this paper we consider a 4-dimensional bulk, d = 3.
The equations of motion following from the action I read:

\[ R_{\mu\nu} - \frac{1}{2}\Big( R - 2\Lambda - \frac{L^2}{4} F^2 - |D\psi|^2 - (M^2 + \kappa H)|\psi|^2 - 2 m^2 V \Big) g_{\mu\nu} = \Big( m^2 \dot V + \frac{1}{2} \kappa \dot H\, |\psi|^2 \Big) \partial_\mu \phi^I \partial_\nu \phi^I + \frac{L^2}{2} F_{\mu\lambda} F_\nu{}^{\lambda} + \frac{1}{2}\big( D_\mu\psi\, D_\nu\psi^* + D_\nu\psi\, D_\mu\psi^* \big)\,, \]
\[ \frac{1}{\sqrt{-g}}\, \partial_\mu \big( \sqrt{-g}\, F^{\mu\nu} \big) - \frac{i q}{L^2} \big( \psi^* D^\nu \psi - \psi\, D^\nu \psi^* \big) = 0\,, \quad (2.4) \]
\[ \frac{1}{\sqrt{-g}}\, D_\mu \big( \sqrt{-g}\, D^\mu \psi \big) - (M^2 + \kappa H)\, \psi = 0\,, \]
\[ \partial_\mu \Big( \sqrt{-g}\, \big( 2 m^2 \dot V + \kappa \dot H\, |\psi|^2 \big)\, g^{\mu\nu} \partial_\nu \phi^I \Big) = 0\,, \]

where the dot stands for a derivative w.r.t. X,

\[ \dot V(X) \equiv \frac{dV}{dX}\,, \qquad \dot H(X) \equiv \frac{dH}{dX}\,. \quad (2.5) \]

Background

We consider the following black brane ansatz for the background:

\[ ds^2 = L^2 \Big[ -\frac{f(u)\, e^{-\chi(u)}}{u^2}\, dt^2 + \frac{1}{u^2} \big( dx^2 + dy^2 \big) + \frac{1}{u^2 f(u)}\, du^2 \Big]\,, \]
\[ \phi^I = \alpha\, \delta^I_i\, x^i\,, \quad I, i = x, y\,, \quad (2.6) \]
\[ A = A_t(u)\, dt\,, \qquad \psi = \psi(u)\,. \]

The scalars φ^I have profiles linear in the spatial coordinates x, y of the boundary. They effectively describe momentum dissipation mechanisms in the boundary field theory, making the DC conductivity of the theory finite [11]. We take ψ to be real-valued, since due to the u component of the Maxwell equations the phase of the complex field ψ is constant. We are looking for charged black brane solutions with scalar hair, where u_h is the position of the horizon and the boundary is located at u = 0. We allow for a non-trivial χ(u) because we want to have in general a non-trivial ψ(u). If ψ = 0, then χ = 0. The resulting equations of motion read:

\[ \frac{q^2 u\, e^{\chi} A_t^2 \psi^2}{f^2} - \chi' + u\, \psi'^2 = 0\,, \quad (2.7) \]
\[ \psi'^2 - \frac{2 f'}{u f} + \frac{e^{\chi} u^2 A_t'^2}{2 f} + \frac{M^2 L^2 \psi^2}{u^2 f} + \frac{\kappa L^2 H \psi^2}{u^2 f} + \frac{e^{\chi} q^2 A_t^2 \psi^2}{f^2} + \frac{2 m^2 L^2 V}{u^2 f} + \frac{2 \Lambda L^2}{u^2 f} + \frac{6}{u^2} = 0\,, \quad (2.8) \]
\[ \frac{2 q^2 A_t \psi^2}{u^2 f} - \frac{\chi'}{2}\, A_t' - A_t'' = 0\,, \quad (2.9) \]
\[ \psi'' + \Big( -\frac{2}{u} + \frac{f'}{f} - \frac{\chi'}{2} \Big) \psi' + \Big( \frac{e^{\chi} q^2 A_t^2}{f^2} - \frac{M^2 L^2}{u^2 f} - \frac{\kappa H L^2}{u^2 f} \Big) \psi = 0\,. \quad (2.10) \]

The Hawking temperature of the black brane (2.6) is given by:

\[ T = -\frac{f'(u_h)}{4\pi}\, e^{-\chi(u_h)/2}\,. \quad (2.11) \]

Using eqs. (2.7)-(2.10), the temperature can be written as:

\[ T = -\frac{e^{-\chi/2}}{16 \pi u_h} \Big( -12 + 4 m^2 L^2 V + 2 (M^2 + \kappa H) L^2 \psi^2 + e^{\chi} u_h^4 A_t'^2 \Big)\,, \quad (2.12) \]

with all the fields evaluated at the horizon u_h.
Normal phase

In the case of a non-trivial condensate ψ(u) it is in general impossible to solve the background equations of motion (2.7)-(2.10) analytically. However, when ψ(u) = 0, the solution is known [16]. From now on we fix the coupling κ to zero,

\[ \kappa = 0\,. \quad (2.13) \]

The resulting normal phase background is given by:

\[ \psi(u) = 0\,, \qquad \chi(u) = 0\,, \quad (2.14) \]
\[ A_t(u) = \mu - u \rho\,, \qquad \mu = \rho\, u_h\,, \quad (2.15) \]
\[ f(u) = u^3 \int_u^{u_h} dy\, \Big( \frac{3}{y^4} - \frac{m^2 L^2\, V(\alpha^2 y^2)}{y^4} - \frac{\rho^2}{4} \Big)\,. \quad (2.16) \]

Due to (2.12) the temperature in the normal state reads:

\[ T = -\frac{1}{16 \pi u_h} \Big( -12 + 4 m^2 L^2 V(u_h^2 \alpha^2) + u_h^4 \rho^2 \Big)\,. \quad (2.17) \]

All the features of this normal phase solution are reviewed in detail in the following section.

Normal phase features

As suggested in [16], for models with a specific choice of the Lagrangian V(X) the solution exhibits various interesting properties. Using the membrane paradigm, the DC part (ω = 0) of the optical conductivity can be computed analytically [14], and for a generic Lagrangian V(X) it is given by [16]:

\[ \sigma_{DC} = \frac{1}{e^2} \Big( 1 + \frac{\rho^2 u_h^2}{2 m^2 \alpha^2\, \dot V(u_h^2 \alpha^2)} \Big)\,. \quad (3.1) \]

The DC conductivity consists of two parts,

\[ \sigma_{DC} = \sigma_{pair} + \sigma_{dissipation}\,, \quad (3.2) \]

which is a generic holographic feature. The first one, σ_pair, is due to pair creation in the background, and it is present even at zero charge density [25]. It corresponds exactly to the probe limit result. It is temperature independent, and therefore is always present (unless we introduce a dilaton field) as an offset in the value of σ_DC, leading to σ_DC(T = 0) ≠ 0. The second term, σ_dissipation, is really the one dealing with the dissipative mechanism, and it can be thought of as the strongly coupled analogue of the Drude formula for the conductivity. In the limit of a vanishing translational symmetry breaking parameter m, this second term gives rise to the infinite DC conductivity typical of backgrounds preserving translational symmetry, such as the AdS Reissner-Nordström black brane.
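For the linear model (1.4), where m²V(X) = X/2 and V̇ = 1/(2m²), the normal-phase temperature (2.17) and the DC conductivity (3.1) reduce to simple closed-form functions of the horizon position u_h. The following is a minimal numerical sketch, not the authors' code; units L = e = 1, and all parameter values are illustrative.

```python
import math

# Normal-phase quantities for the linear model V(X) = X/(2 m^2):
# the temperature (2.17) and the DC conductivity (3.1) as functions of u_h.

def temperature(uh, rho, alpha, m=1.0, L=1.0):
    # T = -(1 / (16 pi u_h)) * ( -12 + 4 m^2 L^2 V(u_h^2 alpha^2) + u_h^4 rho^2 )
    V = (uh**2 * alpha**2) / (2 * m**2)          # V(X) = X / (2 m^2)
    return -(-12 + 4 * m**2 * L**2 * V + uh**4 * rho**2) / (16 * math.pi * uh)

def sigma_dc(uh, rho, alpha, m=1.0):
    # sigma_DC = 1 + rho^2 u_h^2 / (2 m^2 alpha^2 Vdot), with Vdot = 1/(2 m^2);
    # note that m cancels out entirely for this model.
    Vdot = 1.0 / (2 * m**2)
    return 1.0 + rho**2 * uh**2 / (2 * m**2 * alpha**2 * Vdot)
```

For this model σ_DC = 1 + ρ²u_h²/α² is monotonic in u_h (i.e. in the temperature), so the linear theory is always metallic, consistent with the statement below that model (1.4) does not allow an insulating phase.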
Due to the freedom in the choice of the Lagrangian V(X), this solution can describe either a metal or a pseudo-insulator, and can provide a transition between the two phases (see figure 2). The pseudo-insulating phase is characterized by a conductivity which declines at smaller temperatures, dσ/dT > 0 for T < T_0, but which reaches a non-vanishing value at T = 0 (this is the reason why we are not calling it an insulating phase). The transition between the two phases is provided by the existence of a maximum of the DC conductivity as a function of temperature (see figure 2), at T = T_0, which gives a clear separation between two different regimes:

\[ \frac{d\sigma}{dT} < 0\,, \quad T > T_0 \quad \rightarrow \quad \text{metal}\,, \quad (3.3) \]
\[ \frac{d\sigma}{dT} > 0\,, \quad T < T_0 \quad \rightarrow \quad \text{pseudo-insulator}\,. \quad (3.4) \]

The temperature T_0 at which the metal-insulator transition happens can be obtained analytically by solving the following equation:

\[ \frac{d\sigma_{DC}}{du_h} = 0 \quad \Rightarrow \quad Y \ddot V(Y) = \dot V(Y)\,, \qquad Y = u_h^2 \alpha^2\,. \quad (3.5) \]

The metal-insulator transition in the behavior of the DC conductivity is related to a non-trivial structure in the optical conductivity, namely a weight transfer from a Drude peak into a new localized peak in the mid-infrared regime (see figure 2). This feature corresponds to an emerging collective propagating excitation of the charge carriers, whose nature is not completely clear yet. The phase diagram of this normal phase is already rich and can give insights into the interpretation of the various ingredients introduced into the model. In the case of the linear Lagrangian, which goes back to the original model [4], the parameters m and α combine into mα, which can be interpreted as the strength of the translational symmetry breaking. From the dual field theory point of view this is thought to be related to some sort of homogeneously distributed density of impurities, representing the doping of the material.
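As a concrete illustration of condition (3.5): with the non-linear choice V(X) = X + X⁵ of model (1.5), one has V̇ = 1 + 5X⁴ and V̈ = 20X³, so Y V̈(Y) = V̇(Y) reduces to 15Y⁴ = 1, which can be checked against a numerical root finder. A standard-library-only bisection sketch (function names are ours):

```python
# Solving the turnover condition (3.5), Y * Vddot(Y) = Vdot(Y), numerically.
# For V(X) = X + X^5 the condition reads 20 Y^4 = 1 + 5 Y^4, i.e. Y^4 = 1/15.

def turnover_Y(Vdot, Vddot, lo=1e-6, hi=10.0, tol=1e-12):
    g = lambda Y: Y * Vddot(Y) - Vdot(Y)     # g changes sign at the turnover
    assert g(lo) * g(hi) < 0, "root not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Y0 = (u_h alpha)^2 at the conductivity maximum; analytically (1/15)**0.25.
Y0 = turnover_Y(lambda Y: 1.0 + 5.0 * Y**4, lambda Y: 20.0 * Y**3)
```

The corresponding transition temperature T_0 then follows by inserting u_h² = Y_0/α² into the normal-phase temperature (2.17).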
In the case of a more general V(X), the parameter m keeps this kind of interpretation, while α represents the strength of the interactions of the neutral scalar sector. This reasoning is confirmed by the study of the phase diagrams of the system (figure 3), which makes evident the difference between the two parameters. Indeed, while the parameter m, which we are going to interpret as the doping of our high-Tc superconductor, enhances the metallic phase, the parameter α clearly reduces the mobility of the electronic sector, driving the system towards the pseudo-insulating phase.

Superconducting instability

In this section we describe the instability conditions of the normal phase towards the development of a non-trivial profile of the charged scalar field. This allows one to determine a line of second order superconducting phase transitions, T_c(α) (or T_c(m) for the model (1.5)), in the boundary field theory with broken translational symmetry. We start by considering the system at zero temperature, which we are able to study analytically. Then we proceed to studying the normal phase at finite temperature. Upon lowering the temperature, at a certain critical value T = T_c, the normal phase becomes unstable. This is the point of the superconducting phase transition. We construct numerically T_c as a function of the parameters ∆, q, α (or m), for the models with various V(X).

Zero-temperature instability

In the case T = 0 the normal phase geometry interpolates between AdS_4 in the ultraviolet and AdS_2 × R² in the infrared. We can apply the known analytical calculation to study the stability of the normal phase towards the formation of a non-trivial profile of the scalar ψ [28]. Due to eq. (2.10), the effective mass M_eff of the scalar ψ is given by:

\[ M_{eff}^2 L^2 = M^2 L^2 + \kappa H L^2 + q^2 A_t^2\, g^{tt} L^2\,. \quad (4.1) \]

Notice that at the boundary the mass of the scalar is just M², but at the horizon it gets an additional contribution.
This is because near the horizon, at zero temperature, we have:

\[ g^{tt} = -\frac{2 u_h^2}{L^2 f''(u_h)\, (u_h - u)^2}\,. \quad (4.2) \]

Due to (2.16) and the zero temperature condition T = 0, with the temperature given by (2.17), we obtain:

\[ f''(u_h) = \frac{2 \Big( 6 + L^2 m^2 \big( -2 V(u_h^2 \alpha^2) + u_h^2 \alpha^2\, \dot V(u_h^2 \alpha^2) \big) \Big)}{u_h^2}\,. \quad (4.3) \]

The normal phase is unstable towards the formation of scalar hair if M_eff violates the BF stability bound in AdS_2, namely:

\[ M_{eff}^2 L_2^2 < -\frac{1}{4}\,. \quad (4.4) \]

In (4.4) we have denoted the AdS_2 radius by L_2:

\[ L_2^2 = \frac{2 L^2}{f''(u_h)\, u_h^2}\,. \quad (4.5) \]

(Footnote 8: In the usual RN case we have f''(u_h) = 12/u_h², and we find the usual L_2² = L²/6 in d = 3.)

Combining (4.1), (4.3), (4.5), the IR instability condition (4.4) finally reads:

\[ D < 0\,, \quad (4.6) \]

where we have defined the function D as:

\[ D = \frac{1}{4} + \frac{L^2 (\kappa H + M^2)}{L^2 m^2 \big( \alpha^2 u_h^2\, \dot V(\alpha^2 u_h^2) - 2 V(\alpha^2 u_h^2) \big) + 6} - \frac{q^2 \rho^2 u_h^4}{\Big( L^2 m^2 \big( \alpha^2 u_h^2\, \dot V(\alpha^2 u_h^2) - 2 V(\alpha^2 u_h^2) \big) + 6 \Big)^2}\,. \quad (4.7) \]

(Footnote 9: This formula agrees with [5] in the case of V(X) = X/(2m²) and κ = 0.)

For the practical calculations we solve the equation T = 0, see (2.17), for the value of u_h, giving the position of the horizon of the extremal black brane:

\[ -12 + u_h^4 \rho^2 + 4 L^2 m^2 V(u_h^2 \alpha^2) = 0\,. \quad (4.8) \]

We measure all dimensional quantities in units of ρ; for both the zero-temperature and the finite-temperature instability analyses, ρ can be scaled out. In figure 4 we plot the IR instability region on the (∆, q) plane for model 1, (1.4), with α = 2, as well as a few contour lines of constant critical temperature. In figure 5 we plot the IR instability region and several T_c = const curves on the (∆, q) plane for model 2, (1.5), with α = 0.25, m = 4. An analogous plot for the ordinary holographic superconductor can be found in [28]. The plot in the case of the linear V(X) model first appeared in [5].
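The zero-temperature instability test is easy to automate. For the linear model (1.4) with κ = 0, in units L = ρ = 1, one has m²(α²u_h²V̇ − 2V) = −α²u_h²/2, and the extremality condition (4.8) becomes a quadratic equation for u_h². The following sketch is ours, with the dome point (1.7) used only as an illustrative parameter choice:

```python
import math

# Zero-temperature IR instability check, eqs. (4.6)-(4.8), for the linear
# model V(X) = X/(2 m^2) with kappa = 0, in units L = rho = 1.

def extremal_uh(alpha):
    # (4.8) reduces to -12 + u_h^4 + 2 alpha^2 u_h^2 = 0; take the positive root.
    uh2 = -alpha**2 + math.sqrt(alpha**4 + 12.0)
    return math.sqrt(uh2)

def D(delta, q, alpha):
    # The function D of (4.7); D < 0 signals violation of the AdS2 BF bound.
    uh = extremal_uh(alpha)
    M2 = delta * (delta - 3.0)                  # M^2 L^2 = Delta (Delta - 3)
    den = 6.0 - alpha**2 * uh**2 / 2.0          # L^2 m^2 (...) + 6 for this model
    return 0.25 + M2 / den - q**2 * uh**4 / den**2
```

At the dome point (∆, q) = (2.74, 0.6) this small check reproduces the dome structure discussed in Section 7: D is positive at small α, dips below zero in an intermediate window, and becomes positive again at large α, so the superconducting instability exists only at intermediate values of the symmetry-breaking parameter.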
Finite-temperature instability

Consider the system at large temperature in the normal phase, which exists in a superconducting phase at low temperatures. As we decrease the temperature, at a certain critical value T_c the superconducting phase transition occurs. If T_c is non-vanishing, then for T < T_c the system is in a superconducting phase, with a non-trivial scalar condensate ψ(u). Recall that near the boundary the scalar field with mass M,

\[ M = \frac{1}{L} \sqrt{\Delta (\Delta - 3)}\,, \quad (4.9) \]

behaves asymptotically as:

\[ \psi(u) = \psi_1\, \frac{u^{3-\Delta}}{L^{3-\Delta}} + \psi_2\, \frac{u^{\Delta}}{L^{\Delta}}\,, \quad (4.10) \]

where ψ_1 is the leading term, identified as the source in the standard quantization.

Figure 8. Critical temperature as a function of α for the potential V(X) = X/(2m²) + βX⁵/(2m²) for different choices of β. All the curves have a runaway behavior at α → ∞, and only the shape depends on the value of β.

To find the value of T_c we can look for an instability of the normal phase towards the formation of a scalar field profile [28, 29]. Near the second order phase transition point T = T_c the value of ψ is small, and therefore one can neglect its backreaction on the geometry. The SC instability can be detected by looking at the motion of the QNMs of ψ in the complex plane. To be more specific, it corresponds to a QNM moving into the upper half of the complex plane. Exactly at the critical temperature we have a static mode at the origin of the complex plane, ω = 0, and the source at the boundary vanishes, ψ_1 = 0. In the next section we will solve numerically the equations (2.7)-(2.10) for the whole background, and confirm this explicitly. The scalar field is described by eq. (2.10), which in the normal phase becomes:

\[ \psi'' + \Big( -\frac{2}{u} + \frac{f'}{f} \Big) \psi' + \Big( \frac{q^2 A_t^2}{f^2} - \frac{M^2 L^2}{u^2 f} - \frac{\kappa H L^2}{u^2 f} \Big) \psi = 0\,, \quad (4.11) \]

where f(u) is given by (2.16) and A_t(u) by (2.15). To determine the critical temperature T_c we need to find the highest temperature at which there exists a solution of eq. (4.11) satisfying the condition ψ_1 = 0.
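The dictionary (4.9) between the mass M and the dimension ∆ is often needed in the opposite direction. A two-line helper (ours), assuming standard quantization, i.e. ∆ is the larger root and ψ_1 multiplies the u^{3−∆} fall-off:

```python
import math

# Invert (4.9): M^2 L^2 = Delta (Delta - 3) in AdS4 gives the two fall-off
# powers (3 - Delta) for the source psi_1 and Delta for the v.e.v. psi_2.

def dimensions(M2L2):
    """Return the pair of fall-off powers (3 - Delta, Delta) in AdS4."""
    assert M2L2 >= -9.0 / 4.0, "mass below the AdS4 BF bound"
    delta = 1.5 + math.sqrt(2.25 + M2L2)   # larger root of x^2 - 3x - M2L2 = 0
    return 3.0 - delta, delta
```

The two roots always sum to 3, which is a quick consistency check on any numerically extracted pair of fall-offs.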
In this case for T < T_c the system is in a superconducting state, with a non-vanishing condensate ψ_2. We are interested in the phases of the models (1.4)-(1.6) on the temperature-doping plane. In figure 6 we plot T_c(α) for model 1, (1.4), and model 3, (1.6), with N = 1/2, 2, 3, for different values of the charge q. It is clear that when the power N in the potential V is higher, the critical temperature of the SC phase transition is smaller. One interesting feature, which still lacks an interpretation, is the non-monotonic behavior of T_c as a function of α, which was already observed in the original model [4, 5] and persists in more generic setups. In figure 7 we plot T_c(∆) for q = 0.6 and α = 1 for model 1, (1.4), and model 3, (1.6), with N = 1/2, 2, 3. The T_c(∆) curves explicitly show that the critical temperature quickly declines as ∆ approaches the border of the IR instability region. It is further underlined how higher powers/non-linearities in the potential lead to a deeper suppression of the critical temperature. We also plot T_c(α) for the generalized model 2, (1.5), in figure 8 for various amounts of the non-linearity βX⁵, showing again the same suppression of the superconducting phase at larger β.

Broken phase and phase diagram

In this section we study the superconducting phase and construct the phase diagram on the (m, T) plane of the model (1.5). We will confirm the existence of the second order phase transition between the normal and superconducting phases by solving the four equations (2.7)-(2.10) for the fully backreacted background. Knowing the near-boundary asymptotic behavior of this solution, one can determine the grand potential of the superconducting phase and compare it with the grand potential of the normal phase, to corroborate the phase transition at T = T_c.
Running the numerical procedure described in detail in appendix A, we were able to construct the condensate ψ_2/ρ^{∆/2} as a function of the temperature T/ρ^{1/2}. In figure 9 we provide the plot of the condensate for the model (1.5) with ∆ = 2, q = 1, α u_h = 0.5 (α in units of the entropy density), mL = 1. There we also plot the grand potential for the broken and normal phases, which confirms the superconducting transition at T = T_c. The holographic prescription for the calculation of the grand potential is:

\[ \Omega = -T \log Z = T S_E\,, \quad (5.1) \]

where S_E is the Euclidean on-shell action of the bulk theory. After some computations, shown in detail in appendix A, we obtain:

\[ S_E = \int \frac{d^3x}{16\pi}\, \big( S\, T + 2 L^2 \gamma_2 + L^2 \mu \rho \big)\,, \quad (5.2) \]

where we have also used the area-law expression for the entropy:

\[ S = \frac{L^2}{4 u_h^2}\,. \quad (5.3) \]

The grand potential is finally given by:

\[ \Omega = -P V = -\frac{1}{16\pi}\, S_E\,, \quad (5.4) \]

where V is the volume of the spatial region. In conclusion we obtain (denoting ρ̄ = ρL²) the expected thermodynamic relation:

\[ -P = \epsilon - T S - \mu \bar\rho\,, \quad (5.5) \]

where the energy density of the system is given by:

\[ \epsilon = -2 \gamma_2 L^2\,. \quad (5.6) \]

We now have enough information to construct the full phase diagram of the non-linear model (1.5). In figure 10 we plot the phase diagram of the model (1.5) with ∆ = 3, q = 4, α = 0.7 (in units of ρ = 1). We see that the superconducting region can be connected smoothly to both a metallic phase and a pseudo-insulating phase.

Optical conductivity

Our main aim in this section is to see whether the non-trivial structure in the optical conductivity (see figure 2), pointed out for the model (1.5) in the normal phase [16], persists in the superconducting phase.
6.1 Fluctuation equations

In order to compute the optical conductivity, we study the following fluctuations on top of the charged black brane background with spatially-dependent neutral scalars:

δg_tx(t, u, y) = ∫_{−∞}^{+∞} (dω/2π) e^{−iωt + iky} h_tx(u)/u² ,
δφ_x(t, u, y) = ∫_{−∞}^{+∞} (dω/2π) e^{−iωt + iky} ξ(u) ,   (6.1)
δA_x(t, u, y) = ∫_{−∞}^{+∞} (dω/2π) e^{−iωt + iky} a_x(u) .

In the rest of this section we consider the homogeneous case k = 0, for which it is consistent to set all fluctuations besides (6.1) to zero. In this section we also put L = 1. The equations for the perturbations read 10:

a_x'' + ( f'/f − χ'/2 ) a_x' + ( e^χ ω²/f² − 2q²ψ²/(u²f) ) a_x + ( e^χ A_t'/f ) h_tx' = 0 ,   (6.2)

u² a_x A_t' + h_tx' + ( 2 i e^{−χ} m² α f V'(u²α²)/ω ) ξ' = 0 ,   (6.3)

ξ'' + ( −2/u + f'/f − χ'/2 + 2uα² V''(u²α²)/V'(u²α²) ) ξ' + ( e^χ ω²/f² ) ξ − ( i e^χ α ω/f² ) h_tx = 0 .   (6.4)

One can eliminate h_tx' from the second equation, (6.3), right away and substitute it into the equations for a_x and ξ [11]. It is then convenient to perform the redefinition

ζ(u) = ( f(u)/(iωα u²) ) ξ'(u) ,   (6.5)

and reduce the problem to a 2×2 system:

( e^{−χ/2} f a_x' )' + e^{−χ/2} ( −2q²ψ²/u² + e^χ(ω² − u²f A_t'²)/f ) a_x + 2m²α²u² e^{−χ/2} V' A_t' ζ = 0 ,   (6.6)

( u²f/(e^{−χ/2}V') ) ( e^{−χ/2} V' ζ' )' + u² A_t' a_x' + ( ω²u²/f − 2m²α² e^{−χ} V' ) ζ = 0 ,   (6.7)

which in the normal phase agrees with the equations derived in [16].

6.2 Superconducting phase

In order to extract the optical conductivity of the system we first derive the on-shell action for the fluctuations. We leave the technical steps for appendix B, while here we just quote the result:

I^f_tot = ∫ dω ( a⁽¹⁾_x(−ω), Z⁽¹⁾(−ω) ) M ( a⁽²⁾_x(−ω), Z⁽²⁾(−ω) )ᵀ ,   (6.8)

where ζ = Z/u and we have defined the matrix M to be:

M = diag( 1 , 2m²α²V₁ / ( 1 − 2√2 + √2 ω²/(m²α²V₁) ) ) ,   (6.9)

and expanded the fluctuations near the boundary u = 0 as:

a_x(u, ω) = a⁽¹⁾_x(ω) + a⁽²⁾_x(ω) u ,   (6.10)
Z(u, ω) = ( f(u)/(iωαu) ) ξ'(u, ω) ,   (6.11)
Z(u, ω) = Z⁽¹⁾(ω) + Z⁽²⁾(ω) u .
(6.12)

We solve the two coupled fluctuation equations (6.6), (6.7) numerically, for two independent sets of initial conditions which satisfy the infalling behavior near the horizon [30] 11. Due to the linearity of the fluctuation equations (6.6), (6.7), the precise choice of the two sets of initial conditions is not important, and one can check that the correlation matrix does not depend on it. For example, let us choose:

( a⁽¹⁾_x , Z⁽¹⁾ ) = ( 1, 1 ) (u_h − u)^{ iω e^{χ_h/2}/f'(u_h) } ,   (6.13)
( a⁽²⁾_x , Z⁽²⁾ ) = ( 1, −1 ) (u_h − u)^{ iω e^{χ_h/2}/f'(u_h) } .   (6.14)

Figure 11. The AC conductivity for the model (1.5) with α = √2 (in units of 1/u_h), m²L² = 0.025, q = 4, and ∆ = 2. The black line is at a temperature slightly below the corresponding critical temperature T_c/ρ^{1/2} ≈ 0.16, and matches the result of the normal-phase calculation at T = T_c. The red, blue, orange and green lines are for T/ρ^{1/2} = 0.15, 0.12, 0.09, 0.06, respectively. Notice that, as we decrease the temperature, between the blue and the orange line the peak in the imaginary part of the AC conductivity disappears. We call the corresponding temperature T̃/ρ^{1/2} ≈ 0.1. We also provide the condensate as a function of temperature and mark the points where we calculated the AC conductivity.

Near the boundary the fields behave as:

( a⁽ʲ⁾_x , Z⁽ʲ⁾ ) = ( A⁽ʲ⁾_a , A⁽ʲ⁾_Z ) + ( B⁽ʲ⁾_a , B⁽ʲ⁾_Z ) u ,   j = 1, 2 ,   (6.15)

and we can assemble the matrices of leading and subleading coefficients:

A = [ A⁽¹⁾_a  A⁽¹⁾_Z ; A⁽²⁾_a  A⁽²⁾_Z ] ,   B = [ B⁽¹⁾_a  B⁽¹⁾_Z ; B⁽²⁾_a  B⁽²⁾_Z ] .   (6.16)

We collect the entries of the matrices (6.16) by integrating the equations for the fluctuations numerically and extracting the asymptotic behavior using (6.15). Knowing (6.16) and (6.9), we can finally calculate the correlation matrix:

G = M B A⁻¹ .   (6.17)

Finally, from the correlation matrix (6.17) it is straightforward to find the AC conductivity in the superconducting phase, using the Kubo formula:

σ(ω) = G₁₁/(iω) .
(6.18)

In figure 11 we plot the AC conductivity for the model (1.5) with the non-linear Lagrangian for the neutral scalars, for ∆ = 2, q = 4, α u_h = √2, m²L² = 0.025. We consider various values of the temperature, running from the normal phase to the superconducting phase. The AC conductivity for the normal phase of the model (1.5) first appeared in [16], where it was shown that (temperatures are measured in units of the square root of the charge density):

1. For T > T₀ (T₀ ≈ 0.46 for the considered model) the system exhibits a metallic behavior, dσ_DC/dT < 0.

2. For T < T₀ the system exhibits an insulating behavior, dσ_DC/dT > 0.

3. For T̄ < T < T₀, where T̄ ≈ 0.35 (for the considered model), a non-trivial structure in the AC conductivity appears. To be more precise, a mid-infrared peak shows up, signaling a weight-transfer mechanism and an emerging collective degree of freedom. These properties are illustrated in figure 3. We checked that the sum rules for the optical conductivity are satisfied in both the normal and the broken phase.

After we couple this model of [16], with the potential (1.5) for the neutral scalars, to the superconducting sector, more features appear. For the choice of parameters ∆ = 2, q = 4, α u_h = √2, m²L² = 0.025 we continue to enumerate what happens as we decrease the temperature:

4. At T_c ≈ 0.16 (for the considered model) the second order phase transition occurs. The system lives in a superconducting phase when T < T_c.

5. At T = T̃ ≈ 0.1 (for the considered model) the peak in the imaginary part of the AC conductivity disappears. The peak in the real part of the AC conductivity in the superconducting phase gets smaller as the temperature is lowered and eventually disappears.

These properties can be seen in figure 11. We will comment more on these features in the discussion, section 8.
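The linear-algebra step (6.16)-(6.18) can be sketched as follows. This is a hedged illustration: the coefficient matrices below are invented placeholders rather than output of the actual integration, and the helper name is ours. The check also illustrates why the choice of the two sets of infalling initial conditions does not matter: a change of basis S multiplies A and B on the right and cancels in B A⁻¹.

```python
import numpy as np

def conductivity(A, B, M, omega):
    """Assemble G = M B A^{-1} and return sigma = G_11 / (i omega)."""
    G = M @ B @ np.linalg.inv(A)
    return G[0, 0] / (1j * omega)

# placeholder leading/subleading coefficient matrices for two solutions
A = np.array([[1.0 + 0.2j, 0.5], [0.3, 1.0 - 0.1j]])
B = np.array([[0.1, 0.7j], [0.4 - 0.2j, 0.9]])
M = np.diag([1.0, 2.0])          # stand-in for the diagonal matrix (6.9)
omega = 0.5

# an arbitrary invertible change of the two sets of initial conditions
S = np.array([[2.0, 1.0], [0.0, 1.5]])
s1 = conductivity(A, B, M, omega)
s2 = conductivity(A @ S, B @ S, M, omega)   # same sigma: S cancels in B A^{-1}
```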
It would be very interesting to find the QNM excitations of the system, in both the normal and the broken phase, to have complete control over its transport properties and its collective excitations. We leave this topic for future studies.

7 Dome of superconductivity

In this section we describe how to construct a superconducting dome by tuning the parameters of the model (1.5) with the non-linear Lagrangian for the neutral scalars. In nature, high-Tc superconductors exhibit a dome of superconductivity (see figure 1) between insulating and metallic normal phases. Due to the limitations of our system we cannot construct an actual insulator; however, the non-linear model (1.5) still allows one to distinguish between two qualitatively different states of the normal phase, (3.3) and (3.4). The first observation is that when we decouple the translational-symmetry-breaking sector of the neutral scalar fields, by setting m = 0, we restore the framework of an ordinary holographic superconductor. Therefore, in order to confine the superconducting phase inside a dome, we need to make sure that the ordinary holographic superconductor exists in the normal state at any temperature. The way to achieve this is to make sure that the parameters ∆ and q are such that the normal phase at T = 0 is stable. That is, we should have D > 0, where D is given by (4.7), with κ = 0 and m = 0. The T = 0 IR stability condition therefore reduces to a well-known inequality, which reads:

3 + 2∆(∆ − 3) − 4q² > 0 .   (7.1)

In this way we stay on top of the T = 0 line on the (∆, q) plane of the ordinary holographic superconductor. The next step in engineering a model exhibiting a superconducting dome is to restore a superconductor at a finite value of m = m₁, and then make sure that there is another value m₂ > m₁ such that the system at m > m₂ is again in a normal phase. The procedure to search for the parameters which lead to the superconducting dome is the following. For the chosen value of α we plot the D = 0 curves on the (∆, q) plane, with D given by (4.7), parametrized by various values of m. We search for points (∆, q) where two curves, corresponding to two different values of m, intersect. These values of m can be the boundaries of the dome region at T = 0. We then verify this explicitly by plotting D for the given α, ∆, q. In figure 12 we plot the D = 0 curves for α = 0.25 on the (∆, q) plane, and demonstrate explicitly that the dome requirement restricts us to a small sub-region of the (∆, q) plane. In figure 13 we repeat this for α = 0.5, and also plot the corresponding phase diagram. The superconducting phase is bounded from above by a small critical temperature, and is represented on the graph by a red interval.

We have found that the requirement of having an interval of superconductivity [m₁, m₂] at T = 0 is rather restrictive 12. In order to achieve the 'dome' at vanishing temperature we need to tune ∆ and q to a small subregion of the region (7.1), centered around the point

(∆_d, q_d) ≈ (2.74, 0.6) .   (7.2)

For such ∆ and q we can engineer a model which, at T = 0, exists in a normal pseudo-insulating phase for m ∈ [0, m₁], in a superconducting phase for m ∈ [m₁, m₂], and in a normal metallic phase for m > m₂. The next step in constructing the superconducting dome is to study the phase structure of the system at finite temperature. To determine the boundary of the superconducting region, that is, the line of the second order superconducting phase transition, we can start in the normal phase, at larger values of the temperature, and determine when it becomes unstable towards the formation of the scalar hair. This procedure has been reviewed in Subsection 4.2. However, the point (7.2) is very close to the boundary of the T = 0 infrared instability region of the model (1.5). This behavior is rather generic and leads to the conclusion that the height of the dome is very limited: T_c is very small and not accessible through stable numerical analysis.
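The first screening step at m = 0 can be sketched with the inequality (7.1) alone. The snippet below (function name ours) scans the T = 0 stability condition of the normal phase and checks that the dome candidate point of (7.2) sits just inside the stable region, close to its boundary, consistent with the very small T_c found there.

```python
def stable_at_T0(delta, q):
    """T = 0 IR stability of the normal phase at m = 0:
    3 + 2*Delta*(Delta - 3) - 4*q^2 > 0 (inequality (7.1))."""
    return 3 + 2 * delta * (delta - 3) - 4 * q * q > 0

# margin of the dome candidate (Delta_d, q_d) = (2.74, 0.6) from the D = 0 line
margin = 3 + 2 * 2.74 * (2.74 - 3) - 4 * 0.6**2
```

A large-charge scalar such as (∆, q) = (3, 2) violates the bound and condenses already at m = 0, so it cannot be confined to a dome.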
Another way of seeing this issue relies on noticing that the BF bound is only very mildly violated in the dome region, so that the instability is very soft. We have repeated a similar dome analysis for the model (1.4), with the parameter A = α m playing the role of the doping. Interestingly enough, we have found that again, for ∆ and q tuned to a small vicinity of the point (7.2), we obtain a superconducting dome. This time, however, the normal phase can only be metallic. The superconducting dome at T = 0 is an interval [A₁, A₂], existing between two regions of normal metallic state, at A ∈ [0, A₁] and A > A₂. The critical temperature is bounded from above by a small number, and we did not access the finite-temperature superconducting state. We plot our results for the dome in model (1.4) in figure 14. This analysis shows that the existence of a superconducting dome region is a rather generic feature of these models, independent of the choice of the potential. In the next section we discuss possible ways to alleviate the problem of the flatness of the dome. This seems to require the introduction of extra elements into our holographic system.

8 Discussion

In this paper we considered a holographic superconductor with broken translational symmetry, continuing the research initiated in [4,5]. To break the translational symmetry we used the known technique [11] of coupling our system to a sector of massless neutral scalar fields depending linearly on the spatial coordinates. We studied the standard Lagrangian for these neutral scalars, as well as its non-linear generalization, proposed in [16]. We have constructed models exhibiting the following non-trivial new features:

1. The holographic superconductor in the non-linear Lagrangian model has a rich phase diagram on the temperature-doping plane.
In particular, the superconducting phase is separated from the normal pseudo-insulating phase and the normal metallic phase by a line of second order phase transitions, as shown in figure 10.

2. In the same model the optical conductivity exhibits a non-trivial emerging structure, signaling a collective excitation of the charge carriers localized in the mid-infrared. This was observed in [16] in the normal phase of the same model, for temperatures lower than a certain critical value. In this paper we have demonstrated that this structure persists in the superconducting phase. Eventually it gets destroyed by the charge condensate. This suggests a possible competition between the superconducting mechanism and the momentum-dissipating one. In particular, it seems clear that a large superfluid density completely screens this collective excitation, which in a sense gets eaten by the large condensate. We are not aware of a real superconducting system supporting a collective localized excitation like the one we see. It would be nice to see if other holographic models providing translational symmetry breaking support the same property. In this direction it would be very interesting to study the QNM structure of the system, as initiated in [32].

3. We performed a complete analysis of the behavior of the critical (superconducting) temperature as a function of the various parameters of our model. In particular we studied the curious non-monotonic behavior of T_c as a function of the graviton mass m, which was already observed in [4,5]. Our results suggest that this feature persists for a generic Lagrangian for the neutral scalars. We do not have any clear explanation of the large-mass regime, where T_c actually increases with the strength of translational symmetry breaking.
It is even tempting to doubt the model in that regime, recalling the following known issues: for large momentum dissipation it seems that the energy density of the dual field theory at zero charge density becomes negative [32]; the diffusion bounds for the model are unrestricted from below, and the diffusion constants go to zero in that limit [33]. A very similar behavior has been observed in holographic SC with helical lattices [6] and with disorder [2]. It would be interesting to further analyze the universality and the meaning of this feature.

It would be interesting to improve the model so that the pseudo-insulating phase is replaced by an actual insulating phase. This would make the phase diagram more closely resemble that of an actual high-Tc superconductor, and could be easily achieved by introducing a dilaton field into the model. Another interesting open problem is to increase the critical temperature of the superconductor enclosed by the dome-shaped line. One way to accomplish this might be the inclusion of a non-trivial coupling κ between the charged scalar condensate and the neutral scalars. A further interesting question is to look at universal properties of this large class of effective toy models, such as the fulfillment of Homes' law, following [35]. We leave these interesting questions for future investigation.

We have checked explicitly that the results are stable towards changing ε. We have the freedom of choosing the initial conditions ψ(u_h), A_t'(u_h), and χ(u_h). The freedom in the choice of χ(u_h) is spurious, due to the time scaling symmetry, as we discuss below. The values of ψ(u_h) and A_t'(u_h) are fixed by the requirement of having a fixed temperature T/ρ^{1/2} and zero source ψ₁ = 0, see (4.10). Both the charge density ρ, in units of which we measure the temperature, and the source ψ₁ are determined by the near-boundary behavior of the numerical solution, with the gauge field behaving as:

A_t(u) = µ − ρ u + O(u²) .
(A.2)

In practice we do the following. Suppose the temperature is sufficiently small, so that the system is in a superconducting phase. We know that increasing the temperature will decrease the condensate ψ₂/ρ^{∆/2}, until finally at the critical temperature T_c the condensate vanishes. At that point ψ(u_h) = 0; that is, we no longer have a solution with vanishing source and a non-trivial profile of ψ(u) in the bulk. Therefore we can start at ψ(u_h) = 0 and take gradually increasing values of ψ(u_h). For each value of ψ(u_h) we search for the A_t'(u_h) such that ψ₁ = 0. For an example of this kind of result see figure 15. Finally, for the given pair (ψ(u_h), A_t'(u_h)), we calculate numerically T/ρ^{1/2} and ψ₂/ρ^{∆/2}, as shown for example in figure 9.

Scaling symmetry

The equations of motion (2.7)-(2.10) are invariant under the scaling symmetry

u = ũ/a ,   (t, x, y) = (t̃, x̃, ỹ)/a ,   A_t = Ã_t a ,   α = α̃ a ,   (A.3)

where a is the parameter of the symmetry transformation. The temperature, chemical potential, and charge density therefore transform as

T = a T̃ ,   µ = a µ̃ ,   ρ = a² ρ̃ .   (A.4)

The scaling symmetry (A.3) allows one to fix u_h = 1. If u_h is not fixed to one, then we should substitute u_h^{−2} Ã_t'(u_h) as the initial condition for the flux at the horizon. We have checked explicitly that the results are invariant under a change of u_h.

Time scaling symmetry

The equations of motion (2.7)-(2.10) are also invariant under the time scaling symmetry

e^χ = b² e^χ̃ ,   t = b t̃ ,   A_t = Ã_t/b ,   (A.5)

where b is the parameter of the symmetry transformation. We can use the time scaling symmetry (A.5) to fix χ(0) = 0 at the boundary. This is necessary so that the speed of light in the boundary field theory is equal to one. To achieve this, we impose the initial condition on χ to be χ(u_h) + 2 log b, and on the flux to be Ã_t'(u_h)/b. We fix χ(u_h) once and for all. We have demonstrated explicitly that the result is independent of the choice of χ(u_h).
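The two-pass fix of χ(0) can be checked on any profile: shifting χ by the constant 2 log b with b = e^{−χ(0)/2} sets χ(0) = 0, while the combination e^{χ/2} A_t, which the transformation leaves invariant, is untouched. A toy check on invented profiles (not actual solver output):

```python
import numpy as np

u = np.linspace(0.0, 1.0, 101)
chi = 0.7 + 0.3 * u**2          # pretend first-pass result with chi(0) = 0.7
At = 1.2 * (1.0 - u)            # pretend gauge-field profile

b = np.exp(-chi[0] / 2.0)       # the choice b = exp(-chi(0)/2) described above
chi2 = chi + 2.0 * np.log(b)    # second-pass chi: shifted so that chi2(0) = 0
At2 = At / b                    # second-pass gauge field, rescaled by 1/b
```

Algebraically, e^{χ₂/2} A_t₂ = (b e^{χ/2})(A_t/b) = e^{χ/2} A_t, so no physical data is changed by the second pass.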
After fixing χ(u_h), for the given Ã_t'(u_h), we integrate the equations of motion numerically, with b = 1. We then set b = e^{−χ(0)/2}, where χ(0) is determined numerically. For this b we impose the initial conditions χ(u_h) + 2 log b and Ã_t'(u_h)/b, and integrate the equations of motion again. This time, due to the time scaling symmetry (A.5), we have χ(0) = 0. We have verified this explicitly.

Running the described numerical procedure, we were able to construct the condensate ψ₂/ρ^{∆/2} as a function of the temperature T/ρ^{1/2}. In figure 9 we provide the plot of the condensate for the model (1.5) with ∆ = 2, q = 1, α u_h = 0.5 (α in units of entropy density), m L = 1. For the same parameters we also plot, in figure 15, the initial conditions (−A_t'(u_h), ψ(u_h)) which we imposed to enable the vanishing source ψ₁ = 0.

A.2 Grand potential

Here we provide the intermediate steps of the calculation of the grand potential. The holographic prescription for the calculation of the grand potential is:

Ω = −T log Z = T S_E ,   (A.6)

where S_E is the Euclidean on-shell action of the bulk theory. This should be supplemented with the boundary Gibbons-Hawking term and the counter-terms 13. The resulting action reads:

S_E = I + I_GH + I_c.t. ,   (A.7)

where the boundary Gibbons-Hawking term is given by:

I_GH = −2 ∫ d³x √−h K |_{u=ε} ,   (A.8)

where ε is a UV cutoff, h_ab is the pullback metric on the boundary, and K_ab is the extrinsic curvature 14. The counter-term action I_c.t. is a sum of gravitational, scalar and axion field counter-terms [4,36]:

I_c.t. = − ∫ d³x √−h [ 4/L + (1/L) ψ² − 2m² L V ] |_{u=ε} .   (A.10)

It is convenient to evaluate the following Lagrangian on shell:

L̂ = L + 2 √−h K ,   (A.11)

to get:

I + I_GH = ∫ d⁴x L̂ − 2 ∫ d³x √−h K |_{u=u_H} .   (A.12)

After a straightforward calculation we obtain:

L̂ = B' ,   B = L² ( A_t A_t' e^{χ/2} − 4u^{−3} f e^{−χ/2} ) .   (A.13)

Notice that B(u_H) = 0. Therefore the full on-shell action is given by:

I + I_GH = ∫ d³x [ 4πL² T/u_H² − B(ε) ] .
(A.14)

To proceed with the calculation we need to evaluate the counter-term action (A.10) and the B(ε) term of (A.14), for which we need to know the near-boundary behavior of the fields. That is given by 15:

A_t = µ − ρ u + O(u²) ,   (A.15)
ψ = (ψ₁/L^{3−∆}) u^{3−∆} + (ψ₂/L^∆) u^∆ + O(u^{∆+1}) ,   (A.16)
f = 1 + γ₁ u² + γ₂ u³ + O(u⁴) ,   (A.17)
χ = ζ₁ u² + ζ₂ u³ + O(u⁴) ,   (A.18)
V = V₁ u² + O(u³) .   (A.19)

14 It is defined by K = ∇_µ n^µ, with n^µ = ( 0, 0, 0, u f(u)^{1/2}/L ) , (A.9) where n^µ is the unit vector normal to the boundary.

15 Note that this is true only if the potential reads V(X) = X + X^{n₁} + X^{n₂} + ..., where the smallest power is always equal to one.

We are interested in systems with a vanishing source for the charged scalar, ψ₁ = 0. By solving the equations of motion near the boundary, we obtain:

γ₁ = −m² L² V₁ .   (A.20)

Combining all the results together, we arrive at the final expression for the on-shell action:

S_E = ∫ d³x (1/16π) ( S T + 2L²γ₂ + L²µρ ) .   (A.21)

B On-shell action for fluctuations

The calculation of the on-shell action for the fluctuations is similar to the one for the grand potential performed in appendix A. The total action is the sum of the total bulk action (2.1), the Gibbons-Hawking (GH) term on the boundary (A.8), and the counter-term action (A.10):

I^f_tot = I^f_b + I^f_GH + I^f_c.t. .   (B.1)

We evaluate the action (B.1) on the ansatz

ds² = (L²/u²) [ −f(u) dt² + 2ε h_tx(u, t) dt dx + dx² + dy² + du²/f(u) ] ,
A = A_t dt + ε a_x(u, t) dx ,
φ_x = α x + ε ξ(u, t) ,   (B.2)
φ_y = α y ,
ψ = ψ(u) ,

and collect the O(ε²) terms, which describe the dynamics of the fluctuations h_tx, a_x, ζ. The O(ε) terms vanish due to the equations of motion satisfied by the background fields f, A_t, φ_{x,y}, ψ, and the O(ε⁰) terms are the contributions to the grand potential of the background. The GH term vanishes at the horizon. Therefore:

I^f = I^f_b + I^f_GH = I^f_b − 2 ∫ √−h K .
(B.3) We obtain: I f = L 2 e − χ 2 4u 4 f 2 −2f e χ 2uh tx (u, t) u 3 f A t ∂ u a x (u, t) + 2q 2 uA t ψ 2 a x (u, t) + 4f ∂ u h tx (u, t) + 2αL 2 m 2 u∂ t ξ(u, t)V + u 2 − u 2 (∂ t a x (u, t)) 2 + f (∂ t h tx (u, t)) 2 + 2L 2 m 2 (∂ t ξ(u, t)) 2V + h tx (u, t) 2 (−2 (uf + 3) + f u 2 ψ 2 + 2uχ − 6 + L 2 2m 2 V − α 2 u 2 V + M 2 ψ 2 − u 2 e 2χ h tx (u, t) 2 u 2 A 2 t + 2q 2 A 2 t ψ 2 − 2u 4 f 3 (∂ u a x (u, t)) 2 − 4q 2 u 2 f 2 ψ 2 a x (u, t) 2 − 4L 2 m 2 u 2 f 3 (∂ u ξ(u, t)) 2V (B.4) To proceed, we integrate the a 2 x , h 2 tx , ξ 2 terms by parts, and substitute expressions for a x , h tx , ξ from the corresponding fluctuation equations. We need to keep track of the boundary terms. Then let us go to the momentum space. As a result we arrive atĨ f = B f , where: B f = − L 2 e − χ 2 2u 3 e χ h tx (u, −ω) u 3 A t a x (u, ω) − uh tx (u, ω) + 4h tx (u, ω) + uf u 2 a x (u, −ω)a x (u, ω) + 2L 2 m 2 ξ(u, −ω)ξ (u, ω)V , (B.5) where prime, as before, stands for a derivative w.r.t. u. The counter-term action (A.10) for the ansatz (B.2) is given by: where again not all the coefficients of expansion are independent, and in fact: ξ (1) (ω) = i 2V 1 m 2 αρω 2γ 1 a (2) x (ω) + 2V 1 m 2 α 2 ρ h (1) tx (ω) + ω 2 a (2) x (ω) , (B.10) ξ (2) (ω) = iω 4m 2 V 1 αρ 2γ 1 + ω 2 a (2) x (ω) (B.11)I f tot = a (1) x (−ω)a (2) x (ω) − ρ a (1) x h (1) tx (−ω) + 2γ 2 h (1) tx (−ω)h (1) tx (ω) − 3h (1) tx (−ω)h (3) tx (ω) + 6m 2 V 1 ξ (1) (−ω)ξ (3) (ω) . (B.14) where we have kept ξ (1) , for brevity (but keep in mind it is not an independent expansion coefficient, due to (B.10)). It is convenient to replace ξ → Z, so that we are dealing with two fields, (a x , Z), which have the same near-boundary expansion, at least up to the first two orders. Due to (6.5), we obtain: Z(u, ω) = f (u) iωαu ξ (u, ω) , (B.15) which near the boundary becomes: Z(u, ω) = Z (1) (ω) + Z (2) (ω) u + · · · = − 2i αω ξ (2) (ω) − 3i αω ξ (3) (ω) u + . . . . 
(B.16)

We can represent I^f_tot in a form convenient for the calculation of the correlation matrix:

I^f_tot = ( a⁽¹⁾_x(−ω), Z⁽¹⁾(−ω) ) M ( a⁽²⁾_x(−ω), Z⁽²⁾(−ω) )ᵀ + · · · ,   (B.17)

where the dots denote ξ⁽¹⁾ terms. We cannot extract ξ⁽¹⁾ by solving the system of equations for (a_x, Z), because Z ∼ ξ'. So we treat ξ⁽¹⁾ as a constant of integration, which we fix to be:

ξ⁽¹⁾(ω) = ( i(1 + √2)(ω² − 2m²α²V₁) / (2m²αρω V₁) ) a⁽²⁾_x(ω) ,   (B.18)

which is the choice enabling a diagonal matrix M. Let us rescale the fluctuation fields (this is a symmetry transformation of the fluctuation equations):

( a_x , Z ) → ( a_x , Z / ( 1 − 2√2 + √2 ω²/(m²α²V₁) ) ) .   (B.19)

The corresponding matrix is:

M = diag( 1 , 2m²α²V₁ / ( 1 − 2√2 + √2 ω²/(m²α²V₁) ) ) .   (B.20)

For the purpose of finding the AC conductivity we only need the (a_x, a_x) component of the correlation matrix.

Figure 1. Schematic phase diagram of a real cuprate high-Tc superconductor.

Figure 2. Left: Optical conductivity in the normal phase for (1.5) (α = √2, m² = 0.025, ρ = 1) with temperature running from T = 0.04 (black line) to T = 1 (red line). Right: DC conductivity as a function of temperature for the same model and parameters.

Figure 3. Region plots for the model (1.5) in the normal phase. We choose units where the density is ρ = 1. Here we have fixed α = 1 (left plot) and m = 1 (right plot). The blue region is pseudo-insulating, dσ_DC/dT > 0; the green region is metallic, dσ_DC/dT < 0.

Figure 4. Region and contour plots for the model (1.4) with the linear potential for the neutral scalars (dimensional quantities are measured in units of the charge density ρ). We choose α = 2. The region of (∆, q) satisfying the IR instability condition (4.6) is shaded in grey. The red dot is centered around (q_d, ∆_d) = (0.6, 2.74). These tuned (q, ∆) confine the superconducting phase of the model (1.4) into a dome region, as we discuss in Section 7.
Notice the proximity of the red dot to the boundary of the IR instability region, resulting in T_c(q_d, ∆_d) being very small.

Figure 5. Region and contour plots for the model (1.5) with the non-linear potential for the neutral scalars, with α = 0.25, m = 4 (dimensional quantities are measured in units of the charge density ρ). The region of (∆, q) satisfying the IR instability condition (4.6) is shaded. The blue dot on the plot has coordinates (q_d, ∆_d) = (0.6, 2.74). These tuned (q, ∆) confine the superconducting phase of the model (1.5) into a dome region, as we discuss in Section 7. Notice the proximity of the blue dot to the boundary of the IR instability region, resulting in T_c(q_d, ∆_d) being very small.

Figure 6. Critical temperature as a function of α for the model (1.6). Left: q = 1, ∆ = 2. Right: q = 3, ∆ = 2.

Figure 7. Critical temperature as a function of ∆ for the model (1.6). Left: (α = 1, q = 2). Right: (α = 1, q = 0.6).

Figure 9. Condensate for the ∆ = 2, q = 1, α u_h = 0.5, m L = 1 model with V = z + z⁵, and the corresponding grand potential for the two phases.

Figure 10. Left: The T = 0 contour plots of the IR instability lines, for the model (1.5) with α = 0.7 and various choices of m. Right: Phase diagram for the model (1.5) at the point q = 4, ∆ = 3 with α = 0.7. We have chosen units ρ = 1.

Figure 12. Searching for the dome for the model (1.5) with α = 0.25 at vanishing temperature. From the left graph we conclude that in order to have the superconducting dome we need to choose ∆ and q from a small vicinity of the point (7.2). In the right graph we have verified explicitly the IR instability of the model with ∆ = 2.75, q = 0.61, between two finite values of m.

Figure 13.
Top: The D = 0 contours for the model (1.5) with α = 0.5; Bottom Left: BF bound violation for ∆ = 2.728, q = 0.61; Bottom Right: Full phase diagram for the model. The green region is a normal pseudo-insulating phase, the grey region is a normal metallic phase, and the red region is a superconducting phase.

Figure 14. Top: The D = 0 contours for the model (1.4); Bottom Left: BF bound violation for ∆ = 2.745, q = 0.6; Bottom Right: Full phase diagram for the model. The orange region is a superconductor, the lettuce-green region is a metal.

Figure 15. Condensate for the ∆ = 2, q = 1, α u_h = 0.5, m L = 1 model with V = z + z⁵, and the corresponding imposed near-horizon data (for χ(u_h − ε) = 1).

2L⁴m²u²f² V' ξ'(u, ω) ξ'(u, −ω) + L² e^χ h_tx(u, −ω) h_tx(u, ω) ( 2L²m²V − ψ² − 4 ) − 2L²m²u²V' ω² ξ(u, −ω) ξ(u, ω) + α h_tx(u, −ω) ( α h_tx(u, ω) + 2iω ξ(u, ω) ) ,   (B.6)

evaluated at u = 0. Now let us evaluate (B.6) minus (B.5) at u = 0, which gives I^f_tot. Consider the case ∆ = 2. First we need to solve the fluctuation equations near the boundary. We have already determined the near-boundary asymptotics (A.17) for the background fields; in the superconducting phase we have ψ₁ = 0. Besides, from the equations of motion one obtains:

γ₁ = −V₁ m² L² α² .   (B.7)

Similarly, the fluctuation equations of motion near the boundary give:

ξ(u, ω) = ξ⁽¹⁾(ω) + ξ⁽²⁾(ω) u² + ξ⁽³⁾(ω) u³ ,   a_x(u) = a⁽¹⁾_x + a … 6iV₁m²α ξ⁽³⁾(ω) .   (B.13)

Using these asymptotic expansions, evaluating I^f_c.t. − B^f at u = 0 gives 16:
When V(X) = X/2m², we recover the equations of [5]. One easy way to enable σ_DC(T = 0) = 0 is adding a dilaton field to the action [26], which allows one to get a "real" insulating state. See also [27] for an alternative approach. In the case V(X) = X/2m² these equations reduce to the fluctuation equations obtained in [5]. See also [31] for an example of the calculation of the correlation matrix in a different system of two coupled fluctuations. Being more specific, it seems that there exists a lower bound for ∆ below which no SC dome can be built within this model. It would be nice to understand this bound better.

4. By tuning the values of the scaling dimension ∆ and the charge q of the scalar field, which is the bulk dual to the charge condensate of the boundary superconductor, one can obtain a system which exists in a superconducting phase enclosed in a dome region. The critical temperature of the dome is very small, and in fact appears to be too hard to calculate numerically. The superconducting dome exists for both the linear and the non-linear models. In the case of the model with the linear Lagrangian for the neutral scalars, the superconducting dome exists in the middle of the normal metallic phase. In its non-linear extension, instead, the dome exists between a pseudo-insulating phase at smaller values of the doping and a metallic phase at larger values of the doping. This is the closest example we have to the real phase diagram of the high-Tc superconductors. We are aware of only one holographic example which shows the same kind of features, in a rather different setup [34].

Equivalently, one can calculate the difference of the grand potentials of the two phases, in order to avoid adding the counter-terms. This is in agreement with eq. (3.14) of [5]. Note that only the leading linear term V₁ in the near-boundary expansion of V(z) matters in this formula.

2 Setting up the model

In this section we introduce the model which we will be studying in this paper. We begin by writing down the action and the equations of motion of the bulk theory. We proceed by deriving the equations of motion for the general ansatz describing the charged black brane geometry, with linearly-dependent sources for φ^I and a radially dependent charged scalar ψ(u). Then we review the normal-state solution of the model, which has a trivial charged scalar field profile ψ ≡ 0.

Acknowledgements

We would like to thank Richard Davison, Daniel Arean, Siavash Golkar, Gary Horowitz, Keun-Young Kim, Rene Meyer, Eun-Gook Moon, Nick Poovuttikul and Matthew Roberts for valuable discussions and comments. We would like to thank Oriol Pujolás for initial collaboration on the project and for insights about the superconducting dome. MB acknowledges support from MINECO under grant FPA2011-25948, DURSI under grant 2014SGR1450 and the Centro de Excelencia Severo Ochoa program, grant SEV-2012-0234. The work of MG was supported by the Oehme Fellowship.

A Condensate and grand potential

The aim of this appendix is to provide more details about the computations and the numerical procedures used in Section 5.

A.1 Condensate

In this subsection we outline the routine to obtain the numerical solution of the equations of motion (2.7)-(2.10) for the whole superconducting background. First of all, evaluating the equations (2.7)-(2.10) at u = u_h, we can express part of the near-horizon data in terms of the remaining free initial conditions. Therefore we impose the initial conditions at u_h − ε in the following way:

(A.1)

where ε is a small IR cutoff. One can solve the equations of motion near the horizon to arbitrary order in ε. We have found that imposing (A.1) is sufficient.
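The routine just described is a shooting method. The toy below mimics only its structure, under loudly flagged assumptions: a single linear ODE stands in for the coupled system (2.7)-(2.10), the "horizon" data are imposed at u = 1, and the role of the vanishing source ψ₁ = 0 is played by the boundary value of the toy field. Equation and names are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def source(lam):
    """Integrate psi'' = -lam*psi from the 'horizon' u = 1 (psi = 1, psi' = 0)
    to the 'boundary' u = 0 and return the source-like boundary value psi(0)."""
    sol = solve_ivp(lambda u, y: [y[1], -lam * y[0]],
                    (1.0, 0.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# For this toy psi(u) = cos(sqrt(lam)*(1 - u)), so the source first vanishes
# at sqrt(lam) = pi/2; the root finder plays the role of the scan over the
# free horizon datum described above.
lam_star = brentq(source, 1.0, 4.0)
```

In the real calculation the scanned datum is A_t'(u_h) at fixed ψ(u_h), and the condensate is then read off from the subleading boundary coefficient of ψ.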
We begin by writing down the action and equations of motion of the bulk theory. We proceed by deriving the equations of motion for the general ansatz describing the charged black-brane geometry, with linearly-dependent sources for the φ_I and a radially dependent charged scalar ψ(u). We then review the normal-state solution of the model, which has a trivial charged scalar field profile ψ ≡ 0.

Acknowledgements

We would like to thank Richard Davison, Daniel Arean, Siavash Golkar, Gary Horowitz, Keun-Young Kim, Rene Meyer, Eun-Gook Moon, Nick Poovuttikul and Matthew Roberts for valuable discussions and comments. We would like to thank Oriol Pujolás for initial collaboration on the project and for insights about the superconducting dome. MB acknowledges support from MINECO under grant FPA2011-25948, DURSI under grant 2014SGR1450 and the Centro de Excelencia Severo Ochoa program, grant SEV-2012-0234. The work of MG was supported by the Oehme Fellowship.

A Condensate and grand potential

The aim of this appendix is to provide more details about the computations and the numerical procedures of Section 5.

A.1 Condensate

In this subsection we outline the routine used to obtain the numerical solution of the equations of motion (2.7)-(2.10) for the whole superconducting background. First of all, evaluating the equations (2.7)-(2.10) at u = u_h, we can express some of the horizon values of the fields in terms of the others. Therefore we impose the initial conditions at u_h − ε as in (A.1), where ε is a small IR cutoff. One can solve the equations of motion near the horizon to arbitrary order in ε. We have found that imposing (A.1) is sufficient.

References

[1] H. B. Zeng and J. P. Wu, "Holographic superconductors from the massive gravity," Phys. Rev. D 90, no. 4, 046001 (2014) [arXiv:1404.5321 [hep-th]].
[2] D. Arean, A. Farahi, L. A. Pando Zayas, I. S. Landea and A. Scardicchio, "Holographic p-wave Superconductor with Disorder," arXiv:1407.7526 [hep-th].
[3] Y. Ling, P. Liu, C. Niu, J. P. Wu and Z. Y. Xian, "Holographic Superconductor on Q-lattice," arXiv:1410.6761 [hep-th].
[4] T. Andrade and S. A. Gentle, "Relaxed superconductors," arXiv:1412.6521 [hep-th].
[5] K. Y. Kim, K. K. Kim and M. Park, "A simple holographic superconductor with momentum relaxation," arXiv:1501.00446 [hep-th].
[6] J. Erdmenger, B. Herwerth, S. Klug, R. Meyer and K. Schalm, "S-Wave Superconductivity in Anisotropic Holographic Insulators," arXiv:1501.07615 [hep-th].
[7] S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, "Building a Holographic Superconductor," Phys. Rev. Lett. 101, 031601 (2008) [arXiv:0803.3295 [hep-th]].
[8] S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, "Holographic Superconductors," JHEP 0812, 015 (2008) [arXiv:0810.1563 [hep-th]].
[9] D. Vegh, "Holography without translational symmetry," arXiv:1301.0537 [hep-th].
[10] V. A. Rubakov and P. G. Tinyakov, "Infrared-modified gravities and massive gravitons," Phys. Usp. 51, 759 (2008) [arXiv:0802.4379 [hep-th]].
[11] T. Andrade and B. Withers, "A simple holographic model of momentum relaxation," JHEP 1405, 101 (2014) [arXiv:1311.5157 [hep-th]].
[12] R. A. Davison, "Momentum relaxation in holographic massive gravity," Phys. Rev. D 88, 086003 (2013) [arXiv:1306.5792 [hep-th]].
[13] M. Blake, D. Tong and D. Vegh, "Holographic Lattices Give the Graviton an Effective Mass," Phys. Rev. Lett. 112, no. 7, 071602 (2014) [arXiv:1310.3832 [hep-th]].
[14] M. Blake and D. Tong, "Universal Resistivity from Holographic Massive Gravity," Phys. Rev. D 88, no. 10, 106004 (2013) [arXiv:1308.4970 [hep-th]].
[15] R. A. Davison, K. Schalm and J. Zaanen, "Holographic duality and the resistivity of strange metals," Phys. Rev. B 89, no. 24, 245116 (2014) [arXiv:1311.2451 [hep-th]].
[16] M. Baggioli and O. Pujolas, "Holographic Polarons, the Metal-Insulator Transition and Massive Gravity," arXiv:1411.1003 [hep-th].
[17] M. Taylor and W. Woodhead, "Inhomogeneity simplified," Eur. Phys. J. C 74, no. 12, 3176 (2014) [arXiv:1406.4870 [hep-th]].
[18] S. N. Klimin and J. T. Devreese, "Optical conductivity of a strong-coupling Frohlich polaron," Phys. Rev. B 89, 035201 (2014) [arXiv:1310.4413 [cond-mat.str-el]].
[19] G. T. Horowitz and M. M. Roberts, "Holographic Superconductors with Various Condensates," Phys. Rev. D 78, 126008 (2008) [arXiv:0810.1077 [hep-th]].
[20] P. A. Lee, N. Nagaosa and X.-G. Wen, "Doping a Mott insulator: physics of high temperature superconductivity," Rev. Mod. Phys. 78, 17 (2006) [arXiv:cond-mat/0410445].
[21] A. Nicolis, R. Penco and R. A. Rosen, "Relativistic Fluids, Superfluids, Solids and Supersolids from a Coset Construction," Phys. Rev. D 89, no. 4, 045002 (2014) [arXiv:1307.0517 [hep-th]].
[22] H. Leutwyler, "Nonrelativistic effective Lagrangians," Phys. Rev. D 49, 3033 (1994) [hep-ph/9311264].
[23] S. Dubovsky, T. Gregoire, A. Nicolis and R. Rattazzi, "Null energy condition and superluminal propagation," JHEP 0603, 025 (2006) [hep-th/0512260].
[24] H. Leutwyler, "Phonons as Goldstone bosons," Helv. Phys. Acta 70, 275 (1997) [hep-ph/9609466].
[25] A. Karch and A. O'Bannon, "Metallic AdS/CFT," JHEP 0709, 024 (2007) [arXiv:0705.3870 [hep-th]].
[26] B. Goutéraux, "Charge transport in holography with momentum dissipation," JHEP 1404, 181 (2014) [arXiv:1401.5436 [hep-th]].
[27] M. Baggioli and O. Pujolas, to appear.
[28] F. Denef and S. A. Hartnoll, "Landscape of superconducting membranes," Phys. Rev. D 79, 126008 (2009) [arXiv:0901.1160 [hep-th]].
[29] I. Amado, M. Kaminski and K. Landsteiner, "Hydrodynamics of Holographic Superconductors," JHEP 0905, 021 (2009) [arXiv:0903.2209 [hep-th]].
[30] M. Kaminski, K. Landsteiner, J. Mas, J. P. Shock and J. Tarrio, "Holographic Operator Mixing and Quasinormal Modes on the Brane," JHEP 1002, 021 (2010) [arXiv:0911.3610 [hep-th]].
[31] M. Goykhman, A. Parnachev and J. Zaanen, "Fluctuations in finite density holographic quantum liquids," JHEP 1210, 045 (2012) [arXiv:1204.6232 [hep-th]].
[32] R. A. Davison and B. Goutéraux, "Momentum dissipation and effective theories of coherent and incoherent transport," JHEP 1501, 039 (2015) [arXiv:1411.1062 [hep-th]].
[33] A. Amoretti, A. Braggio, N. Magnoli and D. Musso, "Bounds on intrinsic diffusivities in momentum dissipating holography," arXiv:1411.6631 [hep-th].
[34] S. Ganguli, J. A. Hutasoit and G. Siopsis, "Superconducting Dome from Holography," Phys. Rev. D 87, no. 12, 126003 (2013) [arXiv:1302.5426 [cond-mat.str-el]].
[35] J. Erdmenger, P. Kerner and S. Muller, "Towards a Holographic Realization of Homes' Law," JHEP 1210, 021 (2012) [arXiv:1206.5305 [hep-th]].
[36] V. Balasubramanian and P. Kraus, "A Stress tensor for Anti-de Sitter gravity," Commun. Math. Phys. 208, 413 (1999) [hep-th/9902121].
[ "V Krishnan [email protected] \nSchool of Physical Sciences\nUniversity of Tasmania\nPrivate Bag 377001HobartTasmaniaAustralia\n\nCSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA\n", "S P Ellingsen \nSchool of Physical Sciences\nUniversity of Tasmania\nPrivate Bag 377001HobartTasmaniaAustralia\n", "M J Reid ", "A Brunthaler \nMax-Plank-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany\n", "A Sanna \nMax-Plank-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany\n", "J Mccallum \nSchool of Physical Sciences\nUniversity of Tasmania\nPrivate Bag 377001HobartTasmaniaAustralia\n", "C Reynolds \nInternational Centre for Radio Astronomy Research\nCurtin University\nBuilding 610, 1 Turner Avenue6102BentleyWAAustralia\n", "H E Bignall \nInternational Centre for Radio Astronomy Research\nCurtin University\nBuilding 610, 1 Turner Avenue6102BentleyWAAustralia\n", "C J Phillips \nCSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA\n", "R Dodson \nInternational Centre for Radio Astronomy Research, The University of Western Australia (M468)\n35 Stirling Highway6009CrawleyWAAustralia\n", "M Rioja \nInternational Centre for Radio Astronomy Research, The University of Western Australia (M468)\n35 Stirling Highway6009CrawleyWAAustralia\n\nObservatorio Astronómico Nacional (IGN)\nAlfonso XII, 3 y 5, E-, Spain 8. Shanghai Astronomical Observatory, 80 Nandan Road28014, 200030Madrid, ShanghaiChina\n", "J L Caswell \nCSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. 
Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA\n", "X Chen ", "J R Dawson \nCSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA\n\nDepartment of Physics and Astronomy and MQ Research Centre in Astronomy, Astrophysics and Astrophotonics\n10. Department of Physics\nFaculty of Science\nMacquarie University\n2109NSWAustralia\n\nYamaguchi University\nYoshida 1677-1, Yamaguchi-city, Japan 11. Hartebeesthoek Radio Astronomical ObservatoryPO Box 443753-8512, 1740Yamaguchi, KrugersdorpSouth Africa\n", "K Fujisawa ", "S Goedhart ", "J A Green \nCSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA\n\n13. Mizusawa VLBI Observatory\nDepartment of Astronomical Science\nSKA Organisation, Jodrell Bank Observatory, Lower Withington\nNational Astronomical Observatory of Japan &\nSK11 9DLMacclesfieldUK\n\n14. Purple Mountain Observatory\n15. Department of Astronomy\nThe Graduate University for Advanced Study\nChinese Academy of Sciences\n181-8588, 210008NanjingMitakaJapan, China\n\n16. SKA South Africa\nNanjing University\n3 rd Floor, The Park, Park Road, Pinelands210093, 7405NanjingChina, South Africa\n", "K Hachisuka ", "M Honma ", "K Menten \nMax-Plank-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany\n", "Z Q Shen ", "M A Voronkov \nCSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA\n", "A J Walsh \nInternational Centre for Radio Astronomy Research\nCurtin University\nBuilding 610, 1 Turner Avenue6102BentleyWAAustralia\n", "Y Xu ", "B Zhang ", "X W Zheng " ]
[ "School of Physical Sciences\nUniversity of Tasmania\nPrivate Bag 377001HobartTasmaniaAustralia", "CSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA", "School of Physical Sciences\nUniversity of Tasmania\nPrivate Bag 377001HobartTasmaniaAustralia", "Max-Plank-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany", "Max-Plank-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany", "School of Physical Sciences\nUniversity of Tasmania\nPrivate Bag 377001HobartTasmaniaAustralia", "International Centre for Radio Astronomy Research\nCurtin University\nBuilding 610, 1 Turner Avenue6102BentleyWAAustralia", "International Centre for Radio Astronomy Research\nCurtin University\nBuilding 610, 1 Turner Avenue6102BentleyWAAustralia", "CSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA", "International Centre for Radio Astronomy Research, The University of Western Australia (M468)\n35 Stirling Highway6009CrawleyWAAustralia", "International Centre for Radio Astronomy Research, The University of Western Australia (M468)\n35 Stirling Highway6009CrawleyWAAustralia", "Observatorio Astronómico Nacional (IGN)\nAlfonso XII, 3 y 5, E-, Spain 8. Shanghai Astronomical Observatory, 80 Nandan Road28014, 200030Madrid, ShanghaiChina", "CSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA", "CSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. 
Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA", "Department of Physics and Astronomy and MQ Research Centre in Astronomy, Astrophysics and Astrophotonics\n10. Department of Physics\nFaculty of Science\nMacquarie University\n2109NSWAustralia", "Yamaguchi University\nYoshida 1677-1, Yamaguchi-city, Japan 11. Hartebeesthoek Radio Astronomical ObservatoryPO Box 443753-8512, 1740Yamaguchi, KrugersdorpSouth Africa", "CSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA", "13. Mizusawa VLBI Observatory\nDepartment of Astronomical Science\nSKA Organisation, Jodrell Bank Observatory, Lower Withington\nNational Astronomical Observatory of Japan &\nSK11 9DLMacclesfieldUK", "14. Purple Mountain Observatory\n15. Department of Astronomy\nThe Graduate University for Advanced Study\nChinese Academy of Sciences\n181-8588, 210008NanjingMitakaJapan, China", "16. SKA South Africa\nNanjing University\n3 rd Floor, The Park, Park Road, Pinelands210093, 7405NanjingChina, South Africa", "Max-Plank-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany", "CSIRO Astronomy and Space Science\nTelescope National Facility, CSIRO\nAustralia 3. Harvard-Smithsonian Center for AstrophysicsPO Box 761710, 02138Epping, CambridgeNSW, MassachusettsAustralia, USA", "International Centre for Radio Astronomy Research\nCurtin University\nBuilding 610, 1 Turner Avenue6102BentleyWAAustralia" ]
Abstract: We have conducted the first parallax and proper motion measurements of 6.7 GHz methanol maser emission using the Australian Long Baseline Array (LBA). The parallax of G 339.884−1.259 measured from five epochs of observations is 0.48±0.08 mas, corresponding to a distance of 2.1 +0.4 −0.3 kpc, placing it in the Scutum spiral arm. This is consistent (within the combined uncertainty) with the kinematic distance estimate for this source at 2.5±0.5 kpc using the latest Solar and Galactic rotation parameters. We find from the Lyman continuum photon flux that the embedded core of the young star is of spectral type B1, demonstrating that luminous 6.7 GHz methanol masers can be associated with high-mass stars towards the lower end of the mass range.
DOI: 10.1088/0004-637x/805/2/129
PDF: https://arxiv.org/pdf/1503.05917v1.pdf
Corpus ID: 55103778
arXiv: 1503.05917
SHA: 887c7fd790a86b35a433976ff474a9ad64d0fc12
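The distance quoted in the abstract follows from inverting the parallax, d[kpc] = 1/π[mas]. A minimal sketch of the conversion, including the asymmetric uncertainties that inverting π ± σ produces (the numbers are taken from the measurement above; the snippet is illustrative, not part of the paper's analysis):

```python
# Convert a trigonometric parallax (mas) to a distance (kpc): d = 1/pi.
def parallax_to_distance(pi_mas, sigma_mas):
    d = 1.0 / pi_mas
    # Inverting pi +/- sigma gives asymmetric error bars on d.
    d_hi = 1.0 / (pi_mas - sigma_mas)  # larger distance
    d_lo = 1.0 / (pi_mas + sigma_mas)  # smaller distance
    return d, d_hi - d, d - d_lo

d, plus, minus = parallax_to_distance(0.48, 0.08)
print(f"d = {d:.1f} +{plus:.1f} -{minus:.1f} kpc")  # d = 2.1 +0.4 -0.3 kpc
```

This reproduces the 2.1 +0.4 −0.3 kpc quoted above, and shows why parallax distances always carry a larger upper than lower error bar.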
19 Mar 2015

Received; accepted.

Subject headings: astrometry - Galaxy: kinematics and dynamics - masers - stars: formation

1. Introduction

To properly understand the Milky Way's scale and shape, as well as the physical properties of the objects within it, including their mass, luminosity, ages and orbits, we need to be able to accurately measure distances to Galactic sources. Unfortunately, this fundamental attribute is one of the most difficult measurements to make. Common techniques include the use of a Galactic rotation model, or "standard candles", where the distances to objects with known luminosities can be determined from their observed brightness. These indirect methods of measurement can result in large uncertainties unless an accurate, reliable and robust model is available. For example, Xu et al. (2006) found that the kinematic distance was a factor of 2 greater than the parallax distance to W3(OH), due to the peculiar velocity of the source. Uncertainties and errors in distance determination naturally propagate into the estimation of other important properties, such as luminosity (L ∝ D^2) and mass (L ∝ M^a, where 3 < a < 4 for main sequence stars), and hence there is a need to constrain any errors in distance as much as possible. One of the most direct methods to determine distances beyond our solar system is through the use of trigonometric parallax. A decade ago, Honma et al.
(2004) used the VLBI Exploration of Radio Astrometry (VERA) array to conduct high precision astrometry on the water maser pair W49N/OH43.8−0.1 and achieved an accuracy of 0.2 mas. This success demonstrated the feasibility of performing maser astrometry on Galactic scales, and subsequently several groups obtained parallax results for masers and their associated star formation regions (Hachisuka et al. 2006; Xu et al. 2006; Honma et al. 2007). Following this, the BeSSeL project was launched as a comprehensive Northern hemisphere survey to measure accurate distances to high-mass star formation regions (HMSFR) and associated Hii regions of our Milky Way galaxy using trigonometric parallax (Reid et al. 2009a; Moscadelli et al. 2009; Xu et al. 2009; Zhang et al. 2009; Brunthaler et al. 2009; Reid et al. 2009b; Sanna et al. 2009; Brunthaler et al. 2011). This is being undertaken using the National Radio Astronomy Observatory's (NRAO) Very Long Baseline Array (VLBA) to measure position changes of methanol and water masers in the Galactic disk with respect to distant background quasars. Phase referencing of the quasar and maser emission data to one another (Alef 1988; Beasley & Conway 1995; Reid & Honma 2014), combined with careful calibration of atmospheric effects, allows the change in the relative separation between the masers and quasars to be accurately measured. Repeated observations, timed to maximise the measured amplitude of the parallax signature, can determine the parallax to an accuracy of up to 10 µarcsec and simultaneously determine the source proper motions to ∼1 µarcsec year−1 (Reid & Honma 2014). Thus far the combination of all astrometric VLBI observations (including the European VLBI Network (EVN)) (e.g. Reid & Honma 2014; Chibueze et al. 2014; Imai et al. 2013; Honma et al. 2012; Sakai et al. 2012; Rygl et al. 2010
, 2008) has yielded more than 100 parallax and proper motion measurements for star forming regions across large portions of the Milky Way visible from the Northern hemisphere. Having determined the position at a reference epoch, together with the parallax and proper motion of the masers, the complete three-dimensional location and velocity vectors of these sources relative to the Sun can be found. These measurements have been used to fit a rotating disk model of the Milky Way and to refine the best-fit Galactic parameters, finding the circular rotation speed of the Sun, Θ0 = 240 ± 8 km s−1, and the distance to the Galactic centre, R0 = 8.34 ± 0.16 kpc. To date, parallax distances to masers have only been measured using Northern hemisphere VLBI arrays, and therefore the sources for which accurate distances have been measured are heavily concentrated towards objects in the first and second quadrants of the Galaxy. The compiled sources with trigonometric parallax measurements predominantly lie within the Galactic longitude range 0° < l < 240°. There are two sources within the fourth quadrant of the Galaxy, with the most southerly of these at a declination of −39°. Sources at southerly declinations only reach low elevations for Northern hemisphere antennas, and for these, the measurement error in the zenith tropospheric delay estimate significantly degrades the astrometric accuracy and the corresponding parallax measurement (Honma et al. 2008). Therefore, the updated estimates of fundamental Galactic parameters such as Θ0 and R0 are based on data restricted to only a little over half the total Galactic longitude range. In order to ensure that models of Galactic structure and rotation are reliable, they must be derived from a more uniformly sampled distribution, including sources in the third and fourth quadrants.
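The parallax fitting described above can be illustrated with a toy one-dimensional version: the maser-quasar angular offset is modelled as a linear proper-motion trend plus the annual parallax signature, and solved by linear least squares. This is a sketch, not the BeSSeL pipeline: the pure-cosine parallax factor is a simplification (the true factor depends on the source's coordinates and the Earth's orbit), and all numbers are synthetic.

```python
import numpy as np

# Toy model: offset(t) = x0 + mu*t + pi * f(t), with f(t) the parallax factor.
# A pure cosine with a 1-yr period stands in for the true parallax factor.
def parallax_factor(t_yr):
    return np.cos(2.0 * np.pi * t_yr)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 12)                # epochs over two years
true_x0, true_mu, true_pi = 0.3, -1.2, 0.48  # mas, mas/yr, mas (synthetic)
obs = true_x0 + true_mu * t + true_pi * parallax_factor(t)
obs += rng.normal(0.0, 0.02, t.size)         # ~20 uas measurement noise

# Design matrix for the three linear parameters (x0, mu, pi).
A = np.column_stack([np.ones_like(t), t, parallax_factor(t)])
x0, mu, pi_fit = np.linalg.lstsq(A, obs, rcond=None)[0]
print(f"pi = {pi_fit:.2f} mas, mu = {mu:.2f} mas/yr")
```

Because the model is linear in all three parameters, a single least-squares solve recovers them; scheduling epochs near the extrema of the parallax factor, as described in the text, maximises the lever arm on pi.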
Distances to southern maser sources and their associated Hii regions have exclusively been determined via indirect methods, such as kinematic distance estimates (e.g. Caswell et al. 1975, 2010), with the distance ambiguity being resolved with Hi absorption against Hii continuum (Jones & Dickey 2012; Jones et al. 2013), Hi self-absorption (HiSA) (Green & McClure-Griffiths 2011) and radio recombination lines (Sewilo et al. 2004). However, there are issues which can produce significant errors in these techniques, including potentially large peculiar motions, broad spectral line profiles, and the as yet unknown relationship between the cold Hi distribution and the maser-associated component in HiSA emission. In 2008 a project was initiated to precisely measure the positions of maser sources relative to extragalactic quasars using the Australian Long Baseline Array (LBA). The aim of this ongoing project is to determine parallax distances to 30 prominent southern HMSFRs. Current maser parallax measurements made with the VLBA and VERA have primarily targeted either 12.2 GHz methanol masers or water masers. However, not all LBA antennas have receivers capable of observing at 12.2 GHz, and many of the LBA antennas have significantly poorer sensitivity at 22 GHz than at frequencies less than 10 GHz. Hence the 6.7 GHz class II methanol maser transition was considered to provide the best targets for parallax observation with the LBA. These masers have strong, stable and compact emission over the timescales required for parallax measurements, and are well sampled from the Southern hemisphere, with close to 1000 sources documented in the Methanol Multibeam (MMB) survey (Green et al. 2010; Caswell et al. 2011; Green et al. 2012), a sensitive, unbiased search for 6.7 GHz methanol masers in the southern Galactic plane. Here we present the first trigonometric parallax measurements of a southern 6.7 GHz methanol maser.

2. Overview of G 339.884−1.259

G 339.884−1.259 is one of the strongest (1520 Jy at −38.7 km s−1; Caswell et al. 2011) 6.7 GHz methanol masers and has been well studied in a range of maser and thermal molecular transitions (e.g. Norris et al. 1993; De Buizer et al. 2002; Ellingsen et al. 2004, 2011) and in continuum emission at a range of wavelengths (e.g. Ellingsen et al. 1996; Walsh et al. 1998; Ellingsen et al. 2005). The autocorrelation spectrum of the 6.7 GHz methanol maser emission of this source from the 2013 June epoch of observation is shown in Figure 1. Its peak flux density and spectral profile have been relatively stable over the last 20 years, and interferometric observations of the masers indicate several compact features (Norris et al. 1993; Ellingsen et al. 1996; Caswell et al. 2011), making it a suitable candidate for phase-referenced astrometry. Previous estimates of the distance to G 339.884−1.259 have utilised the kinematic distance method (e.g. Caswell & Haynes 1983; Green & McClure-Griffiths 2011), which yields a kinematic heliocentric distance of between 2.5 and 3.0 kpc (depending on the assumed Galactic parameters). The coordinates used for G 339.884−1.259 in the current observations are given in Section 5.3, with updated coordinates presented in Table 1. Methanol masers in G 339.884−1.259 were first observed by Norris et al. (1987) at 12.2 GHz, and high resolution synthesis images were made of the emission at 6.7 and 12.2 GHz by Norris et al. (1993). Interestingly, the maser emission was found to have a linear spatial distribution with a corresponding monotonic velocity gradient. In light of this, G 339.884−1.259 became a prime candidate for modelling the conditions of high-mass star formation (Ellingsen et al. 1996; Norris et al. 1998; Phillips et al. 1998; Walsh et al. 1998; De Buizer et al. 2002; Dodson 2008). Ellingsen et al. (1996) proposed that the masers are located within a circumstellar disk, and Norris et al.
(1998) showed that the emission fit a model of a Keplerian disk around an OB-type star. However, these claims were disputed by De Buizer et al. (2002), who demonstrated that what was initially thought to be a single circumstellar disk could be resolved into three mid-infrared sources near the location of the methanol masers. Dodson (2008) attempted to test the hypothesis of a circumstellar disk in G 339.884−1.259 by making polarization measurements of the magnetic fields associated with this source. From their images, they report a primarily disordered field accompanying much of the emission, and proposed that this matches the expectations for masers associated with an outflow-related shock. One small region of methanol maser emission does show a magnetic field direction perpendicular to the elongation of the maser emission, suggestive of a disk (Dodson 2008). Detections of multiple methanol maser transitions towards the same source place constraints on the environmental conditions in the region surrounding the young high-mass star where the masers arise. This is because any model will need to account for the specific conditions required for each observed transition (Cragg et al. 1992; Sobolev et al. 1997). Norris et al. (1993) made high-resolution maps of the 6.7 and 12.2 GHz maser spots in G 339.884−1.259 and found little evidence for spatial coincidence between the spots at the different frequencies. Caswell et al. (2000) made observations of 107.0 and 156.6 GHz methanol masers in G 339.884−1.259 and found the emission peaks at these frequencies coincided with the 6.7 GHz maser site to within 5″. These are extremely rare transitions, with only 22 and 4 detections respectively from a pool of 80 sources. Ellingsen et al. (2004) discovered emission from the 19.9 GHz transition in G 339.884−1.259, and in Krishnan et al.
(2013) it was shown that there was little evidence for spatial or velocity coincidence between the masers at 6.7 and 19.9 GHz, with only 2 maser components identified at 19.9 GHz, as opposed to ∼10 components in 6.7 and 12.2 GHz observations with similar angular resolution (Norris et al. 1993). Ultracompact (UC) Hii regions are bubbles of ionized gas associated with newly formed massive stars, and the first detection of such a region in G 339.884−1.259 was by Ellingsen et al. (1996). The emission was measured at 8.5 GHz and had a peak brightness of 6.1 mJy beam−1. The peak of the methanol emission at −38.7 km s−1 was found to be offset from this emission.

3. Observations

The Long Baseline Array (LBA) is a heterogeneous VLBI array, with either 5 or 6 antennas available for the observations reported here. In the period spanning 2012 March to 2014 March, a total of six epochs of observations were undertaken (LBA experiment code v255, epochs q to v inclusive) towards the southern 6.7 GHz methanol maser G 339.884−1.259 (see Table 2). The LBA antennas available for one or more epochs were the Australia Telescope Compact Array (ATCA), Ceduna 30 m, Hartebeesthoek 26 m, Hobart 26 m, Mopra 22 m and Parkes 64 m antennas. The ATCA is itself a connected 6-element interferometer, which was operated in tied-array mode, with the outputs from either four or five 22 metre antennas phased and combined (see Table 1). During the data reduction process, the phase information from the maser emission in a single channel (where the emission is strong and compact) is transferred to the nearby quasar (see Section 4.2). In doing this, it is assumed that the state of the array has remained constant in the time interval between consecutive maser scans, so that the quasar phase can be interpolated with small error contributions (Fomalont 2013). The data were correlated with the DiFX correlator at Curtin University (Deller et al. 2011), with a channel spacing for the quasar data corresponding to a resolution of 500 kHz.
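The frequency and velocity quantities used above are related by the radio-convention Doppler formula; the short snippet below converts the maser's local standard of rest velocity to an observed sky frequency, and the 500 kHz channel width to a velocity width. The rest frequency 6668.5192 MHz is the standard value for the 6.7 GHz methanol transition; the snippet is illustrative and not part of the paper's processing.

```python
C_KMS = 299792.458      # speed of light, km/s
F_REST_MHZ = 6668.5192  # 6.7 GHz methanol maser rest frequency (standard value)

# Radio-convention Doppler: v = c * (f_rest - f_obs) / f_rest,
# so f_obs = f_rest * (1 - v/c); a blueshifted (negative v) line
# appears at a slightly higher frequency.
def observed_frequency(v_lsr_kms):
    return F_REST_MHZ * (1.0 - v_lsr_kms / C_KMS)

# Velocity width of one 500 kHz channel at this frequency.
dv = C_KMS * 0.5 / F_REST_MHZ
print(f"f_obs at -38.7 km/s: {observed_frequency(-38.7):.3f} MHz")
print(f"500 kHz channel = {dv:.1f} km/s")
```

A 500 kHz channel corresponds to roughly 22 km/s at 6.7 GHz, which is adequate for continuum quasar data but far too coarse to resolve maser lines, hence the separate, finer zoom-band correlation of the maser data discussed below.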
The ICRF data were correlated using the same spectral resolution as the background quasar observations in each epoch.

Calibration

We used the Astronomical Image Processing System (AIPS) (Greisen 2003) for data processing, employing the same data reduction steps across all epochs, based on the procedure described in Reid et al. (2009a).

ICRF Data

Prior to using observations of the ICRF catalogue quasars to determine the tropospheric and clock delay parameters for each antenna, it was necessary to remove the estimated ionospheric delay (determined from global models based on GPS total electron content (TEC) observations; Walker & Chatterjee 1999), the Earth Orientation Parameters (EOPs) and parallactic angle effects. We then performed delay calibration using a strong ICRF source with a well known position (<1 mas) (Ma et al. 2009) (see Table 3 for details). A least-squares fit based on the approach in Reid et al. (2009a) was then employed to determine the zenith atmospheric delay, and the results (detailed in Appendix A) were applied to the phase referenced observations in Section 4.2.

Phase Referenced Data

We removed modeled residuals attributed to the ionospheric delay, EOPs and parallactic angle before applying the troposphere and clock drift rate corrections which were determined from the ICRF observations. The AIPS task CVEL was then used to re-position the maser spectrum in the bandpass, accounting for Doppler shifts due to antenna positions and Earth's rotation specific to each epoch. Figure 1 shows that the 6.7 GHz maser spectral peak of G 339.884−1.259 has a local standard of rest velocity (v_lsr) of −38.8 km s−1, as reported by Caswell et al. (2011). ACCOR was then used to correct the amplitude of the data for imperfect sampler statistics at recording, and also for incorrect amplitude scaling at the correlator, which was a problem in some versions of the DiFX software used for correlation.
Following this, we extracted a single autocorrelation scan of the maser spectrum from the most sensitive antenna to use as a template; the resulting calibrated flux densities for each epoch are presented in Table 4. The RMS noise in the quasar image was determined from an area of size 1.5 × 1.5 arcsec in a region of the image where there was no emission. The RMS noise for the maser image was determined from an area of the same dimensions in a spectral channel where there was no emission. We used a single scan (1 to 2 minutes) on J 1706−4600 for delay calibration, to correct the initial residual clock and instrumental error from correlation. However, the quasar dataset was not correlated with the same spectral resolution as G 339.884−1.259 (see Section 3), and there was a need to account for this by modifying the solutions copied to the zoom-band maser data. This crucial step was required in order to prevent the introduction of spurious R-L polarization phase differences into the dataset, and the efficacy of this procedure (see Appendix B) is demonstrated by the similarity in the phase solutions between the different polarizations in Figure 2. Similar phase transfer issues between datasets with differing frequency properties have previously been resolved using comparable methods (Rioja et al. 2008; Dodson et al. 2014). Figure 1 shows that there are multiple strong (>100 Jy) 6.7 GHz methanol maser components in G 339.884−1.259 which offer potential spectral channels for astrometry.

Astrometry, Parallax and Proper Motion

We examined the cross correlation spectra to find the spectral features which showed the smallest relative flux density variations across all baselines for the duration of the observations, taking this to be an indication of an unresolved point source, which would enable an accurate position determination. We found the spectral feature at −35.6 km s−1 to be the best choice. We located the centroid position of the quasar by fitting a 2D Gaussian to the deconvolved J 1706−4600 emission.
The offset of the emission peak from the centre of the image field was recorded for all epochs, and we present these data in Table 4. The change in the position of the −35.6 km s−1 feature in G 339.884−1.259 with respect to J 1706−4600 was modelled independently in right ascension and declination, and included the ellipticity of Earth's orbit. We allowed for systematic sources of uncertainty in right ascension and declination in the parallax model and added these in quadrature to the formal errors in Table 4. A chi-squared per degree of freedom (χ²ν) for the East-West and North-South residuals was determined, and we iteratively adjusted the estimated error floors until χ²ν ≈ 1. The parallax was measured to be 0.48±0.08 mas, corresponding to a distance of 2.1 +0.4 −0.3 kpc to G 339.884−1.259. The proper motion was found to be µx = −1.6±0.1 mas y−1 and µy = −1.9±0.1 mas y−1 (Figure 4 and Table 5). In order to constrain errors in the measured proper motion, we made image cubes of the maser emission for all epochs, and analyzed the changes in the distribution from 2012 March to 2014 November. We found that internal motions in G 339.884−1.259 can be up to ∼0.4 mas y−1 in right ascension and declination. These motions dominate over the formal errors in (µx, µy) when added in quadrature, and we report the measured proper motion with errors as µx = −1.6±0.4 mas y−1 and µy = −1.9±0.4 mas y−1. This measured uncertainty corresponds to internal motions of ∼2 km s−1 in the maser emission and is consistent with proper motion estimates of 6.7 GHz methanol masers in HMSFRs (e.g. Goddi et al. 2011; Moscadelli & Goddi 2014; Sugiyama et al. 2014). A more detailed analysis of the internal motions of the 6.7 GHz emission in G 339.884−1.259 is beyond the scope of the current text and will be presented in future publications.
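The quoted distance and its asymmetric uncertainty follow directly from inverting the parallax (d [kpc] = 1/p [mas]); a minimal sketch using the values from the fit above:

```python
# Sketch: convert a trigonometric parallax into a distance with the
# asymmetric errors that arise from inverting p +/- sigma.
def parallax_to_distance(p_mas, sigma_mas):
    """Return (distance, +error, -error) in kpc for a parallax in mas."""
    d = 1.0 / p_mas
    err_plus = 1.0 / (p_mas - sigma_mas) - d   # smaller parallax -> larger d
    err_minus = d - 1.0 / (p_mas + sigma_mas)  # larger parallax -> smaller d
    return d, err_plus, err_minus

d, ep, em = parallax_to_distance(0.48, 0.08)
print(f"{d:.1f} +{ep:.1f} -{em:.1f} kpc")  # 2.1 +0.4 -0.3 kpc
```

Note that the upper error bar exceeds the lower one because the reciprocal is nonlinear, which is why the text quotes 2.1 +0.4 −0.3 kpc rather than a symmetric uncertainty.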
We excluded data from the 2014 March observations in our parallax and proper motion analysis, as there were significant technical difficulties during that session which prevented us from obtaining accurate measurements of the maser-quasar separation for that epoch. The LBA has previously been used to measure distances to a number of pulsars at 1.6 GHz using trigonometric parallax (Dodson et al. 2003; Deller et al. 2009b,a). The uncertainty in our LBA parallax measurement to G 339.884−1.259 is estimated to be 80 µarcsec and is equivalent to the errors of the best LBA southern pulsar parallax measurement (Deller et al. 2009a). The uncertainty from our observations is a factor of 2 poorer than the parallax of 6.7 GHz methanol masers in ON 1 (0.389 ± 0.045 mas) measured by Rygl et al. (2010). The larger angular separation between our maser and reference quasar (Table 1) would have adversely affected the interpolated phase transfer solutions (described in Section 3) and contributed to the larger uncertainty estimate presented here (see Appendix A). In considering these factors, we assess that with better atmospheric calibration and a smaller separation between the maser and quasar, it will be possible to attain parallax accuracies of ∼20 µarcsec using the LBA.

Kinematic distance to G 339.884−1.259

The kinematic distance to sources in the Galactic disk can be determined from a model which describes the rotational speed of the disk at the Sun (Θ0), the distance of the Sun from the Galactic centre (R0), and the measured v_lsr. Green & McClure-Griffiths (2011) report a kinematic distance to G 339.884−1.259 of 2.6 ± 0.4 kpc using Θ0 = 246 km s−1, R0 = 8.4 kpc (Reid et al. 2009b) and v_lsr = −34.3 km s−1 (the mid-point of the 6.7 GHz methanol maser emission range in Figure 1). Previous studies of the high-mass star formation region associated with G 339.884−1.259 (e.g. Ellingsen et al. 1996; De Buizer et al. 2002; Dodson 2008) have used a kinematic distance of ∼3 kpc to this source.
This value was determined using earlier models with Galactic rotation speeds of Θ0 ≃ 220 km s−1 (see Section 6.2 for further comments). Using updated Galactic parameters of Θ0 = 240 km s−1 and R0 = 8.34 kpc, with solar motion parameters of U⊙ = 10.70 km s−1 (towards the Galactic centre), V⊙ = 15.60 km s−1 (in the direction of Galactic rotation) and W⊙ = 8.90 km s−1 (towards the North Galactic Pole), we report the kinematic distance to the associated CS(2-1) cloud with v_lsr = −31.6 km s−1 (Bronfman et al. 1996) to be 2.5±0.5 kpc. This result is comparable to the parallax distance in Section 5, but with a large estimated error. Given an uncertainty of ∼8 km s−1 in Θ0, the estimated error in the quoted kinematic distance of 2.5 kpc is doubled to ∼1 kpc.

Association of G 339.884−1.259 High-Mass Star Formation Region with the Scutum Arm

Our ability to precisely determine the structure of the Milky Way is hampered by our location in the midst of its spiral arms.

We find that there is a difference of 0.219′′ between our measured position and that reported by Caswell et al. (2011). The formal errors in the fitted position for J 1706−4600 were found to be <0.1 mas (Table 4). As this is an order of magnitude smaller than the known positional error of this source (see Section 3), the resultant uncertainty in the updated maser position remains 2.1 mas when error contributions are added in quadrature. There is a separation difference of 0.074′′ between the original and updated positions.

Physical constraints on the ionizing star

High-mass star formation is still not well understood, and there is a vibrant debate regarding the processes which result in their formation (e.g. McKee & Tan 2003; Bonnell et al. 1998; Garay & Lizano 1999). Accurate distances to star formation regions help to put tight constraints on the physical environments from which young high-mass stars are born. This includes fundamental attributes such as size, enclosed mass and luminosity. Therefore, any accurate model describing the star formation process must also be governed by the limits of these physical constraints.
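For reference, the ∼2.5 kpc kinematic distance quoted in Section 5.1 can be reproduced with a simplified flat-rotation-curve calculation; this sketch ignores the full solar-motion correction applied in the text and takes the near kinematic root, which is appropriate for this fourth-quadrant source:

```python
import math

def kinematic_distance(l_deg, b_deg, v_lsr, theta0=240.0, r0=8.34):
    """Near kinematic distance (kpc) for a flat rotation curve.

    Uses v_lsr = Theta0 * (R0/R - 1) * sin(l) * cos(b) to get the
    Galactocentric radius R, then solves the plane-geometry relation
    R^2 = R0^2 + (d cos b)^2 - 2 R0 (d cos b) cos(l) for the near root.
    """
    l, b = math.radians(l_deg), math.radians(b_deg)
    r = r0 / (1.0 + v_lsr / (theta0 * math.sin(l) * math.cos(b)))
    disc = r**2 - (r0 * math.sin(l))**2
    d_proj = r0 * math.cos(l) - math.sqrt(disc)  # near kinematic solution
    return d_proj / math.cos(b)

d = kinematic_distance(339.884, -1.259, -31.6)
print(round(d, 1))  # ~2.5 kpc, consistent with the quoted 2.5 +/- 0.5 kpc
```

The far root (plus sign on the square root) is the other formal solution of the quadratic; source-specific information such as Hi absorption or, as here, a parallax is needed to break that ambiguity in general.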
Previous groups have determined the properties of G 339.884−1.259 based on its kinematic distance (see Section 5.1). Using values from Ellingsen et al. (1996) and the equations from Panagia & Walmsley (1978), we present updated estimates for the electron density (n_e), mass of ionized hydrogen (M_Hii), excitation parameter (U) and the Lyman continuum photon flux (N_L) in Table 6. Based on log N_L = 45.6 s−1 in Table 6, we find that the continuum emission is consistent with the core object being a B1-type star (Panagia 1973). This classification implies that G 339.884−1.259 is relatively small for a young high-mass star. It has been proposed that the luminosity of 6.7 GHz methanol masers increases as the associated young stellar object evolves (e.g. Breen et al. 2010). In this scenario, sources such as G 339.884−1.259, which is amongst the most luminous 6.7 GHz methanol masers in the galaxy and is associated with a number of rare class II methanol transitions (Krishnan et al. 2013; Ellingsen et al. 2013), are close to the end of the methanol-maser phase of high-mass star formation. Recently, Urquhart et al. (2015) have put forward an alternative interpretation, that the methanol maser luminosity has a closer dependence on the bolometric luminosity of the associated high-mass star than on its evolutionary phase. In this case, we would expect G 339.884−1.259 to be associated with a high-mass young stellar object towards the upper end of the mass range. However, this is not the case.

Conclusion

We are currently conducting a large project using the LBA to measure the positions of thirty 6.7 GHz methanol masers in the Southern hemisphere relative to background quasars, to sub-milliarcsecond accuracy. These measurements will be used to determine their distances using trigonometric parallax.
The source list contains many prominent HMSFRs in the third and fourth quadrants of the Milky Way galaxy, where the LBA's astrometric performance as a VLBI instrument is unmatched. In this paper, we have shown the potential of this project by successfully making the first parallax measurement of a southern 6.7 GHz methanol maser source, obtaining a parallax of 0.48±0.08 mas. We have used the parallax distance to update the estimated physical parameters for the G 339.884−1.259 star formation region and now classify the ionizing star as spectral type B1. The young stellar object associated with G 339.884−1.259 is relatively small for a "high-mass" object and is a strong example of the lack of correlation between stellar mass and the intensity of 6.7 GHz maser emission. Results from VLBI astrometric observations, including the BeSSeL survey, are continuing to shape the way we view our Galaxy, by clearly revealing its spiral arm structure, rotation dynamics and mass. These results are based on observations which are concentrated in the first and second quadrants of the Milky Way, and the inclusion of results from the LBA is vital to ensure that sources are sampled from across a broad range of Galactic longitudes in order for a complete picture of the Galaxy to emerge.

A. Tropospheric and Ionospheric Contributions to Astrometric Accuracy

We found that the parallax measurement with the smallest error floors was obtained when troposphere corrections were applied to the 2013 March and 2013 August epochs only. When the corrections were incorporated for all epochs, the parallax was measured to be 0.58±0.11 mas. When we excluded the tropospheric corrections for all epochs, the measured trigonometric parallax was 0.50±0.13 mas. We attribute the effectiveness of applying this calibration in each epoch to limitations that we encountered in estimating both the tropospheric and ionospheric delay.
A.1. Troposphere Calibration

The multiband delay solutions obtained from 4 × 16 MHz IFs were found to be significantly noisier than those obtained when only 2 × 16 MHz IFs were used. As a result, the multiband delays for our observations were determined from a total bandwidth separation of only 32 MHz instead of the intended 400 MHz. This led to a loss in the accuracy of the delay measurement and affected our ability to model the troposphere path length contribution. Deller et al. (2009a) demonstrated that the available TEC models for ionosphere delay correction in the Southern hemisphere are sometimes in error, due to the lower density of GPS stations in this region. The ionospheric delay is dispersive, whereas the tropospheric delay is not. Hence uncorrected ionospheric delays which are inadvertently handled as non-dispersive tropospheric delays can result in erroneous phase corrections (Rygl et al. 2010). This could be the reason why the search for multiband delays over the highly sensitive 400 MHz range was noisy; we detail the ionospheric path length contribution in the next subsection. In addition to the troposphere path length, the clock drift over the course of the observations was also resolved from the multiband delays. The clock model assumes that the H-maser at each station had a delay drift which varied linearly with time. Figure A1 shows the residual differences between the measured clock offsets and the modeled offsets for the duration of an observation. The squares, circles and crosses represent (respectively) the data, the model and the residuals between the two. The near-zero scatter of the residuals indicates that a linear model for the clock drift at each station was successful. The RMS noise of the residuals was found to be between 0.03 and 0.3 nanoseconds across all baselines and epochs.
The heterogeneous nature of the array means that some antennas were able to participate in only a relatively small number of ICRF scans: Parkes due to a combination of slow slew rates and limited elevation coverage, and HartRAO due to its distance from the rest of the LBA antennas. We were not able to include HartRAO in the majority of the ICRF observations, as the quasars which had risen at the Australian stations were often set there. As a result, we were unable to determine the clock drift for HartRAO and have not included observations from this station in determining the results given in Table 5. We are currently investigating alternative methods to determine the clock drift rate for HartRAO for inclusion in the future.

A.2. Ionosphere Calibration

Given the observing parameters (6.7 GHz, 4 minute cycle time, 2.48° separation between the calibrator and the target, 3 cm residual zenith path length (Reid et al. 1999) and residual ionospheric content of 6 TECU (Ho et al. 1997)), we can estimate the expected error contributions from the static and dynamic components of the troposphere and the ionosphere using the formulae in Asaki et al. (2007). This predicts a dynamic tropospheric phase error of 28°, a static tropospheric phase error of 15°, a dynamic ionospheric phase error of 5° and a static ionospheric phase error of 21°. The geodetic blocks typically reduce the residual zenith path length to about 1 cm, which would decrease the static tropospheric phase error to 5°. Dynamic errors, because of the short timescale on which they operate, reduce the measured flux density of the targeted source without having a large effect on the positional centroid. This clearly leaves the residual static ionosphere as the largest uncorrected effect. Static residuals will introduce a shift in the observed position, as measured on any single baseline. For arrays with 10 or so antennas these shifts average out somewhat, but the LBA, with 5 to 6 antennas, is more vulnerable to this effect.
Tuning the observational parameters, such as the cycle time and the separation between sources, is the most effective method to reduce these error contributions. For example, reducing the cycle time to 60 seconds would diminish the dynamic phase errors to less than 10°, and decreasing the separation between the calibrator and the target to 1° would cut the static phase errors back to less than 10°. However, the array sensitivity places limits on the minimum useful scan duration and the separation of the closest suitable phase calibrator. In this case we were not able to have shorter scans or closer calibrators. For observations at higher frequencies, such as 22 GHz, the contribution of the residual zenith path length dominates. The geodetic blocks are therefore absolutely essential for measurements such as those in Honma et al. (2012), as they reduce the typical errors from 3 cm to about 1 cm. However, at 6.7 GHz the residual ionospheric contribution of 6 TECU is equivalent to 5.3 cm, as opposed to 0.5 cm at 22 GHz, and there is currently no established strategy for minimizing these contributions. Alternative approaches to lower the residual ionospheric contribution are currently being investigated and will be the subject of future publications.

B. Delay Calibration for Phase Referencing Data

We used FRING to determine the visibility phases and group delay from an individual scan on the delay calibrator J 1706−4600, taken from the continuum mode data. It is assumed that the delay τ calculated by FRING is constant across the IF and can be used to determine a phase correction ∆φ for a frequency channel at offset ∆ν from the lower band-edge, given by

∆φ = τ ∆ν

The 2 MHz maser zoom-band is a sub-band of the IF used to determine the manual delay, but it has a different lower band-edge (offset by B_off) to the continuum data and a different spectral channel width δν.
To apply the appropriate phase correction to the maser reference channel, it was necessary to edit the FQ table in AIPS so that the spectral channel width δν′ times the channel number σ_pr corresponds to the frequency of the maser phase reference channel ν_pr, referenced to the lower band-edge of the continuum mode data:

ν_pr = B_off + σ_pr δν = σ_pr δν′, which gives δν′ = B_off/σ_pr + δν

It was also necessary to edit the total bandwidth parameter in the FQ table for the maser data so that it matched the bandwidth of the continuum mode data. This step can be avoided if the delay calibrator scans are correlated for both continuum and zoom mode configurations, and it is recommended that this be the general procedure in future LBA observations. Equivalent solutions for ∆φ and τ can be obtained for both datasets if an identical time range is used in FRING. An alternative approach would be to correlate the continuum data into several channels such that one of these corresponds exactly to the band coverage of the zoom mode data. The multi-band group delay solutions obtained from the continuum mode data could then be used to find the phase solution in the channel corresponding to the zoom band. This solution could then be directly transferred to the zoom band data.

Fig. 1. The autocorrelation spectrum of G 339.884−1.259 using all antennas except HartRAO, from 2013 June.

The peak of the methanol emission is offset from the continuum peak by 0.6′′, with the methanol masers lying in a line approximately across the centre of the continuum emission, in an orientation perpendicular to the direction of extension of the UCHii region. De Buizer et al. (2002) interpret the radio continuum emission as being due to an ionized outflow along the axis of extension. The observations for each epoch typically lasted for between 18 and 24 hours, of which approximately one-third of the time was utilised for observation of G 339.884−1.259 (and associated calibration observations).
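The channel-width bookkeeping described in Appendix B can be checked numerically; the band offset, reference channel number and group delay below are illustrative values, not those of the actual observations:

```python
# Hypothetical numbers for the Appendix B channel-width adjustment:
B_off = 4.0e6    # Hz: assumed offset of the zoom-band lower edge
sigma_pr = 1024  # assumed maser phase-reference channel number
dnu = 977.0      # Hz: zoom-band spectral channel width (2 MHz / 2048)

# Frequency of the reference channel relative to the continuum band edge
nu_pr = B_off + sigma_pr * dnu

# Modified channel width written into the AIPS FQ table, chosen so that
# sigma_pr * dnu_prime reproduces nu_pr
dnu_prime = B_off / sigma_pr + dnu
assert abs(sigma_pr * dnu_prime - nu_pr) < 1e-6

# Phase correction for that channel from an assumed FRING group delay tau,
# using delta-phi = tau * delta-nu (here expressed in turns, then degrees)
tau = 25e-9  # s: illustrative group delay
dphi_deg = ((tau * nu_pr) % 1.0) * 360.0
print(round(dphi_deg, 1))  # -> 45.0
```

The point of the edit is only bookkeeping: the correlator-assigned channel width is replaced by an effective width so that AIPS, multiplying channel number by channel width, lands on the true sky frequency of the reference channel.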
The remaining time was used for observations of other sources, the results of which will be reported in future publications. The setup consisted of two different frequency configurations, to satisfy the sensitivity requirements of the different modes. The first configuration was used to record continuum observations for calibration of the tropospheric component of the delay, and the second for phase referenced maser observations. The first configuration utilized as broad a frequency range as possible. The heterogeneous nature of the array meant that, with different receiver front ends, it was necessary to compromise on the frequency setup so that it could be adopted at all antennas. The LBA Data Acquisition System (DAS) can record the observed signals onto two independent IF bands. The optimal frequency configuration able to accommodate these restrictions is 4 × 16 MHz bands, the first pair (LBA DAS IF 1 with RR polarization) with lower band-edges at 6300 and 6316 MHz, and the second pair (LBA DAS IF 2 with LL polarization) with lower band-edges at 6642 and 6658 MHz. Brief observations of approximately 12 to 18 quasars (∼2 minutes per source) with point-like structure at high resolution were undertaken over a broad azimuth range (generally at low elevation). These quasars were selected from the International Celestial Reference Frame (ICRF) Second Realization catalogue (Ma et al. 2009). These "ICRF observations" were used to determine the troposphere path length contributions to the delay (τ), as well as to model the clock drift rate at the observatories (Section 4.1 and Appendix A). The ICRF observations were grouped into 45 minute blocks, with an interval of between 3 and 6 hours separating consecutive blocks. The second frequency configuration was for phase referencing between the maser G 339.884−1.259 and a nearby background quasar for parallax determination.
The LBA DAS system was set to record dual circular polarisation for 2 × 16 MHz bands with lower band-edges at 6642 and 6658 MHz. The phase referencing technique was employed by alternating 2 minute scans on the target maser with 2 minute scans on a nearby quasar. In order to achieve accurate phase referenced astrometry, suitable background quasars had to fulfill the criteria that they have little or no extended structure, that their coordinates be known to an uncertainty of <1 mas, and that they be in close angular proximity (∼1°) to the associated maser (Reid et al. 2009a). The primary databases used in our search for quasars were the AT20G (Hancock et al. 2011), Astrogeo (Petrov et al. 2011) and ATPMN (McConnell et al. 2012) catalogues. A list of potential background quasars which were observed in conjunction with G 339.884−1.259 is given in Table 1.

The template autocorrelation spectrum was taken from a single antenna (Parkes, with the exception of the final epoch), and ACFIT was used to scale the spectra at all observing stations to this template, based on the nominal system equivalent flux density (SEFD) of the antennas. The resultant amplitude gains as a function of time were then applied to the maser and quasar datasets. The calibrated peak flux intensities for the feature at −35.6 km s−1 in G 339.884−1.259 (the maser component used for phase referencing) and for J 1706−4600 (the phase referenced quasar used for parallax measurement) are presented in Table 4.

We found the spectral feature at −35.6 km s−1 (Figure 3) to be clearly the best choice for G 339.884−1.259. All the other peaks exhibited substantial variability across individual baselines, suggesting that their emission is not point-like on milliarcsecond scales, or that there is a blend of emission from different locations. We then fringe fitted on the maser spectral channel associated with this feature before transferring the phase solutions to J 1706−4600. Figure 2 shows a typical plot of phase versus time from the 2013 August observations.
After transferring the phase corrections, we averaged all channels in the quasar dataset and then imaged the emission using a Gaussian beam of 5.9 × 4.2 mas (the average from all epochs) (Figure 3). We report detections of J 1706−4600 and J 1654−4812 on VLBI baselines, and only the first of these was suitable for astrometry. J 1706−4600 was observed in all epochs and appears to be dominated by a single component with no jets. J 1654−4812 was observed in all epochs except 2012 March and showed variable source structure, which made it unsuitable for parallax determination. J 1706−4600 has a positional accuracy of 2.10 mas (Petrov et al. 2011) and shows deviation from point-like structure at levels <10% of the peak flux density (Figure 3). J 1654−4812 had an estimated positional accuracy of 0.4′′ (McConnell et al. 2012), and from our phase referenced images we are able to present updated coordinates for J 1654−4812 to an uncertainty of 2.1 mas (Table 1).

Our measurement can be compared with the parallax of 6.7 GHz methanol masers in ON 1 obtained using the EVN. The LBA (when operating without HartRAO; see Appendix A) and the EVN configuration in Rygl et al. (2010) both have a maximum East-West baseline separation of ∼2000 km, giving them similar resolution capabilities for parallax determination (Reid et al. 2009a).

Fig. 2. Phase solutions in each polarization from ATCA (AT), Ceduna (CD), Hobart (HO), Mopra (MP) and Parkes (PA) in the 2013 August session, after fringe fitting (with reference to Hobart 26m) on the −35.6 km s−1 feature in G 339.884−1.259.

Fig. 3. Emission from the phase reference channel corresponding to v_lsr = −35.6 km s−1 in G 339.884−1.259 (left), which was strong and showed compact structure. J 1706−4600 (right) showed consistent centroid structure, dominated by a single peak, throughout all epochs.

Fig. 4. Parallax and proper motion of the −35.6 km s−1 reference feature of G 339.884−1.259. The expected positions from the fits are indicated with triangular and circular markers.
Left panel: the sky positions, with the first and last epochs labeled. Middle panel: East-West (triangles) and North-South (circles) position offsets, with the best combined parallax and proper motion fits, versus time. The models are offset along the y-axis for clarity. Right panel: the parallax signature with the best fit proper motions removed.

Rygl et al. (2010) reduced random errors in their measurements by modelling parallaxes determined from the averaged positions of several maser spots. However, given the strong and compact nature of the maser spot at −35.6 km s−1, we are not able to reduce our astrometric errors using this method, as they will be dominated by systematics from the uncompensated atmosphere. Additionally, the EVN observations of ON 1 utilise background quasars with separation angles of 1.71° and 0.73°; in comparison, J 1706−4600 is separated by 2.48° from G 339.884−1.259.

Westerhout (1957); Cohen et al. (1980); Dame et al. (2001); Jones et al. (2013) and others have used surveys of Hi and CO molecular clouds to identify the Galaxy's spiral arms from the ensuing longitude-velocity (ℓ − V) diagrams. This method of spiral arm modelling has been used by Xu et al. (2013); Zhang et al. (2013); Choi et al. (2014); Sanna et al. (2014); Sato et al. (2014); Wu et al. (2014) to assign HMSFRs to spiral arms, by associating them with molecular clouds in our Galaxy. Based on the v_lsr of the CO spectrum in the region (T. Dame 2014, private communication) and the parallax distance in Table 5, we suggest that G 339.884−1.259 is located at the near edge of the Scutum spiral arm, as shown in Figure 5. Sato et al.
(2014) modeled the Scutum arm based on measurements of 16 HMSFRs, and we show the position of G 339.884−1.259 in relation to these in Figure 5. It can be seen that at the longitude of G 339.884−1.259 the Sagittarius and Scutum arms are in close proximity, and extrapolating the best information currently available suggests that these two arms may merge at lower longitudes. Using a log-periodic spiral model, Sato et al. (2014) measured a pitch angle of ψ = 19.8°±3.1° for the Scutum arm. We have now included G 339.884−1.259 in this model, using a source which extends the Galactocentric azimuth coverage by about 10°, giving an updated value of ψ = 19.2°±4.1°.

5.3. Updated coordinates for G 339.884−1.259 and J 1654−4812

During the observations we used α = 16h52m04.6700s, δ = −46°08′34.200′′ as the coordinates for G 339.884−1.259 (Caswell et al. 2011). This is the position of the maser spectral peak at −38.7 km s−1 (Figure 1). When applying phase corrections from the −35.6 km s−1 feature to J 1706−4600, we can assume that any offset of the quasar image from the centre of the field is due to the offset of the phase referenced position with respect to the true maser position. We iteratively corrected the maser coordinates in the AIPS source table using CLCOR until there was no further improvement in the quasar position from the image centre in the 2013 March epoch of observations. It is important to use accurate maser coordinates in the data reduction process, to minimize errors in determining the quasar position during phase referencing (Reid et al. 2009a). The position corrections from the 2013 March epoch of observations were then applied to G 339.884−1.259 for all epochs, and we present the updated coordinates corresponding to the −38.7 km s−1 feature in Table 1.

Fig. 5. A face-on view of the Milky Way galaxy showing G 339.884−1.259 in relation to other HMSFRs in the Scutum arm (Sato et al. 2014).
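A log-periodic spiral relates Galactocentric radius R to azimuth β through the pitch angle ψ via ln(R/R_ref) = −(β − β_ref) tan ψ. The sketch below uses illustrative reference values, not the fitted Sato et al. (2014) parameters:

```python
import math

def arm_radius(beta_deg, psi_deg=19.2, r_ref=5.0, beta_ref_deg=25.0):
    """Galactocentric radius (kpc) of a log-periodic spiral arm.

    psi_deg is the pitch angle; r_ref and beta_ref_deg are an assumed
    reference radius and azimuth, chosen here only for illustration.
    """
    beta = math.radians(beta_deg)
    beta_ref = math.radians(beta_ref_deg)
    return r_ref * math.exp(-(beta - beta_ref) * math.tan(math.radians(psi_deg)))

# For a positive pitch angle the arm winds inward with increasing azimuth
assert arm_radius(35.0) < arm_radius(25.0) == 5.0
```

Fitting such a model amounts to a straight-line fit of ln R against β, whose slope is −tan ψ; adding a source like G 339.884−1.259 that extends the azimuth coverage lengthens that baseline and so constrains the slope over a wider arc.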
Sections of arcs of the Perseus (top right), Local (top centre, with the Solar position) and Sagittarius arms are also shown. The background circular disks are scaled to approximate the Galactic bar region (∼4 kpc) and the solar circle (∼8 kpc) (see Reid et al. (2014)).

Observations of J 1654−4812 were made using the coordinates α = 16h54m18.24s, δ = −48°13′03.7′′ from McConnell et al. (2012). The updated position of this source, measured relative to the corrected G 339.884−1.259 position, is also presented in Table 1.

6. Properties of the associated high-mass star formation region

6.1. Peculiar motion

The proper motions µx = −1.6±0.4 mas y−1 and µy = −1.9±0.4 mas y−1, together with the v_lsr = −31.6 km s−1 of the CS(2-1) cloud associated with this source (Bronfman et al. 1996), make it possible to determine the full 3-dimensional motion of G 339.884−1.259 with respect to the Galactic centre. The dynamical model of the Galaxy we use assumes a flat rotation curve of the disk with a speed of Θ0 = 240 km s−1, which is a reasonable assumption based on recent analysis. The distance of the Sun to the Galactic centre is taken to be R0 = 8.34 kpc, and the Sun is assumed to have peculiar motion components U⊙ = 10.70 km s−1, V⊙ = 15.60 km s−1 and W⊙ = 8.90 km s−1. When using this model, we find the peculiar motion of G 339.884−1.259 to be U = −4.0±5.9 km s−1, V = 6.47±4.6 km s−1 and W = 10.0±1.2 km s−1, in a reference frame that is rotating with the Galaxy. Hence all components of the peculiar velocity are consistent (within the estimated uncertainty) with the bulk of HMSFRs in the Milky Way. A good fit to the model of spiral arm motions is obtained when an RMS of about 5-7 km s−1 is assumed for each velocity component of a HMSFR.
This is reasonable for Virial motions of stars in giant molecular clouds, and so there may not be much evidence for large (>10 km s−1) peculiar motions. G 339.884−1.259 demonstrates that high-luminosity, multiple-transition methanol maser emission need not be associated with the most massive O-type young stars. A single example is insufficient to resolve the issue of whether evolutionary stage or stellar mass plays the primary role in determining class II methanol maser luminosity; however, G 339.884−1.259 shows greater consistency with the expectations of the Breen et al. (2010) evolutionary hypothesis.

Fig. A1 - Multi-band delay (nanoseconds) vs. universal time (hours): a sample of multi-band delay solutions from Ceduna (CD), Hobart (HO), Mopra (MP) and Parkes (PA) to the ATCA (AT), determined from the ICRF mode observations in 2013 June. The squares, circles and crosses represent (respectively) the data, the model and the residuals between the two.

Table 1: Coordinates of observed sources. The separation and position angle columns describe the offset between the respective quasar and G 339.884−1.259 on the sky. The reported positions of G 339.884−1.259 and J 1654−4812 have been revised (see Section 5.3) based on the 2013 March epoch. We failed to detect J 1648−4826, J 1644−4559, J 1648−4521 and J 1649−4536 in our observations. The upper limit for detection is 5 times the image RMS (from a box of size 1.5×1.5 arcsec).

Source            Correlated flux (mJy)  Separation (°)  Position angle (°)  RA (h m s)     Dec (° ′ ″)
Maser:
G 339.884−1.259   −                      −               −                   16 52 04.6776  −46 08 34.404
Detected quasars:
J 1706−4600       131.1                  2.48            88.1                17 06 22.0503  −46 00 17.824
J 1654−4812       3.8                    2.11            169.9               16 54 18.2448  −48 13 03.756
Non-detected quasars:
J 1648−4826       <0.5                   2.36            193.3               16 48 47.9200  −48 26 18.800
J 1644−4559       <0.5                   1.27            276.5               16 44 49.2856  −45 59 09.646
J 1648−4521       <0.9                   1.02            319.1               16 48 14.2110  −45 21 38.090
J 1649−4536       <0.4                   0.72            317.1               16 49 14.7810  −45 36 31.190
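As a quick sanity check (ours, not part of the paper), the separation column of Table 1 can be reproduced from the listed coordinates. The sketch below uses the numerically robust Vincenty form of the great-circle distance and recovers the 2.48° maser-quasar offset for J 1706−4600.

```python
import math

def hms_to_deg(h, m, s):
    """Right ascension in hours/minutes/seconds to degrees."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def dms_to_deg(d, m, s):
    """Declination in degrees/arcmin/arcsec to decimal degrees (sign from d)."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation (Vincenty formula, stable at small angles)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2) - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = math.sin(dec1) * math.sin(dec2) + math.cos(dec1) * math.cos(dec2) * math.cos(dra)
    return math.degrees(math.atan2(num, den))

# coordinates from Table 1
maser = (hms_to_deg(16, 52, 4.6776), dms_to_deg(-46, 8, 34.404))
quasar = (hms_to_deg(17, 6, 22.0503), dms_to_deg(-46, 0, 17.824))
sep = angular_separation_deg(*maser, *quasar)
print(f"maser-quasar separation: {sep:.2f} deg")  # Table 1 lists 2.48
```

At a separation this small the simple flat-sky estimate Δα cos δ would give nearly the same answer, but the Vincenty form remains accurate for all the source pairs in the table.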
As the maser emission covers only a small fraction of the 32 MHz of recorded bandwidth, it is not necessary to correlate the entire band. Hence, for the maser data a 2 MHz zoom band was correlated with 2048 channels (1 MHz over 1024 channels for the 2014 March epoch), giving a spectral resolution of 0.977 kHz, corresponding to a velocity separation of 0.055 km s−1. For the observations of the background quasar the entire observed bandwidth was correlated. In the 2012 March and 2013 March epochs, 256 spectral channels were used per 16 MHz bandwidth, corresponding to a resolution of 62.5 kHz. In the remaining epochs, 32 spectral channels were used per 16 MHz bandwidth.

Table 2: Stations which participated in the observations of G 339.884−1.259 and J 1706−4600.

D.O.Y  Start UT  Code   Epoch          Participating stations
67     04:00     v255q  2012 March     ATCA, Ceduna, Hobart, Mopra, Parkes
77     04:00     v255r  2013 March     ATCA, Ceduna, HartRAO, Hobart, Parkes
168    02:30     v255s  2013 June      ATCA, Ceduna, HartRAO, Hobart, Mopra, Parkes
226    18:00     v255t  2013 August    ATCA, Ceduna, HartRAO, Hobart, Mopra, Parkes
323    12:00     v255u  2013 November  ATCA, Ceduna, Hobart, Mopra, Parkes
60     22:00     v255v  2014 March     ATCA, Ceduna, HartRAO, Hobart, Mopra

Table 3: Sources used in each epoch for ICRF mode delay calibration.

Epoch          Source    RA (h m s)   Dec (° ′ ″)    Scan (min)
2012 March     1349−439  13 52 56.54  −44 12 40.388  1:00
2013 March     0537−441  05 38 50.36  −44 05 08.939  1:00
2013 June      1302−102  13 05 33.02  −10 33 19.428  2:00
2013 August    0013−005  00 16 11.09  −00 15 12.445  2:00
2013 November  1929+226  19 31 24.92  22 43 31.259   1:15

Table 4: Measured fluxes and differential fitted positions between the −35.6 km s−1 feature in G 339.884−1.259 and J 1706−4600 across all epochs after phase referencing. The formal errors in each coordinate are determined from the output of IMFIT. Offsets and their errors are in mas; the J 1706−4600 flux and image RMS are in mJy, and the G 339.884−1.259 flux and image RMS are in Jy.

Epoch          x offset  Error  y offset  Error  J 1706−4600 flux  RMS   G 339.884−1.259 flux  RMS
2012 March     4.177     0.056  2.998     0.098  30.29             0.55  413.07                0.03
2013 March     2.667     0.026  1.085     0.023  28.98             0.36  418.17                0.02
2013 June      1.654     0.003  0.464     0.002  114.84            0.15  427.49                0.02
2013 August    1.031     0.021  0.208     0.017  61.06             0.65  462.02                0.02
2013 November  0.727     0.016  0.006     0.012  62.25             0.38  268.95                0.02

Fig. - Contour image of G 339.884−1.259 (peak flux 268.95 Jy beam−1; contour levels as percentages of the peak).

Table 5: Parallax distance and proper motion of G 339.884−1.259.

Maser feature (km s−1)  Distance (kpc)  µx (mas y−1)  µy (mas y−1)
−35.6                   2.1 +0.4/−0.3   −1.6±0.1      −1.9±0.1

In combining this result with measurements of other HMSFRs in Sato et al. (2014), we place G 339.884−1.259 at the near edge of the Scutum spiral arm of the Milky Way, and determine an updated pitch angle of ψ = 19.2° ± 4.1° for this arm.

Table 6: Physical parameters of G 339.884−1.259 as described in Section 6.2 and adjusted to a preferred distance of 2.1 kpc.

n_e (cm−3)  M_HII (M⊙)  U (pc cm−2)  log N_L (s−1)  Spectral type
3.1 × 10⁴   6 × 10⁻⁴    5.7          45.6           B1

www.atnf.csiro.au/vlbi/documentation/vlbi antennas/index.html

Acknowledgements

This paper is dedicated to the memory of our co-author James Caswell, who passed away January 14th 2015. James was a leading figure in maser astronomy for more than three decades and his work in this field represents a peerless legacy to future researchers. The authors would like to thank the referee for their detailed analysis and comments.

References

Alef, W. 1988, in IAUS, Vol. 129, The Impact of VLBI on Astrophysics and Geophysics, ed. M. J. Reid & J. M. Moran, 523
Asaki, Y., Sudou, H., Kono, Y., et al. 2007, PASJ, 59, 397
Beasley, A.
J., & Conway, J. E. 1995, in ASPC, Vol. 82, Very Long Baseline Interferometry and the VLBA, ed. J. A. Zensus, P. J. Diamond, & P. J. Napier, 327
Bonnell, I. A., Bate, M. R., & Zinnecker, H. 1998, MNRAS, 298, 93
Breen, S. L., Ellingsen, S. P., Caswell, J. L., & Lewis, B. E. 2010, MNRAS, 401, 2219
Bronfman, L., Nyman, L.-A., & May, J. 1996, A&AS, 115, 81
Brunthaler, A., Reid, M. J., Menten, K. M., et al. 2009, ApJ, 693, 424
—. 2011, AN, 332, 461
Caswell, J. L., & Haynes, R. F. 1983, AuJPh, 36, 361
Caswell, J. L., Murray, J. D., Roger, R. S., Cole, D. J., & Cooke, D. J. 1975, A&A, 45, 239
Caswell, J. L., Yi, J., Booth, R. S., & Cragg, D. M. 2000, MNRAS, 313, 599
Caswell, J. L., Fuller, G. A., Green, J. A., et al. 2010, MNRAS, 404, 1029
—. 2011, MNRAS, 417, 1964
Chibueze, J. O., Omodaka, T., Handa, T., et al. 2014, ApJ, 784, 114
Choi, Y. K., Hachisuka, K., Reid, M. J., et al. 2014, ApJ, 790, 99
Cohen, R. S., Cong, H., Dame, T. M., & Thaddeus, P. 1980, ApJ, 239, L53
Cragg, D. M., Johns, K. P., Godfrey, P. D., & Brown, R. D. 1992, MNRAS, 259, 203
Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, ApJ, 547, 792
De Buizer, J. M., Walsh, A. J., Piña, R. K., Phillips, C. J., & Telesco, C. M. 2002, ApJ, 564, 327
Deller, A. T., Tingay, S. J., Bailes, M., & Reynolds, J. E. 2009a, ApJ, 701, 1243
Deller, A. T., Tingay, S. J., & Brisken, W. 2009b, ApJ, 690, 198
Deller, A. T., Brisken, W. F., Phillips, C. J., et al. 2011, PASP, 123, 275
Dodson, R. 2008, A&A, 480, 767
Dodson, R., Legge, D., Reynolds, J. E., & McCulloch, P. M. 2003, ApJ, 596, 1137
Dodson, R., Rioja, M. J., Jung, T.-H., et al. 2014, AJ, 148, 97
Ellingsen, S. P., Breen, S. L., Sobolev, A. M., et al. 2011, ApJ, 742, 109
Ellingsen, S. P., Breen, S. L., Voronkov, M. A., & Dawson, J. R. 2013, MNRAS, 429, 3501
Ellingsen, S. P., Cragg, D. M., Lovell, J. E. J., et al. 2004, MNRAS, 354, 401
Ellingsen, S. P., Norris, R. P., & McCulloch, P. M. 1996, MNRAS, 279, 101
Ellingsen, S. P., Shabala, S. S., & Kurtz, S. E. 2005, MNRAS, 357, 1003
Fomalont, E. B. 2013, in Astrometry for Astrophysics: Methods, Models, and Applications (Cambridge University Press), 175-198
Garay, G., & Lizano, S. 1999, PASP, 111, 1049
Goddi, C., Moscadelli, L., & Sanna, A. 2011, A&A, 535, L8
Green, J. A., & McClure-Griffiths, N. M. 2011, MNRAS, 417, 2500
Green, J. A., Caswell, J. L., Fuller, G. A., et al. 2010, MNRAS, 409, 913
—. 2012, MNRAS, 420, 3108
Greisen, E. W. 2003, in Information Handling in Astronomy - Historical Vistas, ASSL, Vol. 285, 109
Hachisuka, K., Brunthaler, A., Menten, K. M., et al. 2006, ApJ, 645, 337
Hancock, P. J., Roberts, P., Kesteven, M. J., et al. 2011, ExA, 32, 147
Ho, C. M., Wilson, B. D., Mannucci, A. J., Lindqwister, U. J., & Yuan, D. N. 1997, RaSc, 32, 1499
Honma, M., Tamura, Y., & Reid, M. J. 2008, PASJ, 60, 951
Honma, M., Bushimata, T., Choi, Y. K., et al. 2004, in Proc. EVN Symposium 2004, ed. R. Bachiller, F. Colomer, J.-F. Desmurs, & P. de Vicente, 203-204
Honma, M., Bushimata, T., Choi, Y. K., et al. 2007, PASJ, 59, 889
Honma, M., Nagayama, T., Ando, K., et al. 2012, PASJ, 64, 136
Imai, H., Kurayama, T., Honma, M., & Miyaji, T. 2013, PASJ, 65, 28
Jones, C., & Dickey, J. M. 2012, ApJ, 753, 62
Jones, C., Dickey, J. M., Dawson, J. R., et al. 2013, ApJ, 774, 117
Krishnan, V., Ellingsen, S. P., Voronkov, M. A., & Breen, S. L. 2013, MNRAS, 433, 3346
Ma, C., Arias, E. F., Bianco, G., et al. 2009, The Second Realization of the International Celestial Reference Frame by Very Long Baseline Interferometry, IERS Technical Note 35, International Earth Rotation and Reference Systems Service (IERS), International VLBI Service for Geodesy and Astrometry (IVS)
McConnell, D., Sadler, E. M., Murphy, T., & Ekers, R. D. 2012, MNRAS, 422, 1527
McKee, C. F., & Tan, J. C. 2003, ApJ, 585, 850
Moscadelli, L., & Goddi, C. 2014, A&A, 566, A150
Moscadelli, L., Reid, M. J., Menten, K. M., et al. 2009, ApJ, 693, 406
Norris, R. P., Caswell, J. L., Gardner, F. F., & Wellington, K. J. 1987, ApJ, 321, L159
Norris, R. P., Whiteoak, J. B., Caswell, J. L., Wieringa, M. H., & Gough, R. G. 1993, ApJ, 412, 222
Norris, R. P., Byleveld, S. E., Diamond, P. J., et al. 1998, ApJ, 508, 275
Panagia, N. 1973, AJ, 78, 929
Panagia, N., & Walmsley, C. M. 1978, A&A, 70, 411
Petrov, L., Phillips, C., Bertarini, A., Murphy, T., & Sadler, E. M. 2011, MNRAS, 414, 2528
Phillips, C. J., Norris, R. P., Ellingsen, S. P., & McCulloch, P. M. 1998, MNRAS, 300, 1131
Reid, M. J., & Honma, M. 2014, ARA&A, 52, 339
Reid, M. J., Menten, K. M., Brunthaler, A., et al. 2009a, ApJ, 693, 397
Reid, M. J., Readhead, A. C. S., Vermeulen, R. C., & Treuhaft, R. N. 1999, ApJ, 524, 816
Reid, M. J., Menten, K. M., Zheng, X. W., et al. 2009b, ApJ, 700, 137
Reid, M. J., Menten, K. M., Brunthaler, A., et al. 2014, ApJ, 783, 130
Rioja, M. J., Dodson, R., Kamohara, R., et al. 2008, PASJ, 60, 1031
Rygl, K. L. J., Brunthaler, A., Menten, K. M., Reid, M. J., & van Langevelde, H. J. 2008, in Proc. EVN Symposium 2008
Rygl, K. L. J., Brunthaler, A., Reid, M. J., et al. 2010, A&A, 511, A2
Sakai, N., Honma, M., Nakanishi, H., et al. 2012, PASJ, 64, 108
Sanna, A., Reid, M. J., Moscadelli, L., et al. 2009, ApJ, 706, 464
Sanna, A., Reid, M. J., Menten, K. M., et al. 2014, ApJ, 781, 108
Sato, M., Wu, Y. W., Immer, K., et al. 2014, ApJ, 793, 72
Sewilo, M., Watson, C., Araya, E., et al. 2004, ApJS, 154, 553
Sobolev, A. M., Cragg, D. M., & Godfrey, P. D. 1997, A&A, 324, 211
Sugiyama, K., Fujisawa, K., Doi, A., et al. 2014, A&A, 562, A82
Urquhart, J. S., Moore, T. J. T., Menten, K. M., et al. 2015, MNRAS, 446, 3461
Walker, C., & Chatterjee, S. 1999, Ionospheric Corrections Using GPS Based Models, VLBA Scientific Memo 23, National Radio Astronomy Observatory and Cornell University
Walsh, A. J., Burton, M. G., Hyland, A. R., & Robinson, G. 1998, MNRAS, 301, 640
Westerhout, G. 1957, BAN, 13, 201
Wu, Y. W., Sato, M., Reid, M. J., et al. 2014, A&A, 566, A17
Xu, Y., Reid, M. J., Menten, K. M., et al. 2009, ApJ, 693, 413
Xu, Y., Reid, M. J., Zheng, X. W., & Menten, K. M. 2006, Sci, 311, 54
Xu, Y., Li, J. J., Reid, M. J., et al. 2013, ApJ, 769, 15
Zhang, B., Reid, M. J., Menten, K. M., et al. 2013, ApJ, 775, 79
Zhang, B., Zheng, X. W., Reid, M. J., et al. 2009, ApJ, 693, 419
[ "Coherence time of a Bose-Einstein condensate in an isolated harmonically trapped gas" ]
[ "Yvan Castin", "Alice Sinatra" ]
[ "Laboratoire Kastler Brossel, ENS-PSL, CNRS, Sorbonne Université and Collège de France, Paris, France" ]
[]
We study the condensate phase dynamics in a low-temperature equilibrium gas of weakly interacting bosons, harmonically trapped and isolated from the environment. We find that at long times, much longer than the collision time between Bogoliubov quasiparticles, the variance of the phase accumulated by the condensate grows with a ballistic term quadratic in time and a diffusive term affine in time. We give the corresponding analytical expressions in the limit of a large system, in the collisionless regime and in the ergodic approximation for the quasiparticle motion. When properly rescaled, they are described by universal functions of the temperature divided by the Thomas-Fermi chemical potential. The same conclusion holds for the mode damping rates. This universality class differs from the previously studied one of the homogeneous gas.
10.1016/j.crhy.2018.04.001
[ "https://arxiv.org/pdf/1803.03168v2.pdf" ]
73717508
1803.03168
0228c64ddddcad47a13a508ab207ddcf8d502820
Coherence time of a Bose-Einstein condensate in an isolated harmonically trapped gas

21 Nov 2018, arXiv:1803.03168v2 [cond-mat.quant-gas]

Yvan Castin, Alice Sinatra
Laboratoire Kastler Brossel, ENS-PSL, CNRS, Sorbonne Université and Collège de France, Paris, France

Keywords: Bose gases; Bose-Einstein condensate; temporal coherence; trapped gases; ultracold atoms

Abstract - We study the condensate phase dynamics in a low-temperature equilibrium gas of weakly interacting bosons, harmonically trapped and isolated from the environment. We find that at long times, much longer than the collision time between Bogoliubov quasiparticles, the variance of the phase accumulated by the condensate grows with a ballistic term quadratic in time and a diffusive term affine in time. We give the corresponding analytical expressions in the limit of a large system, in the collisionless regime and in the ergodic approximation for the quasiparticle motion. When properly rescaled, they are described by universal functions of the temperature divided by the Thomas-Fermi chemical potential. The same conclusion holds for the mode damping rates. This universality class differs from the previously studied one of the homogeneous gas.

1. Introduction and overview

We consider here an unsolved problem of the theory of quantum gases: the coherence time of a spinless boson gas in the weakly interacting regime, in a harmonic trap. The gas is prepared at thermal equilibrium at a temperature T much lower than the critical temperature T_c, that is in the strongly condensed regime, and it is perfectly isolated in its subsequent evolution. The coherence time of the bosonic field is then intrinsic and dominated by that of the condensate.
In view of recent technical developments [1,2,3], this question could soon receive an experimental answer in cold atomic gases confined in non-dissipative magnetic potentials [4,5,6], which, unlike other solid-state systems [7,8,9,10], are well decoupled from their environment and exhibit only weak particle losses. Our theoretical study is also relevant for future applications in atom optics and matter-wave interferometry.

Following the pioneering work of references [11,12,13], our previous studies [14,15,16,17], performed in a spatially homogeneous boson gas, rely on the Bogoliubov method, which reduces the system to a gas of weakly interacting quasiparticles. They identified two mechanisms limiting the coherence time, both involving the dynamics of the phase operator θ̂(t) of the condensate:

- Phase blurring: when the conserved quantities (the energy E of the gas and its number of particles N) fluctuate from one experimental realization to another, the mean rate of evolution of the phase, [θ̂(t) − θ̂(0)]/t, in one realization, being a function of these conserved quantities, fluctuates too. After averaging over realizations, this induces a ballistic spreading of the phase shift θ̂(t) − θ̂(0), that is a quadratic divergence of its variance, with a ballistic coefficient A [14]:

Var[θ̂(t) − θ̂(0)] ∼ A t²    (1)

This holds at times long compared with γ_coll^{-1}, where γ_coll is the typical collision rate between thermal Bogoliubov quasiparticles.

- Phase diffusion: even if the system is prepared in the microcanonical ensemble, where E and N are fixed, the interactions between quasiparticles cause their occupation numbers, and therefore the instantaneous velocity dθ̂/dt of the phase, which depends on them, to fluctuate. This induces a diffusive spreading of θ̂(t) − θ̂(0) at times t ≫ γ_coll^{-1}, with a coefficient D [15,16]:

Var_mc[θ̂(t) − θ̂(0)] ∼ 2 D t    (2)

In the general case, both mechanisms are present and the variance of the phase shift admits (1) as its dominant term and (2) as a subdominant term. The condensate phase spreading directly affects its first-order temporal coherence function

g₁(t) = ⟨â₀†(t) â₀(0)⟩    (3)

where â₀ is the annihilation operator of a boson in the condensate mode, through the approximate relation

g₁(t) ≃ e^{−i⟨θ̂(t)−θ̂(0)⟩} e^{−Var[θ̂(t)−θ̂(0)]/2}    (4)

admitted in reference [17] under the hypothesis of a Gaussian distribution of θ̂(t) − θ̂(0), then justified in reference [18] at sufficiently low temperature under fairly general assumptions.¹

We propose here to generalize these first studies to the experimentally more usual case of a harmonically trapped system (see however reference [19]). As the dependence of the damping rates of the Bogoliubov modes on energy or temperature is already very different from that of the homogeneous case, as shown in reference [20], the same will certainly hold for the spreading of the condensate phase. The trapped case is non-trivial, since the Bogoliubov modes are not known analytically and no local density approximation applies to the phase evolution (as verified by reference [21]). Fortunately, we have the possibility to consider:

- the classical limit for the motion of the Bogoliubov quasiparticles in the trapped gas.
Indeed, at the thermodynamic limit (N → +∞ at fixed Gross-Pitaevskii chemical potential µ_GP and fixed temperature), the trapping angular frequencies ω_α, α ∈ {x, y, z}, tend to zero as 1/N^{1/3}, so that

ℏω_α ≪ µ_GP, k_B T    (5)

and we can cleverly reinterpret the thermodynamic limit as a classical limit ℏ → 0;

- the limit of very weak interactions between Bogoliubov quasiparticles:

γ_coll ≪ ω_α    (6)

This implies that all the modes, even those of lowest angular frequency ≈ ω_α, are in the collisionless regime (as opposed to the hydrodynamic one), and it makes possible a secular approximation on the kinetic equations describing the collisions between quasiparticles;

- ergodicity in a completely anisotropic trap: as shown by references [22,23], the classical motion of quasiparticles in a nonisotropic harmonic trap with cylindrical symmetry is highly chaotic at energies ǫ ≈ µ_GP but almost integrable when ǫ → 0 or ǫ → +∞. In a completely anisotropic trap, at temperatures neither too small nor too large compared with µ_GP/k_B, we can hope to supplement the secular approximation with the hypothesis of ergodicity, which we will endeavor to justify.

Our article is organized as follows. In section 2, after a few reminders on Bogoliubov theory in a trap, we specify the state of the system and introduce the quantities that formally describe the phase spreading, namely the derivative of the condensate phase operator and its time correlation function. In section 3, we give an expression for the ballistic coefficient A at the thermodynamic limit in an arbitrary harmonic trap (including isotropic), first in the most general state of the system considered here, then in the simpler case of a statistical mixture of canonical ensembles with the same temperature T. In the long section 4, we tackle the heart of the problem, calculating the correlation function C_mc(τ) of dθ̂/dt in the microcanonical ensemble, which gives access in full generality to the sub-ballistic spreading terms of the phase, since these are independent of the state of the system at the thermodynamic limit with fixed mean energy and fixed mean number of particles. We first introduce the semiclassical limit in subsection 4.1, the motion of the Bogoliubov quasiparticles being treated classically while the bosonic quasiparticle field remains quantum; the semiclassical form of dθ̂/dt is deduced from a correspondence principle.

Footnote 1. Let us recall the assumptions used in reference [18] to establish equation (4). (i) The relative fluctuations of the modulus of â₀ are small, the system being strongly Bose-condensed. (ii) The system is close enough to the thermodynamic limit, with normal fluctuations and asymptotically Gaussian laws for the energy and the number of particles. This is used in particular to put the ballistic contribution of the phase shift to g₁(t) in the form (4). (iii) The phase diffusion coefficient (of order 1/N) must be much smaller than the typical collision rate γ_coll between Bogoliubov quasiparticles (of order N⁰) but much larger than the spacing of the energy levels (of order N^{-2}) of the quasiparticle pairs created or annihilated in Beliaev collision processes. This is used, for a system prepared in the microcanonical ensemble, to show that g₁(t) is of the form (4) on time intervals t = O(N⁰) and t = O(N¹), with the same diffusion coefficient. (iv) The correlation function of dθ̂/dt is real, as predicted by kinetic equations. (v) We neglect the commutator of θ̂(t) with θ̂(0), which introduces a phase error O(t/N) into the factor exp[−i⟨θ̂(t) − θ̂(0)⟩]. This error is of order unity at times t ≈ N, but g₁(t) has by then already started to decay under the effect of phase diffusion in the microcanonical ensemble (and is otherwise already very strongly damped by ballistic phase blurring after a time t ≈ N^{1/2}).
We then write, in subsection 4.2, kinetic equations for the quasiparticle occupation numbers in the classical phase space (r, p), and we show how, once linearized, they formally lead to C_mc(τ). The problem remains formidable, since the occupation numbers depend on the six variables (r, p) and on time. In the secular limit γ_coll ≪ ω_α and in the ergodic approximation for the quasiparticle motion (which excludes isotropic or cylindrically symmetric traps), we reduce in subsection 4.3 to occupation numbers that are functions only of the energy ǫ of the classical motion and of time, which leads to explicit results for C_mc(τ), for the phase diffusion and, as an interesting by-product, for the damping rates of the Bogoliubov modes in the trap, in subsection 4.4, where we also evaluate the condensate phase shift due to particle losses. Finally, we make a critical discussion of the ergodic approximation in subsection 4.5, estimating in particular the error it introduces on the quantities controlling the phase diffusion of the condensate. We conclude in section 5.

2. Summary of the formalism and results

The derivative of the phase - As recalled in the introduction, the coherence time of a condensate is controlled by the dynamics of its phase operator θ̂(t) at times long compared with the typical collision time γ_coll^{-1} of the quasiparticles. The starting point of our study is therefore the expression of the time derivative of θ̂(t), coarse-grained temporally (that is, averaged over a time scale short compared with γ_coll^{-1} but long compared with the typical inverse frequency ǫ_k^{th}/ℏ of the thermal quasiparticles). As established in all generality in reference [18], to first order in the non-condensed fraction:

−ℏ dθ̂/dt = µ₀(N̂) + Σ_{k∈F⁺} (dǫ_k/dN̂) n̂_k ≡ µ̂    (7)

Here µ₀(N) is the chemical potential of the gas in its ground state and N̂ is the operator for the total number of particles. The sum over the generic quantum number k (it is not a wavenumber) runs over the Bogoliubov modes of eigenenergy ǫ_k, and n̂_k is the operator giving the number of quasiparticles in mode k. Expression (7) is a quantum version of the second Josephson relation: its right-hand side is a chemical potential operator µ̂ of the gas, since it is the adiabatic derivative (at fixed occupation numbers n̂_k) with respect to N of the Bogoliubov Hamiltonian

Ĥ_Bog = E₀(N̂) + Σ_{k∈F⁺} ǫ_k n̂_k    (8)

The Bogoliubov modes belong to the F⁺ family, in the terminology of reference [24], in the sense that their modal functions (u_k(r), v_k(r)) are solutions of the eigenvalue equation

ǫ_k (u_k, v_k) = L(r̂, p̂) (u_k, v_k)    (9)

where L(r̂, p̂) is the 2×2 operator matrix with blocks

L = ( H_GP + Q g ρ₀(r) Q        Q g ρ₀(r) Q
      −Q g ρ₀(r) Q              −[H_GP + Q g ρ₀(r) Q] )

with the normalization condition ∫ d³r (|u_k(r)|² − |v_k(r)|²) = 1 > 0. We took the condensate wave function φ₀(r) real, normalized to unity (∫ d³r φ₀²(r) = 1), and written to zeroth order in the non-condensed fraction, that is in the Gross-Pitaevskii approximation:

H_GP φ₀ = 0 with H_GP = p̂²/2m + U(r) + g ρ₀(r) − µ_GP    (10)

so that, to this order, the condensed density is ρ₀(r) = N φ₀²(r). Here g = 4πℏ²a/m is the coupling constant, proportional to the s-wave scattering length a between bosons of mass m, and U(r) = Σ_α m ω_α² r_α²/2 is the trapping potential. The projector Q projects orthogonally to φ₀ and ensures that φ₀ ⊥ u_k and φ₀ ⊥ v_k, as it should be [24]. Since the condensate is in its ground mode (φ₀ minimizes the Gross-Pitaevskii energy functional), the ǫ_k are positive.

The state of the system - Evaporative cooling in cold-atom gases does not a priori lead to any of the usual ensembles of statistical physics.
To cover all reasonable cases, we therefore suppose that the gas is prepared at time 0 in a generalized ensemble, a statistical mixture of eigenstates |ψ_λ⟩ of the complete Hamiltonian Ĥ, with N_λ particles and energy E_λ, hence with density operator

σ̂ = Σ_λ Π_λ |ψ_λ⟩⟨ψ_λ|    (11)

with, as the only restriction, the existence of narrow distributions for E_λ and N_λ, whose variances and covariance grow no faster than the means Ē and N̄ at the thermodynamic limit.

Average phase shift - Let us average expression (7) in the stationary state |ψ_λ⟩. On the right-hand side appears the expectation value in |ψ_λ⟩ of the chemical potential operator. Because of the interactions between Bogoliubov quasiparticles, the N-body system is expected to be ergodic in the quantum sense of the term, that is, to obey the so-called Eigenstate Thermalization Hypothesis (see references [25,26,27]):

⟨ψ_λ|µ̂|ψ_λ⟩ = µ_mc(E_λ, N_λ)    (12)

where µ_mc(E, N) is the chemical potential in the microcanonical ensemble of energy E with N particles. For a large system, it suffices to expand to first order in the fluctuations, to obtain:

µ_mc(E_λ, N_λ) = µ_mc(Ē, N̄) + (E_λ − Ē) ∂_E µ_mc(Ē, N̄) + (N_λ − N̄) ∂_N µ_mc(Ē, N̄) + O(1/N)    (13)

It remains to average over the states |ψ_λ⟩ with the weights Π_λ, as in equation (11), to obtain the first brick of the temporal coherence function (4), namely the average phase shift:

⟨θ̂(t) − θ̂(0)⟩ = −µ_mc(Ē, N̄) t/ℏ    (14)

with an error O(1/N) on the coefficient of t.

Average quadratic phase shift - Proceeding in the same way for the second moment of the condensate phase shift, we find, as written implicitly in [16,18], that

Var[θ̂(t) − θ̂(0)] = A t² + 2 ∫₀ᵗ dτ (t − τ) Re C_mc(τ)    (15)

with the ballistic coefficient

A = Var[(N_λ − N̄) ∂_N µ_mc(Ē, N̄) + (E_λ − Ē) ∂_E µ_mc(Ē, N̄)]/ℏ²    (16)

and the correlation function of the phase derivative in the microcanonical ensemble of energy Ē and N̄ particles:

C_mc(τ) = ⟨(dθ̂/dt)(τ) (dθ̂/dt)(0)⟩_mc − ⟨dθ̂/dt⟩²_mc    (17)

This completes our formal knowledge of g₁(t). In view of future experimental observations, however, it remains to compute A and C_mc(τ) explicitly for a harmonically trapped system. In particular, one must verify that C_mc(τ) in the trapped case decays fast enough for a diffusive law (2) to emerge, as in the spatially homogeneous case.
In other words, µ mc (E can (T,N),N) ∼ µ can (T,N) (18) one just takes the derivative of this relation with respect to T orN to get the useful derivatives of µ mc , then one replaces E can byĒ, to obtain : ∂ E µ mc (Ē,N) ∼ ∂ T µ can (T,N) ∂ T E can (T,N)(19)∂ N µ mc (Ē,N) ∼ ∂ N µ can (T,N) − ∂ N E can (T,N) ∂ T E can (T,N) ∂ T µ can (T,N)(20) At the first order in the non-condensed fraction, the canonical chemical potential is deduced from the free energy F of the ideal gas of Bogoliubov quasiparticles of Hamiltonian (8) by the usual thermodynamic relation µ can = ∂ N F. The free energy is a simple functional of the density of states ρ(ǫ) of quasiparticles, F(T,N) = E 0 (N) + k B T +∞ 0 dǫ ρ(ǫ) ln 1 − e −βǫ(21) with β = 1/k B T . At the thermodynamic limit, the ground state energy E 0 of the gas in the harmonic trap is deduced from that of the homogeneous system [28] by a local density approximation, and the density of states ρ(ǫ) is obtained by taking the classical limit → 0, thanks to inequality (5) [6] : ρ(ǫ) = d 3 rd 3 p (2π ) 3 δ(ǫ − ǫ(r, p))(22) The classical Hamiltonian ǫ(r, p) is the positive eigenvalue of the Bogoliubov's 2 × 2 matrix of equation (9) with the position r and the momentum p treated classically 2 and the condensed density ρ 0 (r) written at the classical limit that is in the Thomas-Fermi approximation : gρ TF 0 (r) = µ TF − U(r) ≡ µ loc (r) if U(r) < µ TF 0 elsewhere(23) Here, the Thomas-Fermi chemical potential, classical limit of µ GP of Gross-Pitaevskii, is µ TF = 1 2 ω[15Na(mω/ ) 1/2 ] 2/5(24) andω = (ω x ω y ω z ) 1/3 is the geometric average of the trapping angular frequencies. We deduce that ǫ(r, p) =                p 2 2m p 2 2m + 2µ loc (r) 1/2 if U(r) < µ TF p 2 2m + U(r) − µ TF elsewhere(25) The six-fold integral (22) has been calculated in reference [29]. 
Here we give the result in a somewhat more compact form: ρ(ǫ) = µ 2 TF ( ω) 3 f (ǫ ≡ ǫ/µ TF ) (26) f (ǫ) = 1 π −2 √ 2ǫ 2 acosǫ − 1 (1 +ǫ 2 ) 1/2 + 2 √ 2ǫ ln 1 + √ 2ǫ +ǫ (1 +ǫ 2 ) 1/2 + √ǫ (5ǫ − 1) + (1 +ǫ) 2 acos 1 (1 +ǫ) 1/2 (27) We finally obtain the canonical chemical potential µ can (T,N) = µ 0 (N) + 6k B T 5N µ TF ω 3 +∞ 0 dǫ f (ǫ) ln 1 − e −βǫ + 2µ TF 5N µ TF ω 3 +∞ 0 dǫ f (ǫ)ǫ e βǫ − 1 (28) with the contribution of the ground state [6] µ 0 (N) = µ TF 1 + π 1/2 µ TF a 3 /g 1/2 (29) When taking the derivative of (28) with respect to T orN to evaluate expressions (19) and (20), one must remember thatβ = µ TF /k B T depends onN through µ TF . For brevity, we do not give the result here. In a slightly less general ensemble - A simpler expression 4 of the ballistic coefficient A can be obtained when the state of the system is a statistical mixture of canonical ensembles of the same temperature T but of variable number of particles. By expressing the various coefficients in (16,19,20) as derivatives of the free energy F(T,N) with respect toN and T , and remembering the expression Var can E = k B T 2 ∂ T E can of the variance of the energy in the canonical ensemble, we find to the dominant order 1/N that A(T ) = (Var N) ∂ N µ can (T,N) 2 + k B T 2 ∂ T µ can (T,N) 2 2 ∂ T E can (T,N) (30) At zero temperature, only the first term contributes, and we recover the prediction of references [30,31], pushed to order one in the non-condensed fraction f nc . At nonzero temperature but in the absence of fluctuations of N, only the second term contributes; it is none other than the ballistic coefficient A can (T ) in the canonical ensemble. 2. The projector Q, projecting on a space of codimension one, can be omitted at the thermodynamic limit. 3. The case of an anisotropic harmonic trap comes down to the isotropic case treated in [29] by performing the change of variable (with unit Jacobian) r α = λ α r ′ α , with ω α λ α =ω, such that U(r) = 1 2 mω 2 r ′2 . 
In the validity regime of the Bogoliubov approximation, f nc ≪ 1, the chemical potential µ can (T,N) of the gas remains close to that of a Thomas-Fermi pure condensate, so that ∂ N µ can (T,N) = ∂ N µ TF + O f nc N(31) ∂ T µ can (T,N) is immediately first-order in f nc , and the same goes for the second term in equation (30). It is therefore only for strongly subpoissonian fluctuations of N (Var N ≪ Var Pois N ≡N) that the second term of (30), that is the effect of thermal fluctuations, is not dominated by the first one. Assuming this condition satisfied in the experiment, we represent in figure 1 the canonical coefficient A can (T ) scaled by the A Pois value of A in a pure condensate with Poissonian fluctuations of N, A Pois =N ∂ N µ TF 2(32) all divided by the small parameter of the Bogoliubov theory at zero temperature, 5 proportional to f nc (T = 0) : [ρ 0 (0)a 3 ] 1/2 = 2 √ 2 15π 1/2N µ TF ω 3(33) The ratio thus formed is a universal function of k B T/µ TF . From the low and high energy expansions of the quasiparticle density of states, f (ǫ) = ǫ→0 32 3πǫ 3/2 − 2 √ 2ǫ 2 + O(ǫ 5/2 ) (34) f (ǫ) = ǫ→+∞ 1 2ǫ 2 +ǫ + 1 2 + O(ǫ −1/2 )(35) 4. The general expression (16) of A is a little tricky to grasp. Since the energy of the ground state depends on N, fluctuations of N mechanically cause energy fluctuations. For example, if N fluctuates at T = 0 (in each subspace of fixed N, the system is in the ground state), we can, to find A(T = 0) of equation (30) from equation (16), use the fact that E λ −Ē = (N λ −N)µ 0 (N) + O(N 0 ) and that ∂ E µ mc (Ē,N) ∼ T →0 −2/(25N), whose report in (20) gives ∂ N µ mc (Ē,N) ∼ T →0 ∂ N µ 0 (N) + 2µ 0 (N)/(25N). 5. One sometimes prefers to take as a small parameter 1/[ρ 0 (0)ξ 3 ], where the healing length ξ of the condensate at the center of the trap is such that 2 /(mξ 2 ) = µ TF . One can easily go from one small parameter to the other using the relation [ρ 0 (0)a 3 ] 1/2 ρ 0 (0)ξ 3 = 1/(8π 3/2 ). 
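The identity of footnote 5 relating the two small parameters, [ρ 0 (0)a³]^{1/2} ρ 0 (0)ξ³ = 1/(8π^{3/2}), follows from ħ²/(mξ²) = µ TF = gρ 0 (0) with g = 4πħ²a/m, and can be checked numerically; the density, scattering length and mass below are arbitrary test inputs (the identity is independent of them).

```python
import math

# Check of footnote 5: sqrt(rho0 a^3) * rho0 * xi^3 = 1/(8 pi^(3/2)),
# where the healing length xi obeys hbar^2/(m xi^2) = mu_TF = g rho0
# and g = 4 pi hbar^2 a / m.  Input values are arbitrary test numbers.
hbar = 1.054571817e-34               # J s
m = 87 * 1.66053906660e-27           # kg (e.g. a rubidium-87-like mass)
a = 5.31e-9                          # scattering length, m
rho0 = 1.0e20                        # condensate density, m^-3

g = 4 * math.pi * hbar**2 * a / m
mu_TF = g * rho0
xi = hbar / math.sqrt(m * mu_TF)     # healing length at the trap center

lhs = math.sqrt(rho0 * a**3) * rho0 * xi**3
rhs = 1.0 / (8 * math.pi**1.5)
print(lhs, rhs)                      # equal, whatever rho0, a, m
```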
Figure 1: Ballistic coefficient A can (T ) entering the growth (1) of the variance of the condensate phase in the long time limit with respect to the collision time γ −1 coll of quasiparticles, for a gas ofN bosons prepared in the canonical ensemble in a harmonic trap (isotropic or not), as a function of temperature. The result holds at the thermodynamic limit, where the trapping angular frequencies ω α are negligible compared to the Thomas-Fermi chemical potential µ TF (24). Full line: second term of equation (30), deduced from the canonical chemical potential (28) in the Bogoliubov approximation (weak interactions, T ≪ T c ). Dashes: equivalents at low and high temperature (dominant terms of equations (36,37)). The division of A can (T ) by the small parameter (33) of Bogoliubov's theory and by the value (32) of the ballistic coefficient for Poissonian fluctuations of N leads to a universal function of k B T/µ TF . we obtain the low and high temperature expansions (Ť = k B T/µ TF = 1/β) A can (T ) A Pois [ρ 0 (0)a 3 ] 1/2 = T →0 21ζ(7/2) √ 2Ť 9/2 1 + 4 √ 2π 9/2 525ζ(7/2)Ť 1/2 + O(Ť ) (36) = T →+∞ 15π 1/2 2 √ 2 3ζ(3) 2 4ζ(4)Ť 3 1 +β 4ζ(2) 3ζ(3) − ζ(3) 2ζ(4) + O(β 3/2 ) (37) whose dominant terms 6 are shown as dashed lines in figure 1. Let us note a particularly simple and elegant rewriting of the high temperature equivalent, which happens to be accurate already for k B T/µ TF ≥ 2: A can (T ) A Pois ∼ k B T ≫µ TF 3ζ(3) 4ζ(4) T T (0) c 3 (38) where T (0) c is the critical temperature of an ideal gas of bosons in a harmonic trap at the thermodynamic limit, k B T (0) c = ω[N/ζ(3)] 1/3 . In this limit, A can (T ) is therefore lower than A Pois by a factor proportional to the non-condensed fraction (T/T (0) c ) 3 ≪ 1. 
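The numerical prefactor of the high-temperature equivalent (38) involves only Riemann zeta values and is easy to evaluate (ζ(4) = π⁴/90 exactly, ζ(3) by a truncated series); the sample temperature T = 0.1 T c (0) below is an illustrative choice, not a value from the paper.

```python
import math

# Prefactor of eq. (38): A_can/A_Pois ~ [3 zeta(3) / 4 zeta(4)] (T/Tc0)^3.
# zeta(3) is summed directly with an integral tail correction ~1/(2N^2);
# zeta(4) = pi^4/90 is exact.
N = 200_000
zeta3 = sum(1.0 / n**3 for n in range(1, N + 1)) + 1.0 / (2 * N**2)
zeta4 = math.pi**4 / 90.0
prefactor = 3 * zeta3 / (4 * zeta4)
print(prefactor)                      # ~ 0.833

# Illustration: at T = 0.1 Tc0 the canonical coefficient is ~1e-3 of
# A_Pois, i.e. thermal fluctuations blur the phase far more slowly than
# Poissonian atom-number fluctuations would.
print(prefactor * 0.1**3)
```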
Variance of the condensate phase shift in the microcanonical ensemble Here we calculate the correlation function of dθ/dt, namely C mc (τ), for a system prepared in the microcanonical ensemble, using at the thermodynamic limit ħω α /µ TF → 0 a semiclassical description of the quasiparticles and taking into account the effect of their interaction through quantum Boltzmann kinetic equations on their classical phase space distribution n(r, p). Semi-classical form of Bogoliubov's Hamiltonian and dθ/dt In the semiclassical description, the motion of Bogoliubov quasiparticles is treated classically, that is, they have at each moment a well defined position r and momentum p [6], whose evolution in phase space derives from the Hamiltonian ǫ(r, p) given in equation (25) [22], dr dt = ∂ p ǫ(r, p) (39) dp dt = −∂ r ǫ(r, p) (40) but we treat in a quantum way the bosonic field of quasiparticles by introducing their occupation number operatorsn(r, p) in phase space, which allows us to take into account the discrete nature of the numbers of quasiparticles and the quantum statistical effects (with the Bose law rather than the equipartition law of the classical field at equilibrium). In this semiclassical limit, the Bogoliubov Hamiltonian (8) (without interaction between quasiparticles) is written immediately H sc Bog = E 0 (N) + d 3 r d 3 p (2π ) 3 ǫ(r, p)n(r, p) (41) One might think, given formula (7), that dθ/dt admits a similar writing, with ǫ(r, p) replaced by d dN ǫ(r, p). This is not so, the reason being that the derivative d dN ǫ(r, p) is not constant on the classical trajectory. 6. In the value window of figure 1, in practice 1/10 ≤Ť ≤ 10, the inclusion of subdominant terms does not usefully approximate the exact result. 
The dθ/dt operator is part of a general class of so-called Fock quantum observables (diagonal in the Fock basis of quasiparticles thus -here linear -functionals of Bogoliubov's occupation numbers) : A = k∈F + a knk with a k = ( u k |, v k |) A(r,p) |u k |v k(42) where A(r,p) is a 2 × 2 hermitian matrix operator and a k its average in the Bogoliubov mode of eigenenergy ǫ k . The observable dθ/dt corresponds to the choice A˙θ = σ z d dN L where σ z is the third Pauli matrix and L(r,p) is the operator appearing in equation (9). By using Hellmann-Feynman's theorem 7 , we have indeed ( u k |, − v k |) d dN L |u k |v k = dǫ k dN(43) For these Fock operators we use the semiclassical correspondence principlê A sc = d 3 r d 3 p (2π ) 3 a(r, p)n(r, p)(44) where a(r, p) = (U(r, p), V(r, p)) A(r, p) U(r,p) V(r,p) , A(r, p) being the classical equivalent of A(r,p), and a(r, p) represents the time average of a(r, p) on the only classical trajectory passing through (r, p) at time t = 0 : a(r, p) ≡ lim t→+∞ 1 t t 0 dτ a(r(τ), p(τ))(45) The vector (U(r, p), V(r, p)), normalized according to the condition U 2 − V 2 = 1, is eigenvector of the classical equivalent L(r, p) of L(r,p) with eigenvalue ǫ(r, p) : U(r, p) V(r, p) =                                                               1 2        p 2 /2m ǫ(r, p) 1/2 + p 2 /2m ǫ(r, p) −1/2        1 2        p 2 /2m ǫ(r, p) 1/2 − p 2 /2m ǫ(r, p) −1/2                               if U(r) < µ TF 1 0 elsewhere(46) At the basis of this correspondence principle lies the idea that the equivalent of a stationary quantum mode (|u k , |v k ) in the classical world is a classical trajectory of the same energy, itself also stationary as a whole by temporal evolution. To the quantum expectation a k of the observable A(r,p) in the mode (|u k , |v k ) thus we must associate an average 7. 
The theorem is here generalized to the case of a non-Hermitian operator L, ( u k |, − v k |) being the dual vector of the eigenvector (|u k , |v k ) of L. [Figure 2: the Beliaev direct and inverse and the Landau direct and inverse three-quasiparticle processes, of reduced amplitudes 2gρ 1/2 0 (r) A p q,|p−q| (r) and 2gρ 1/2 0 (r) A |p+q| p,q (r) respectively.] over a trajectory of the expectation a(r, p) of the classical equivalent A(r, p) in the local mode (U(r, p), V(r, p)). We therefore retain for the semiclassical version of the derivative of the condensate phase operator: − dθ sc dt = µ 0 (N) + d 3 r d 3 p (2π ) 3 dǫ(r, p) dN n (r, p) (47) Here, let us repeat it, the expectation a(r, p) = dǫ(r,p) dN is not a constant of motion, unlike ǫ(r, p), so we cannot omit the temporal average as in (41). About the usefulness of kinetic equations in calculating the correlation function of dθ/dt We must determine, in the semiclassical limit, the correlation function of dθ/dt for a system prepared in the microcanonical ensemble. Given equations (17) and (47), we must calculate C sc mc (τ) = d 3 r d 3 p (2π ) 3 d 3 r ′ d 3 p ′ (2π ) 3 dǫ(r, p) dN dǫ(r ′ , p ′ ) dN δn(r, p, τ) δn(r ′ , p ′ , 0) (48) where . . . represents the mean in the state of the system and where we introduced the fluctuations of the occupation number operators in phase space at time τ, δn(r, p, τ) =n(r, p, τ) −n(r, p). The microcanonical ensemble can be seen in the semiclassical phase space as a constant-energy statistical mixture of Fock states |F = |n(r ′′ , p ′′ ) (r ′′ ,p ′′ )∈R 6 , eigenstates of H sc Bog , where all n(r ′′ , p ′′ ) are integers. 
Assume first that the system is prepared in one such Fock state |F at the initial time t = 0, eigenstate of δn(r ′ , p ′ , 0) with the eigenvalue n(r ′ , p ′ ) −n(r ′ , p ′ ) ; it then remains to calculate in equation (48) the quantity F |δn(r, p, τ)|F = n(r, p, τ) −n(r, p) ≡ δn(r, p, τ) at τ > 0, that is, the evolution of the mean occupation numbers n(r, p, τ) in phase space, their initial values being known, taking into account (i) the Hamiltonian quasiparticle transport and (ii) the effect of quasiparticle collisions by the Beliaev or Landau three-quasiparticle processes 8 represented in figure 2. This is exactly what the usual Boltzmann-type quantum kinetic equations can do, with the difference that the semiclassical distribution function n(r, p, τ) does not correspond here to a local thermal equilibrium state of the system, but to the mean occupation number at time τ knowing that the initial state of the system is a quasiparticle Fock state. The evolution equation of the average occupation numbers n(r, p, τ) is of the form D Dτ n(r, p, τ) + I coll (r, p, τ) = 0 (51) The first term is the convective derivative resulting from the classical Hamilton equations: D Dτ = ∂ τ + ∂ p ǫ(r, p) · ∂ r − ∂ r ǫ(r, p) · ∂ p (52) It preserves the density in phase space along a classical trajectory (Liouville's theorem). 8. Four-quasiparticle processes, of higher order in the non-condensed fraction, are assumed negligible here. 
The second term describes the effect of collisions between quasiparticles, local in position space, and which can only occur, at the order of Beliaev-Landau, at points where the Thomas-Fermi density of the condensate ρ 0 (r) is nonzero (see the diagrams in figure 2) : 9 I coll (r, p, τ) = 1 2 d 3 q (2π ) 3 2π 2gρ 1/2 0 (r) A p q,|p−q| (r) 2 δ (ǫ(r, q) + ǫ(r, p − q) − ǫ(r, p)) × {−n(r, p, τ)[1 + n(r, q, τ)][1 + n(r, p − q, τ)] + n(r, q, τ)n(r, p − q, τ)[1 + n(r, p, τ)]} + d 3 q (2π ) 3 2π 2gρ 1/2 0 (r) A |p+q| p,q (r) 2 δ (ǫ(r, p) + ǫ(r, q) − ǫ(r, p + q)) × {−n(r, p, τ)n(r, q, τ)[1 + n(r, p + q, τ)] + n(r, p + q, τ)[1 + n(r, p, τ)][1 + n(r, q, τ)]} (53) In this process are involved, at point r, a quasiparticle of momentum p (whose evolution of the average occupation number n(r, p τ) has to be determined), a second outgoing or incoming quasiparticle of momentum q on which it is necessary to integrate, and a third quasiparticle whose momentum is fixed by momentum conservation. In equation (53) the first integral takes into account the Beliaev processes ; it shows a factor of 1/2 to avoid double counting of the final or initial two quasiparticle states (q, p − q) and (p − q, q) ; the second integral takes into account the Landau processes. Note in both cases : (i) the factor 2π , from Fermi's golden rule, (ii) the − sign taking direct processes into account (they depopulate the p mode at point r) and the + sign for inverse processes, with the bosonic amplification factors 1 + n, (iii) the presence of an energy conservation Dirac function at r. The reduced coupling amplitudes for three-quasiparticles processes at point r are given by [14,32] A p 1 p 2 ,p 3 (r) = s 2 (r, p 2 ) + s 2 (r, p 3 ) − s 2 (r, p 1 ) 4s(r, p 1 )s(r, p 2 )s(r, p 3 ) + 3 4 s(r, p 1 )s(r, p 2 )s(r, p 3 )(54) with s(r, p) = U(r, p) + V(r, p). 
As expected, the kinetic equations admit the average thermal equilibrium occupation numbers as a stationary solution 10n (r, p) = 1 e βǫ(r,p) − 1 (55) The well-known property of the Bose law, 1 +n = e βǫn , makes it easy to verify: supplemented with energy conservation, it leads to the perfect compensation at all points of direct and inverse processes, that is, to the cancellation of the quantities between curly brackets in equation (53), following the principle of microreversibility; we also have D Dτn = 0 sincen(r, p) is a function of ǫ(r, p), a quantity that is conserved by the Hamiltonian transport. As our system fluctuates weakly around the equilibrium, we linearise the kinetic equations around n =n as in reference [16] to get D Dτ δn(r, p, τ) = −Γ(r, p)δn(r, p, τ) + d 3 q (2π ) 3 K(r, p, q)δn(r, q, τ) (56) The diagonal term comes from the fluctuation δn(r, p, τ) in the right-hand side of equation (53), and the non-local momentum term comes from the fluctuations δn(r, q, τ) and δn(r, p ± q, τ), whose contributions are collected by changing the variables q ′ = p ± q in d 3 q. The expression of K(r, p, q) is not useful for the following, so let us only give the expression of the local damping rate of the Bogoliubov quasiparticles of momentum p at point r: Γ(r, p) = 4πρ 0 (r)g 2 d 3 q (2π ) 3 A p q,|p−q| (r) 2 δ (ǫ(r, q) + ǫ(r, p − q) − ǫ(r, p)) 1 +n(r, q) +n(r, p − q) + 8πρ 0 (r)g 2 d 3 q (2π ) 3 A |p+q| p,q (r) 2 δ (ǫ(r, p) + ǫ(r, q) − ǫ(r, p + q)) n(r, q) −n(r, p + q) (57) This expression coincides with the damping rate of a momentum p mode in a spatially homogeneous condensed gas of density gρ 0 (r) [32]. Just like δn(r, p, τ), the quantity F |δn(r, p, τ)δn(r ′ , p ′ , 0)|F , considered as a function of (r, p, τ), obeys equation (56); the same is true for its average δn(r, p, τ)δn(r ′ , p ′ , 0) over all initial Fock states |F , since the coefficients Γ and K do not depend on |F . 
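The microreversibility cancellation that makes the Bose law (55) stationary can be checked in a few lines: with n(ǫ) = 1/(e^{βǫ} − 1), which satisfies 1 + n = e^{βǫ} n, the curly bracket of the Beliaev term in the collision integral (53) vanishes whenever the Dirac energy conservation is enforced. The energies and β below are toy values in arbitrary units.

```python
import math

# Detailed-balance check below eq. (55): for an energy-conserving
# Beliaev triple eps_p = eps_q + eps_{p-q}, the bracket
# -n_p(1+n_q)(1+n_{p-q}) + n_q n_{p-q}(1+n_p) of eq. (53) vanishes
# when n is the Bose law.  Toy energies, arbitrary units.
beta = 1.3
n = lambda e: 1.0 / math.expm1(beta * e)   # Bose occupation number

eps_q, eps_pq = 0.7, 0.5
eps_p = eps_q + eps_pq                     # energy conservation at point r

bracket = (-n(eps_p) * (1 + n(eps_q)) * (1 + n(eps_pq))
           + n(eps_q) * n(eps_pq) * (1 + n(eps_p)))
print(bracket)                             # -> 0: direct and inverse balance
```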
Let's contract the latter by the quantity B(r ′ , p ′ ) ≡ 1 dǫ(r ′ , p ′ ) dN (58) as in equation (48) to form the auxiliary unknown X(r, p, τ) = d 3 r ′ d 3 p ′ (2π ) 3 B(r ′ , p ′ ) δn(r, p, τ)δn(r ′ , p ′ , 0)(59) Then X(r, p, τ) evolves according to the linear kinetic equations (56) with the initial condition X(r, p, 0) = d 3 r ′ d 3 p ′ (2π ) 3 Q(r, p; r ′ , p ′ ) B(r ′ , p ′ )(60) where the matrix of covariances at equal times of the number of quasiparticles has been introduced : Q(r, p; r ′ , p ′ ) = δn(r, p, 0)δn(r ′ , p ′ , 0)(61) whose expression in the microcanonical ensemble will be connected to that in the canonical ensemble in due time, in sub-section 4.3. The sought microcanonical correlation function of dθ sc /dt is then C sc mc (τ) = d 3 rd 3 p (2π ) 3 B(r, p)X(r, p, τ)(62) Solution in the secular-ergodic approximation Our study restricts to the collisionless regime Γ th ≪ ω α where Γ th is the typical thermal value of the quasiparticles damping rate Γ(r, p) and ω α are the trapping angular frequencies. The quasiparticles then have time to perform a large number of Hamiltonian oscillations in the trap before undergoing a collision. We can therefore perform the secular approximation that consists in replacing the coefficients of the linearized kinetic equation (56) by their temporal average over a trajectory. Thus, Γ(r, p) secular → approx. Γ(r, p) = lim t→+∞ 1 t t 0 dτ Γ(r(τ), p(τ))(63) the auxiliary unknown X(r, p, τ) of equation (59), just like the fluctuations of the occupation numbers δn(r, p, t), depend only on the trajectory τ → (r(τ), p(τ)) passing through (r, p) and on time. Still the problem remains formidable. Fortunately, as we have said, in a completely anisotropic trap, the Hamiltonian dynamics of quasiparticles should be highly chaotic, except within the limits of very low energy ǫ ≪ µ TF or very high energy ǫ ≫ µ TF [22,23]. 
We thus use the ergodic hypothesis, by identifying the temporal average on an trajectory of energy ǫ to the "uniform" mean in the phase space on the energy shell ǫ : Γ(r, p) ergodic = hypothesis Γ(ǫ) = Γ(r, p) ǫ ≡ 1 ρ(ǫ) d 3 rd 3 p (2π ) 3 Γ(r, p)δ(ǫ − ǫ(r, p))(64) where the density of states ρ(ǫ) is given by equation (22). We will come back to this hypothesis in section 4.5. In this case, the function X(r, p, τ) depends only on the energy ǫ = ǫ(r, p) and time : X(r, p, τ) ergodic = hypothesis X(ǫ, τ)(65) We obtain the evolution equation of X(ǫ, τ) by averaging that of X(r, p, τ) on the energy shell ǫ : 11 ∂ τ X(ǫ, τ) = −Γ(ǫ)X(ǫ, τ) − 1 2ρ(ǫ) ǫ 0 dǫ ′ L(ǫ − ǫ ′ , ǫ ′ ){X(ǫ ′ , τ)[n(ǫ) −n(ǫ − ǫ ′ )] + X(ǫ − ǫ ′ , τ)[n(ǫ) −n(ǫ ′ )]} − 1 ρ(ǫ) +∞ 0 dǫ ′ L(ǫ, ǫ ′ ){X(ǫ ′ , τ)[n(ǫ) −n(ǫ + ǫ ′ )] − X(ǫ + ǫ ′ , τ)[1 +n(ǫ) +n(ǫ ′ )]} (66) with Γ(ǫ) = 1 2ρ(ǫ) ǫ 0 dǫ ′ L(ǫ − ǫ ′ , ǫ ′ )[1 +n(ǫ ′ ) +n(ǫ − ǫ ′ )] + 1 ρ(ǫ) +∞ 0 dǫ ′ L(ǫ, ǫ ′ )[n(ǫ ′ ) −n(ǫ + ǫ ′ )](67) In these expressions, the first integral, limited to energies ǫ ′ lower than the energy of the quasiparticle ǫ considered, corresponds to Beliaev processes, and the second integral to Landau processes. The integral kernel 12 L(ǫ, ǫ ′ ) = d 3 r d 3 p d 3 q (2π ) 6 8πg 2 ρ 0 (r) A ǫ+ǫ ′ ǫ,ǫ ′ (r) 2 δ(ǫ − ǫ(r, p))δ(ǫ ′ − ǫ(r, q))δ(ǫ + ǫ ′ − ǫ(r, p + q)) (68) = 32 √ 2 π 1/2 [ρ 0 (0)a 3 ] 1/2 µ TF µ TF ω 3 µ TF 0 µ 0 dµ 0 (µ TF − µ 0 ) 1/2 ǫǫ ′ (ǫ + ǫ ′ ) A ǫ+ǫ ′ ǫ,ǫ ′ (µ 0 ) 2 µ 5/2 TF (ǫ 2 + µ 2 0 ) 1/2 (ǫ ′2 + µ 2 0 ) 1/2 [(ǫ + ǫ ′ ) 2 + µ 2 0 ] 1/2 (69) uses the reduced coupling amplitude (54) at point r, reparametrized in terms of energies ǫ i = ǫ(r, p i ) (1 ≤ i ≤ 3) or even in terms of the local Gross-Pitaevskii chemical potential µ 0 = gρ 0 (r). It has the symmetry property L(ǫ, ǫ ′ ) = L(ǫ ′ , ǫ). We write the result before giving some indications on its obtention (one will also consult reference [16]). 
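As an aside, the ergodic identification (64) of a time average with an energy-shell average can be illustrated in a solvable toy case where it holds exactly: the 1D harmonic oscillator H = (p² + x²)/2 (m = ω = 1), with the non-conserved observable x². This is only a cartoon of the hypothesis, not the chaotic 3D trapped dynamics the paper relies on.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (avoids version-dependent numpy aliases)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) * 0.5)

rng = np.random.default_rng(1)
E = 2.0

# Shell average, cf. (64): on the circle x^2 + p^2 = 2E the measure
# delta(E - H) dx dp is uniform in the angle (|grad H| is constant).
theta = rng.uniform(0.0, 2 * np.pi, 1_000_000)
shell_avg = np.mean(2 * E * np.cos(theta) ** 2)

# Time average along the trajectory x(t) = sqrt(2E) cos(t), cf. (45).
t = np.linspace(0.0, 200 * np.pi, 1_000_001)     # whole number of periods
time_avg = trapz(2 * E * np.cos(t) ** 2, t) / t[-1]

print(shell_avg, time_avg)                        # both ~ E
```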
In the secularo-ergodic approximation, the microcanonical correlation function of dθ sc /dt is C ergo mc (τ) = +∞ 0 dǫ ρ(ǫ)B(ǫ)X(ǫ, τ)(70) Here B(ǫ) is the ergodic average of the quantity B(r, p) introduced in equation (58) : B(ǫ) = 1 ρ(ǫ) d 3 r d 3 p (2π ) 3 dǫ(r, p) dN δ(ǫ − ǫ(r, p)) (71) = dµ TF /dN π f (ǫ) 2ǫ 1/2 (ǫ + 1) − √ 2(ǫ 2 + 1) argsh (2ǫ) 1/2 (1 +ǫ 2 ) 1/2 −ǫ 1/2 (ǫ − 1) − (1 +ǫ) 2 acos 1 (1 +ǫ) 1/2 (72) B(ǫ) = ǫ→0 dµ TF dN −ǫ 5 − 3π 40 √ 2ǫ 3/2 + O(ǫ 2 ) , B(ǫ) = ǫ→+∞ dµ TF dN −1 + 32 3πǫ −3/2 + O(ǫ −5/2 )(73) withǫ = ǫ/µ TF and f (ǫ) the reduced density of states (27). The auxiliary unknown X(ǫ, τ) is a solution of the linear equation (66) with the initial condition X(ǫ, 0) =n(ǫ)[1 +n(ǫ)][B(ǫ) − Λǫ](74) where Λ is the derivative of the microcanonical chemical potential with respect to to the total energy E of the gas 13 , as in equation (19) : Λ = +∞ 0 dǫ ρ(ǫ)ǫB(ǫ)n(ǫ)[1 +n(ǫ)] +∞ 0 dǫ ρ(ǫ)ǫ 2n (ǫ)[1 +n(ǫ)](75) The equation (70) is the ergodic rewriting of equation (62). The initial condition (74) is the difference of two contributions : 11. The simplest is to average the complete kinetic equations (51), then linearize the result around the stationary solution (55). 12. To get (69), we reduced equation (68) to a single integral on the r modulus (after having formally reduced to the case of an isotropic trap as in note 3) by integrating in spherical coordinates on p, q and u, the cosine of the angle between p and q. In 2m ( p 2 2m + 2µ 0 )] 1/2 , ∀µ 0 ≥ 0. 13. The deep reason for the appearance of this derivative is given in reference [16]. It explains why the kinetic equations allow to find in the canonical ensemble the ballistic term At 2 of equation (15) with the correct expression of the coefficient A = (∂ E µ mc / ) 2 Var E. -the first is the one that one would obtain in the canonical ensemble. 
The ergodic mean of the covariance matrix (61) would then be simply Q can (ǫ, ǫ ′ ) =n(ǫ)[1 +n(ǫ)]δ(ǫ − ǫ ′ )/ρ(ǫ) ; -the second comes from a projection of δn canonical fluctuations on the subspace of the δn fluctuations of zero energy, +∞ 0 dǫ ρ(ǫ)ǫδn(ǫ) = 0, the only one eligible in the microcanonical ensemble. Only subtle point, this projection must be carried out parallel to the stationary solution e 0 (ǫ) = ǫn(ǫ)[1 +n(ǫ)] of the linearized kinetic equations (66). 14 We then check that, for the value of Λ given, X(ǫ, 0) is in the subspace of zero energy fluctuations. Results and discussion We present some results in graphic form, after a clever scaling making them independent of the trapping angular frequencies (provided they are quite distinct two by two to allow the ergodic hypothesis) and of the strength of the interactions 15 ; it is enough to know the temperature in units of Thomas-Fermi's chemical potential µ TF . These results illustrate the universality class of the completely anisotropic harmonic traps, different from that of the spatially homogeneous systems of reference [16]. An interesting by-product of our study is shown in figure 3 : it is the damping rate Γ(ǫ) in the secularo-ergodic approximation of the Bogoliubov modes of energy ǫ. In a cold atom experiment it is possible to excite such modes and to follow their decay in time. The rate we predict is then measurable and it can be compared to the experiments, at least in its validity regime of classical motion ǫ ≫ ω α (deviations from the ergodic hypothesis are discussed in section 4.5). The limiting behaviors Γ(ǫ) ∼ ǫ→0 3I 4 ǫ µ TF 1/2 k B T [ρ 0 (0)a 3 ] 1/2 with I = 4.921 208 . . . (76) Γ(ǫ) ∼ ǫ→+∞ 128 √ 2 15 √ π µ 2 TF ǫ [ρ 0 (0)a 3 ] 1/2 (77) shown in dashed lines in figure 3, 16 are derived in Appendix A. They are very different from the spatially homogeneous case, where the damping rate vanishes linearly in ǫ at low energy and diverges as ǫ 1/2 at high energy. 
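Going back to the initial condition (74)-(75): with Λ defined by (75), X(ǫ, 0) lies in the zero-energy subspace, i.e. ∫dǫ ρ(ǫ) ǫ X(ǫ, 0) = 0, by construction. The sketch below checks this with placeholder functions ρ(ǫ) and B(ǫ) chosen only to have the right low-energy behaviors (cf. (34) and (73)); they are not the paper's exact expressions (27) and (72).

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (avoids version-dependent numpy aliases)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) * 0.5)

eps = np.linspace(1e-6, 60.0, 400_000)
rho = eps**1.5 + 0.5 * eps**2            # toy density of states
B = -(eps / 5.0 + 0.3 * eps**1.5)        # toy B(eps), cf. expansion (73)
n = 1.0 / np.expm1(eps)                  # Bose law, beta = 1
w = rho * n * (1.0 + n)

Lam = trapz(w * eps * B, eps) / trapz(w * eps**2, eps)   # eq. (75)
X0 = n * (1.0 + n) * (B - Lam * eps)                     # eq. (74)
resid = trapz(rho * eps * X0, eps)
print(resid)                             # -> 0: zero-energy fluctuation
```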
In particular, the behavior (76) in ǫ 1/2 results from the existence of the Thomas-Fermi edge of the condensate. Let's go back to the condensate phase spreading in the microcanonical ensemble. In figure 4, we represent in black solid line the variance of the phase shiftθ(t)−θ(0) of the condensate as a function of time t in the ergodic approximation (70) at the temperatures T = µ TF /k B and T = 10µ TF /k B . The variance has a parabolic departure in time, which corresponds to the precollisional regime t ≪ t coll , where t coll is the typical collision time between quasiparticles : we can then assume that C mc (τ) ≃ C mc (0), so that the integral contribution to equation (15) is ≃ C mc (0)t 2 . At long times, t ≫ t coll , the correlation function of dθ/dt seems to quickly reach zero (red solid line) ; a more detailed numerical study (see the inset in figure 4 b) reveals however the presence of a power law tail t −α , C mc (t) ∼ t→+∞ C t 5(78) the exponent α = 5 is greater than the one α h = 3 of the decay law of C mc (t) in the spatially homogeneous case [16]. Its value can be found by a rough heuristic approximation, called rate approximation or projected Gaussian [15], already used for α h with success in this same reference [16] : we keep in the linearized kinetic equations (66) only 14. For this projection to be compatible with linearized kinetic evolution, it is necessary that the projection direction and the hyperplane on which we project be invariant by temporal evolution, the second point being ensured by conservation of energy. The form of e 0 (ǫ) derives from the fact that (55) remains a stationary solution for an infinitesimal variation of β, β → β + δβ, around its physical value. 15. In a first step, we show that the results can depend on the trapping frequencies ω α only through their geometric meanω. 
This is a direct consequence of the ergodic hypothesis and the fact that the observables involved here, including the Hamiltonian, depend only on the position r of the quasiparticles via the trapping potential U(r) = 1 2 m α ω 2 α r 2 α . In the integral d 3 r participating in the ergodic mean, one can then perform the isotropising change of variables of note 3. 16. For k B T = µ TF , Γ(ǫ)/ǫ 1/2 has a deceptive maximum in the neighborhood of ǫ/µ TF = 0.02 of about 5% above its limit in ǫ = 0. k B T = 10µ TF , where µ TF is the Thomas-Fermi's chemical potential of the condensate. Thanks to the chosen units, the curve is universal ; in particular, it does not depend on the trapping angular frequencies ω α . The Bogoliubov modes considered must be in the classical motion regime ǫ ≫ ω α and the system must be in the regime of an almost pure condensate, [ρ 0 (0)a 3 ] 1/2 ≪ 1 and T ≪ T c , where ρ 0 (0) = µ TF /g is the density of the condensate in the center of the trap and T c the critical temperature. In dashed lines, the equivalents (76) and (77) of Γ(ǫ) at low and high energy. Figure 4: In the conditions of figure 3, for a system prepared at t = 0 in the microcanonical ensemble at temperature (a) k B T = µ TF or (b) k B T = 10µ TF , and isolated from its environment in its subsequent evolution, variance of the condensate phase shiftθ(t) −θ(0) as a function of time t (solid line black) and its asymptotic diffusive behavior (80) (dashed). The correlation function C mc (t) of dθ/dt is shown on the same figure in the secular-ergodic approximation (70) as a function of time (red solid line, ticks on the right) and, for (b), in an inset in log-log scale at long times (black solid line) to show that after a quasi-exponential decay in the square root of time (fit with t 6 exp(−C √ t) in red dashed line) it follows a power law ∝ t −5 (blue dashed). As in figure 3, the multiplication of the quantities on the axes by well-chosen factors makes these results universal. 
the term of pure decay −Γ(ǫ)X(ǫ, τ) in the right-hand side, which makes them immediately integrable and leads to the estimate 17 C mc (t) ≈ +∞ 0 dǫ ρ(ǫ)[B(ǫ) − Λǫ] 2n (ǫ)[1 +n(ǫ)]e −Γ(ǫ)t(79) The power law behavior of the density of states ρ(ǫ) at low energy [see (34)], of the coefficients B(ǫ) in dθ/dt [see (73)], of the occupation numbers n(ǫ) ∼ k B T/ǫ and of the damping rate Γ(ǫ) [see (76)] then reproduce the exponent α = 5 found numerically. 18 Since C mc (t) tends to zero faster than 1/t 2+η , for some η > 0, we obtain the following important result : the variance of the condensate phase shift Var mc [θ(t) −θ(0)] exhibits at long times the typical affine growth of a diffusive regime with delay : represented by a dashed line on figure 4. The delay t 0 is due to the non-zero width of the correlation function C mc (τ) : Var mc [θ(t) −θ(0)] = t≫t coll 2D(t − t 0 ) + o(1)(80)D = +∞ 0 dτ C mc (τ) (81) t 0 = +∞ 0 dτ τC mc (τ) +∞ 0 dτ C mc (τ)(82) We represent the diffusion coefficient D of the condensate phase as a function of temperature in figure 5 a. It exhibits a high temperature growth (k B T > µ TF ) much faster than in the spatially homogeneous case : it was only linear (up to logarithmic factors), it seems to scale here as T 4 (dotted in the figure). The diffusion delay time t 0 is plotted as a function of temperature in figure 5 b. We compare it to the estimate t coll ≃ 1/Γ(ǫ = k B T ) of the collision time between quasiparticles, in dashed line : this one gives a good account of the sudden rise of t 0 at low temperatures, but reproduces with much delay and greatly underestimating that at high temperatures. The rise of t 0 is well represented by a T −3/2 low temperature law, and appears to be linear in T at high temperature (see dashed line). Let us find by a simple reasoning the observed power laws. 
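Numerically, the rate approximation (79) indeed produces the t⁻⁵ tail (78): keeping only the quoted low-energy behaviors, ρ ~ ǫ^{3/2}, [B − Λǫ]² ~ ǫ², n(1+n) ~ ǫ⁻² and Γ ~ c√ǫ, the integrand reduces to ǫ^{3/2} e^{−c√ǫ t}, whose integral scales as t⁻⁵ (substitute x = √ǫ). The constant c = 1 below is an arbitrary normalization, not the physical rate.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (avoids version-dependent numpy aliases)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) * 0.5)

c = 1.0   # arbitrary scale of the low-energy damping Gamma ~ c*sqrt(eps)

def C_tail(t, n=2_000_000, eps_max=50.0):
    # Low-energy model of the rate-approximation integral (79)
    eps = np.linspace(1e-9, eps_max, n)
    return trapz(eps**1.5 * np.exp(-c * np.sqrt(eps) * t), eps)

t = 20.0
ratio = C_tail(2 * t) / C_tail(t)
print(ratio, 2.0**-5)      # doubling t divides the tail by 2^5 = 32
```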
If a scaling law exists, it should survive the rate approximation on the linearized kinetic equations ; we can therefore take the approximate expression (79) of C mc (t) as a starting point and put it in the expressions (81) and (82) of D and t 0 . At high temperature, the integrals on ǫ giving D and t 0 in the rate approximation are dominated by energies of order k B T ; we set ǫ = k B Tǭ and send T to +∞ at fixedǭ under the integral. The behaviors of ρ(ǫ) and B(ǫ) at high energy are known. Only that of Γ(k B Tǭ) is missing ; to get it, we notice on (69) that L(k B Tǭ, k B Tǭ ′ ) tends to a constant when T → +∞. The approximation L(ǫ, ǫ ′ ) ≃ L(ǫ −ǫ ′ , ǫ ′ ) ≃ const, however, triggers a logarithmic infrared divergence in the integrals on ǫ ′ in (67), which stops at ǫ ′ µ TF , so that 19 Γ(k B Tǭ) µ TF [ρ 0 (0)a 3 ] 1/2 ∼ k B T/µ TF →+∞ 512 √ 2 15π 1/2 1 ǫ 2 µ TF k B T ln k B T µ TF(83) All this leads to the scaling laws D ≈ T 4 and t 0 ≈ T at high temperature, up to logarithmic factors. At low temperatures, we proceed in the same way. The behavior of Γ(k B Tǭ) is as T 3/2 when T → 0 at fixedǭ, as one could expect it from the equivalent (76) and as it is confirmed by a calculation. The only trap to avoid is that 19. A more accurate calculation leads to replace in (83) the symbol ∼ by = and the factor ln k B T µ TF by [ln k B T µ TF +ǭ 4 + ln(1 − e −ǭ ) + 31 15 − 3 ln 2 + O(µ TF /k B T )]. B(k B Tǭ) − Λk B Tǭ scales as T 3/2 when T → 0, not as T as one might think, because the dominant terms of B(k B Tǭ) and Λk B Tǭ, both linear in k B Tǭ, exactly compensate each other, see equation (75). This leads to the exact power laws (without logarithmic corrections) D ∝ T 4 and t 0 ∝ T −3/2 at low temperature ; only the second one is accessible on the temperature interval of figure 5, but we checked the first one numerically on a larger temperature range. 
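To see how any correlation function decaying faster than t⁻² yields the delayed diffusive law (80)-(82), one can evaluate the integral term of (15) for a model C mc ; the exponential form below is an assumption for illustration only, not the paper's C mc , and all scales are set to 1.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (avoids version-dependent numpy aliases)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) * 0.5)

# Model correlation function C(tau) = C0 exp(-tau/tau_c); the ballistic
# coefficient A of eq. (15) is set to 0 to isolate the integral term.
C0, tau_c, A = 1.0, 1.0, 0.0

def var_phase(t, n=20_000):
    tau = np.linspace(0.0, t, n)
    C = C0 * np.exp(-tau / tau_c)
    return A * t**2 + 2.0 * trapz((t - tau) * C, tau)   # eq. (15)

# Long-time diffusive law (80): 2*D*(t - t0), with D and t0 given by
# the moments (81)-(82); here D = C0*tau_c and t0 = tau_c exactly.
D, t0 = C0 * tau_c, tau_c
t = 30.0
print(var_phase(t), 2 * D * (t - t0))   # nearly identical at t >> tau_c
```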
To encourage an experimental study with cold atoms, let's finish with a small study of the fundamental limits to the observability of phase diffusion of a trapped condensate. There are, of course, several practical difficulties to overcome, such as (i) significant reduction of the fluctuations in the energy and number of particles in the gas to mitigate the ballistic blurring of the phase, which is a dangerous competitor of the diffusion, (ii) the introduction of a sensitive and unbiased detection scheme for the condensate phase shift or coherence function g 1 (t), of the Ramsey type as proposed in references [17,18], (iii) reduction of the technical noise of the experimental device, (iv) trapping of the atoms in a cell with a sufficiently high vacuum to make cold atom losses by collision with the residual hot gas negligible (one-body losses) : lifetimes of the order of the hour are possible under cryogenic environment [33,34]. These practical aspects vary according to the experimental groups and are beyond the scope of this article. In contrast, particle losses due to three-body collisions, with the formation of a dimer and a fast atom, are intrinsic to alkaline atoms and constitute a fundamental limit. Each atom loss changes, at a random time, the rate of variation of the phase d dtθ , since this is a function of N, which adds a stochastic component to its evolution [16,35]. 
To calculate the variance of the condensate phase shift induced by the three-body losses, we place ourselves at zeroth order in the noncondensed fraction, that is, in the case of a pure condensate at zero temperature prepared at time 0 with an initially well-defined number of particles N, as in reference [16], from which we can recycle (adapting them to the trapped case and to three-body losses) expressions (G7) and (64): Var_losses[θ̂(t) − θ̂(0)] = (dµ_TF/dN)² ∫_0^t dτ ∫_0^t dτ′ [⟨N(τ)N(τ′)⟩ − ⟨N(τ)⟩⟨N(τ′)⟩] ∼_{Γ_3 t→0} (dµ_TF/dN)² N Γ_3 t³ (84). We introduced the particle-number decay rate Γ_3, related as follows to the rate constant K_3 of the three-body losses and to the Thomas-Fermi density profile ρ_0(r) of the condensate: d⟨N⟩/dt ≡ −Γ_3 ⟨N⟩ = −K_3 ∫ d³r [ρ_0(r)]³ (85). We obtain a more meaningful form, directly comparable to our results without losses, by writing (84) in dimensionless form: Var_losses[θ̂(t) − θ̂(0)] ∼_{Γ_3 t→0} (8/525π) K̃_3 t̃³ (86), where the variance and the elapsed time t̃ are expressed in the units of figure 4, and K̃_3 = mK_3/(ħa⁴). The reduced constant K̃_3 is an intrinsic property of the atomic species used in the experiment (even if it can be varied using a magnetic Feshbach resonance [36]). To estimate the order of magnitude of K̃_3 in a cold-atom gas, consider the example of rubidium 87 in the hyperfine ground sublevel |F = 1, m_F = −1⟩ at vanishing magnetic field: measurements give K_3 = 6 × 10⁻⁴² m⁶/s and a = 5.31 nm [37], so K̃_3 ≃ 10. In figure 4a (k_B T = µ_TF), as early as the reduced time t̃ = 5 in the asymptotic regime of phase diffusion, we see that the loss-induced parasitic variance for this value of K̃_3 is about three times the useful variance; their very different time dependences should, however, make it possible to separate them.
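The t³ law in (84) can be made plausible with a hypothetical short-time loss model of our own: if loss events form a Poisson process, the covariance ⟨N(τ)N(τ′)⟩ − ⟨N(τ)⟩⟨N(τ′)⟩ is proportional to min(τ, τ′), and the double time integral of min over [0, t]² equals t³/3, which is the cubic growth of the phase variance. A sketch checking that integral numerically:

```python
# Double time integral of min(tau, tau') over [0, t]^2, as it appears when a
# Poisson-process covariance (our illustrative assumption) is inserted into
# the double integral of Eq. (84); the exact value is t^3 / 3.
def double_int_min(t, n=400):
    d = t / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * d
        for j in range(n):
            y = (j + 0.5) * d
            s += min(x, y) * d * d
    return s

for t in (1.0, 2.0):
    print(t, double_int_min(t), t ** 3 / 3)
```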
The situation is much more favorable at higher temperature, k_B T ≫ µ_TF: the effect of the losses on the phase-shift variance is, for example, still negligible at the reduced time t̃ = 100 in figure 4b (k_B T = 10µ_TF). Discussion of the ergodic hypothesis — As shown by references [22,23] in the case of a harmonic trap with cylindrical symmetry, the classical motion of Bogoliubov quasiparticles is highly chaotic at energies ǫ ≃ µ_TF; but even at this energy, the Poincaré sections reveal islands of stability in phase space, which are not visited by the trajectories of the chaotic sea: there is no ergodicity in the strict sense. What about the case of a completely anisotropic trap? We want to test the ergodic hypothesis for two physical quantities. The first appears in our linearized kinetic equations: the damping rate Γ(r, p). The second appears in the initial conditions of the correlation function C_mc(τ) of dθ̂/dt: the derivative dǫ(r, p)/dµ_TF. For a uniform sampling of the energy surface ǫ, that is, with the probability distribution δ(ǫ − ǫ(r, p))/ρ(ǫ) in phase space, we show in figure 6 the histograms of these quantities after time averaging along each trajectory over times t = 0, t = 5000/ω and t = 250 000/ω, at the energy ǫ = k_B T = µ_TF, for incommensurate trapping frequencies. 20 The time averaging leads to a spectacular narrowing of the probability distribution, which peaks around the ergodic mean (dashed line on the left), in support of the ergodic hypothesis. This narrowing continues over very long times, but never eliminates a small lateral peak far from the ergodic mean. An inspection of the trajectories contributing to the lateral peak shows that they are small perturbations of the stable linear trajectories along the most confining trap axis.
The time-averaged value of the two quantities considered on these linear trajectories is indicated by the right dashed vertical lines in figure 6; it is indeed close to the peak in question. 20. The equations of motion (39,40), written collectively as (d/dt)X = f(X), are numerically integrated with a second-order semi-implicit scheme, X(t + dt) = X(t) + dt [1 − (dt/2)M]⁻¹ f(X(t)), where M is the first differential of f(X) at X(t) [38]. If the trajectory crosses the surface of the condensate between t and t + dt, we must determine the crossing time t_s with an error O(dt)³, then apply the semi-implicit scheme successively on [t, t_s] and [t_s, t + dt], to overcome the discontinuity of f(X) and of its derivatives. [Caption of figure 8: Poincaré sections of planar trajectories for frequency ratios √3, √3 : 1, 1 : √5 − 1, √5 − 1 : 1, √3 : √5 − 1 and √5 − 1 : √3, ordered by increasing ratio ω_x/ω_y from left to right and from top to bottom (given that 1/√3 < (√5 − 1)/√3 < 1/(√5 − 1) < 1); the Poincaré section is more chaotic the larger the ratio ω_x/ω_y; r_x is in units of (µ_TF/mω²)^{1/2} and p_x in units of (mµ_TF)^{1/2}.] The stability diagram of a linear trajectory along an eigenaxis α of the trap, with respect to a perturbation along another eigenaxis β, is shown in figure 7, in the plane (energy, ratio ω_β/ω_α). It shows that the linear trajectory along the most confining axis is stable at all energies. 21 The Poincaré sections of the planar trajectories in the αOβ planes in figure 8 specify the width of the stability island and reveal the existence of secondary islands, etc. There is therefore no full ergodicity of our classical dynamics, even at energies ǫ ≈ µ_TF, even in the completely anisotropic case.
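The integrator of footnote 20 can be sketched on a case where the trap-crossing refinement is unnecessary: a 1D harmonic oscillator, for which f(X) = MX with constant Jacobian M, so the step reduces to a Cayley transform that conserves x² + p² exactly. The oscillator test case and unit choices are ours:

```python
import math

# One step of footnote 20's scheme, X(t+dt) = X(t) + dt [1 - (dt/2) M]^{-1} f(X(t)),
# for f(x, p) = (p, -x), i.e. M = [[0, 1], [-1, 0]]; the 2x2 inverse is explicit.
def step(x, p, dt):
    a, b = 1.0, -dt / 2.0     # A = I - (dt/2) M = [[a, b], [c, d]]
    c, d = dt / 2.0, 1.0
    det = a * d - b * c
    fx, fp = p, -x
    ux = (d * fx - b * fp) / det
    up = (a * fp - c * fx) / det
    return x + dt * ux, p + dt * up

x, p, dt = 1.0, 0.0, 1e-3
for _ in range(int(round(2 * math.pi / dt))):   # integrate one period
    x, p = step(x, p, dt)
print(x, p)   # returns to the vicinity of the initial condition (1, 0)
```

For this linear f the map is exactly symplectic (a rotation up to a small phase error per step), which is why the energy drift stays at the rounding level however long one integrates.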
To quantitatively measure the error made by the ergodic hypothesis in the computation of C_mc(0) and C_mc(τ > 0), we consider the differences between ⟨(\overline{dǫ(r, p)/dµ_TF})²⟩_ǫ and ⟨dǫ(r, p)/dµ_TF⟩²_ǫ (87), and between ⟨1/\overline{Γ}(r, p)⟩_ǫ and 1/⟨Γ(r, p)⟩_ǫ = 1/Γ(ǫ) (88), where we recall that the horizontal bar \overline{O}(r, p) above a physical quantity represents the time average along the trajectory going through (r, p) in phase space, as in equation (63), and the brackets ⟨O(r, p)⟩_ǫ represent the uniform mean over the energy shell ǫ, as in equation (64). 21. The linear trajectory of a quasiparticle of energy ǫ along the eigenaxis Oα of the trap reads m^{1/2}ω_α r_α(t) = |µ_TF + iǫ| sin(√2 ω_α t)/√G(t) and p_α(t)/(2m)^{1/2} = ǫ/√G(t), with G(t) = µ_TF + |µ_TF + iǫ| cos(√2 ω_α t). This corresponds to the choice r_α(0) = 0, p_α(0) ≥ 0 and holds for −t_s ≤ t ≤ t_s, where the time to reach the surface of the condensate is given by √2 ω_α t_s = acos[(ǫ − µ_TF)/|µ_TF + iǫ|]. Outside the condensate, the quasiparticle oscillates harmonically like a free particle for a time 2ω_α⁻¹ atan[(ǫ/µ_TF)^{1/2}] before re-entering the condensate, crossing it in a time 2t_s, and so on. The knowledge of the trajectory makes the linear stability analysis immediate. It also allows an analytic calculation of the time average of dǫ(r, p)/dµ_TF on the linear trajectory; setting ǭ = ǫ/µ_TF, the result reads \overline{dǫ(r, p)/dµ_TF} = {ln[1 + ǭ + √(2ǭ)(1 + ǭ²)^{1/2}] − √2 atan√ǭ} / acos[(ǭ − 1)/(1 + ǭ²)^{1/2}]. In equations (87, 88), the left column contains the quantities appearing in C_mc(0) or in the secular kinetic equations before the ergodic approximation, and the right column what they become after the ergodic approximation. Importantly, we consider in equation (88) 1/\overline{Γ} rather than \overline{Γ}, because it is the inverses M⁻¹ and M⁻² that appear in the expressions (81, 82) of the diffusion coefficient D and the delay time t_0, M being the operator representing the right-hand side of the linearized kinetic equations (66). 22

The quantities to be compared in (87, 88) are represented as functions of the energy ǫ in figure 9 at temperature T = µ_TF/k_B. The agreement is remarkable over a wide range of energies around ǫ = µ_TF. Deviations from the ergodic approximation at very low and very high energy were expected: in these limits, the classical dynamics becomes integrable [22]. At high energy, we obtain for the quantity \overline{Γ}(r, p) the following analytic prediction: 23 ⟨1/\overline{Γ}(r, p)⟩_ǫ ∼_{ǫ→+∞} [π^{5/2}/(56√2)] (ǫ/µ²_TF) (1/[ρ_0(0)a³]^{1/2}) (89). It differs from the ergodic prediction (77) by a numerical coefficient, and reproduces well the results of the numerical simulations (see the red dashed line in figure 9b). This prevents us from calculating the diffusion coefficient D and the diffusion delay t_0 in the secular-ergodic approximation at too high a temperature. As for dǫ(r, p)/dµ_TF, which tends to −1 at high energy, the deviation can only be significant at low energy; in fact it becomes so only at very low energy, and it would be a problem for our ergodic calculation of D and t_0 only at temperatures k_B T ≪ µ_TF rarely reached in cold-atom experiments. 22. It should be recalled that ⟨\overline{Γ}(r, p)⟩_ǫ = Γ(ǫ), the uniform mean being invariant under time evolution. Thus, the inequality between arithmetic and harmonic means imposes ⟨1/\overline{Γ}(r, p)⟩_ǫ ≥ 1/Γ(ǫ). 23. To dominant order in ǫ, we get \overline{Γ}(r, p) using the equivalent (97) (in which µ_0 = gρ_0(r)) on a harmonic trajectory undisturbed by the condensate, r_α(t) = A_α cos(ω_α t + φ_α), ∀α ∈ {x, y, z}. We cleverly regard the quantity gρ_0(r) to be averaged as a function f(θ) of the angles θ_α = ω_α t + φ_α. It is a periodic function of period 2π in each direction, expandable in a Fourier series, f(θ) = Σ_{n∈Z³} c_n e^{in·θ}. In the incommensurate case, n·ω ≠ 0 and the time average of e^{in·θ} is zero ∀n ∈ Z³*, so that \overline{f}(θ) = c_0.
In the usual integral expression of c_0, we make the change of variable x_α = X_α cos θ_α, where X_α = (ǫ_α/µ_TF)^{1/2} and ǫ_α is the energy of the motion along Oα. It remains to take the limit of all X_α tending to +∞ under the integral sign to get \overline{Γ}(r, p) / (µ_TF [ρ_0(0)a³]^{1/2}) ∼_{ǫ→+∞} (32√2/(15π^{3/2})) (ǫ/µ_TF)^{1/2} Π_α (µ_TF/ǫ_α)^{1/2}. By averaging the inverse of this equivalent over the probability distribution 2ǫ⁻²δ(ǫ − Σ_α ǫ_α) of the energies per direction for a harmonic oscillator of total energy ǫ, we find (89). Conclusion — Motivated by recent experimental advances in the manipulation of trapped cold-atom gases [1,2,3], we have theoretically studied the coherence time and the phase dynamics of a Bose-Einstein condensate in an isolated, harmonically trapped boson gas, a fundamental problem important for interferometric applications. The variance of the phase shift experienced by the condensate after a time t increases indefinitely with t, which limits the intrinsic coherence time of the gas. For t ≫ t_coll, where t_coll is the typical collision time between Bogoliubov quasiparticles, it becomes a quadratic function of time, Var[θ̂(t) − θ̂(0)] = At² + 2D(t − t_0) + o(1) (90), where θ̂ is the phase operator of the condensate. This asymptotic law has the same form as in the spatially homogeneous case studied previously [16], which was not guaranteed, but the coefficients of course differ. To calculate them, we consider the thermodynamic limit in the trap, in which the number of particles tends to infinity, N → +∞, at fixed temperature T and fixed Gross-Pitaevskii chemical potential µ_GP. This requires that the reduced trapping frequencies tend to zero, ħω_α/µ_GP → 0, which we reinterpret as a classical limit ħ → 0. The dominant term At² is due to the fluctuations, in the initial state, of the quantities conserved by the time evolution, N and E, where E is the total energy of the gas.
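The final averaging step can be checked numerically. For a 3D harmonic oscillator of total energy ǫ, the per-axis energies are uniformly distributed on the simplex Σ_α ǫ_α = ǫ (density 2ǫ⁻²), so averaging the inverse of the footnote-23 equivalent brings in the simplex mean of (x₁x₂x₃)^{1/2}, equal to Γ(3)Γ(3/2)³/Γ(9/2) = 4π/105; multiplying the inverted prefactor 15π^{3/2}/(32√2) by it indeed yields the coefficient π^{5/2}/(56√2) of equation (89). A sketch (sampling convention ours):

```python
import math, random

exact = 4 * math.pi / 105          # E[sqrt(x1 x2 x3)] on the unit simplex

# Monte Carlo check: normalized exponentials give a uniform (Dirichlet(1,1,1))
# point on the simplex.
random.seed(0)
acc, n = 0.0, 200_000
for _ in range(n):
    e = [random.expovariate(1.0) for _ in range(3)]
    s = e[0] + e[1] + e[2]
    acc += math.sqrt(e[0] * e[1] * e[2]) / s ** 1.5
mc = acc / n

# Coefficient bookkeeping: (15 pi^{3/2} / (32 sqrt 2)) * (4 pi / 105)
# should equal pi^{5/2} / (56 sqrt 2), the prefactor of Eq. (89).
lhs = (15 * math.pi ** 1.5 / (32 * math.sqrt(2))) * exact
rhs = math.pi ** 2.5 / (56 * math.sqrt(2))
print(mc, exact, lhs, rhs)
```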
We give an explicit expression (16)-(19)-(20) for the coefficient A in a generalized ensemble, an arbitrary statistical mixture of microcanonical ensembles with at most normal fluctuations of N and E. In this case, A = O(1/N). We obtain a simpler form (30) in the case of a statistical mixture of canonical ensembles of the same temperature but of variable particle number. At usual temperatures, larger than µ_GP/k_B, and for Poissonian particle-number fluctuations, the contribution to A of the thermal fluctuations of E is rendered negligible by a factor of order the noncondensed fraction ∝ (T/T_c)³. The variance of N must be reduced to see the effect of thermal fluctuations on the ballistic spreading of the condensate phase. The subdominant term 2D(t − t_0) does not depend on the ensemble in which the system is prepared, at least to the first nonzero order 1/N at the thermodynamic limit, and it is the only one that remains in the microcanonical ensemble. The calculation of its two ingredients, the diffusion coefficient D of the phase and the diffusion delay t_0, requires knowing at all times the correlation function of dθ̂/dt in the microcanonical ensemble, and thus solving linearized kinetic equations for the Bogoliubov quasiparticle occupation numbers. It is indeed the temporal fluctuations of these occupation numbers, for a given realization of the system, which randomize the evolution of the condensate phase. To this end, we adopt a semiclassical description, in which the motion of quasiparticles in the trapped gas is treated classically in the phase space (r, p), but the quasiparticle bosonic field remains quantum, through the occupation-number operators n̂(r, p). In quantum observables of the form Ô = Σ_k a_k n̂_k, such as dθ̂/dt, the coefficients a_k and the sum over the Bogoliubov quantum modes k are then replaced, according to a correspondence principle, by a time average and an integral over the classical trajectories (see equations (42)-(44)).
The linearized kinetic equations for the fluctuations δn(r, p) include a transport part, corresponding to the classical Hamiltonian motion of the quasiparticles, and a collision integral, local in position, which describes the Beliaev-Landau interaction processes among three quasiparticles. They take the same form as the linearized quantum Boltzmann equations for the semiclassical distribution function n(r, p, t) of quasiparticles in phase space. We simplify them in the secular limit ω_α t_coll ≫ 1 and under the assumption of classical ergodic motion of the quasiparticles. This hypothesis, according to which the fluctuations δn(r, p) averaged over a trajectory depend only on the energy of the trajectory, holds only if the trap is completely anisotropic; in this case we give it a careful numerical justification. The desired quantities D and t_0, suitably made dimensionless, are universal functions of k_B T/µ_TF, where µ_TF is the Thomas-Fermi limit of µ_GP, and are in particular independent of the ratios ω_α/ω_β of the trapping frequencies. They are represented in figure 5. An interesting and more directly measurable by-product of our study is the set of damping rates Γ(ǫ) of the Bogoliubov modes of energy ǫ in the trap. Once the dimensionless temperature k_B T/µ_TF is fixed, the rate is also described by a universal function of ǫ/µ_TF independent of the trapping frequencies, see figure 3. These results belong to a new universality class, that of completely anisotropic harmonic traps, very different from the theoretically better explored one of spatially homogeneous systems, and will hopefully soon receive experimental confirmation. Acknowledgments — We thank the members of the cold fermions and atom chips teams of LKB, especially Christophe Salomon, for useful discussions. Appendix A.
Behavior of Γ(ǫ) at low and high energy — To obtain the limiting behaviors (76) and (77) of the damping rates Γ(ǫ) of the Bogoliubov modes in a trap in the secular-ergodic approximation, we rewrite the phase-space integral (64) as an average over the local Gross-Pitaevskii chemical potential µ_0 = gρ_0(r) of the damping rate Γ_h(ǫ, µ_0, k_B T) of a mode of energy ǫ in a homogeneous system of density µ_0/g and temperature T: Γ(ǫ) = ∫_0^{µ_TF} dµ_0 P_ǫ(µ_0) Γ_h(ǫ, µ_0, k_B T) (91), with P_ǫ(µ_0) ≡ (1/ρ(ǫ)) ∫ d³r d³p/(2πħ)³ δ(ǫ − ǫ(r, p)) δ(µ_0 − gρ_0(r)) = [4/(πρ(ǫ))] [1/(ħω)³] ǫ²(µ_TF − µ_0)^{1/2} / {(µ_0² + ǫ²)^{1/2} [(µ_0² + ǫ²)^{1/2} + µ_0]^{1/2}} (92). In the ǫ → 0 limit, we first heuristically replace the integrand in equation (91) by a low-energy equivalent, using P_ǫ(µ_0) ∼_{ǫ→0} [3/(8√2)] ǫ^{1/2}(µ_TF − µ_0)^{1/2} / (µ_TF^{1/2} µ_0^{3/2}) (93) and ħΓ_h(ǫ, µ_0, k_B T)/2 ∼_{ǫ→0} ǫ (µ_0 a³/g)^{1/2} F(k_B T/µ_0) (94). In equation (93) we used equation (34); the result (94) is in reference [32], where the function F is computed and studied. As F(θ) ∼_{θ→+∞} (3π^{3/2}/4) θ, this makes the divergent integral ǫ^{3/2} ∫_0^{µ_TF} dµ_0/µ_0² appear in equation (91). It is clear, however, that one should cut this integral off at µ_0 ≈ ǫ for the equivalent (94) to remain usable, hence the scaling law Γ(ǫ) ≈ ǫ^{1/2}, dominated by the edge of the trapped condensate and very different from the linear law of the homogeneous case. To find the prefactor of this law, we simply rescale µ_0 = ǫν_0 in the integral and use the "high-temperature" approximation of reference [39] for Γ_h, uniformly valid near the edge of the trapped condensate, ħΓ_h(ǫ, µ_0, k_B T)/2 ∼_{k_B T≫ǫ,µ_0} k_B T (µ_0 a³/g)^{1/2} φ(ǫ/µ_0) (95), before taking the ǫ → 0 limit under the integral sign, which leads to the sought equation (76), with 24 I = ∫_0^{+∞} dν_0 ν_0^{1/2} φ(1/ν_0) / {(1 + ν_0²)^{1/2} [ν_0 + (1 + ν_0²)^{1/2}]^{1/2}} = 4.921208… (96). In the limit ǫ → +∞, we use the fact that, in the homogeneous case, the damping rate of the quasiparticles reduces to the collision rate ρ_0 σ v of a particle of velocity v = (2ǫ/m)^{1/2} with the condensate particles, of zero velocity and density ρ_0, with the cross section σ = 8πa² for indistinguishable bosons (this is a Beliaev process): ħΓ_h(ǫ, µ_0, k_B T)/2 ∼_{ǫ→+∞} µ_0 a (2mǫ)^{1/2}/ħ (97). Using the same high-energy expansion (35) for ρ(ǫ), we find that P_ǫ(µ_0) ∼ (8/π)(µ_TF − µ_0)^{1/2} ǫ^{−3/2}. Inserting these equivalents into equation (91) gives (77). 24. In practice, the function φ is deduced from equation (57) by making the classical-field approximation 1 + n̄(r, q) ≃ n̄(r, q) ≃ k_B T/ǫ(r, q). In the numerical calculation of I, performed by taking ǭ = 1/ν_0 as the integration variable, we reduce the effect of numerical truncation with the help of the asymptotic expansion of φ(ǫ). Abstract — We study the condensate phase dynamics in a low-temperature equilibrium gas of weakly interacting bosons, harmonically trapped and isolated from the environment. We find that at long times, much longer than the collision time between Bogoliubov quasiparticles, the variance of the phase accumulated by the condensate grows with a ballistic term quadratic in time and a diffusive term affine in time. We give the corresponding analytical expressions in the limit of a large system, in the collisionless regime and in the ergodic approximation for the quasiparticle motion. When properly rescaled, they are described by universal functions of the temperature divided by the Thomas-Fermi chemical potential. The same conclusion holds for the mode damping rates. Such a universality class differs from the previously studied one of the homogeneous gas.
Introduction and overview — We consider here a still unsolved problem in the theory of quantum gases: the coherence time of a gas of spinless bosons with weak interactions of negligible range, prepared in a harmonic trap at thermal equilibrium at a temperature T well below the critical temperature T_c, that is, in a strongly Bose-condensed regime, and perfectly isolated in its subsequent evolution. The coherence time of the bosonic field is then intrinsic and dominated by that of the condensate. In view of recent technical progress [1,2,3], this question may soon receive an experimental answer in cold-atom gases confined in non-dissipative magnetic potentials [4,5,6], which, unlike other boson-gas systems in solid-state physics [7,8,9,10], are well decoupled from their environment and exhibit only weak particle losses. Our theoretical study is therefore important for future applications in atom optics and matter-wave interferometry. Following the pioneering works of references [11,12,13], our theoretical studies [14,15,16,17], carried out in a spatially homogeneous Bose gas, rely on the Bogoliubov method, which reduces the system to a gas of weakly interacting quasiparticles. They identified two mechanisms limiting the coherence time, both involving the dynamics of the condensate phase operator θ̂(t): — phase blurring: when the conserved quantities (the energy E of the gas and its particle number N) fluctuate from one experimental realization to the next, the mean rate of evolution of the phase, [θ̂(t) − θ̂(0)]/t, in a given realization, being a function of these conserved quantities, fluctuates as well.
After averaging over realizations, this induces a ballistic spreading of the phase shift θ̂(t) − θ̂(0), that is, a quadratic divergence of its variance, with a ballistic coefficient A [14]: Var[θ̂(t) − θ̂(0)] ∼ At² (1), at times long compared to γ_coll⁻¹, where γ_coll is the typical collision rate between thermal Bogoliubov quasiparticles; — phase diffusion: even if the system is prepared in the microcanonical ensemble, where E and N are fixed, the interactions between quasiparticles make their occupation numbers fluctuate, and hence also the instantaneous rate dθ̂/dt of the phase, which depends on them. This induces a diffusive spreading of θ̂(t) − θ̂(0) at times t ≫ γ_coll⁻¹, with a diffusion coefficient D [15,16]: Var_mc[θ̂(t) − θ̂(0)] ∼ 2Dt (2). In the general case, both mechanisms are present and the variance of the phase shift admits (1) as its dominant term and (2) as its subdominant term. The spreading of the condensate phase gives direct access to its first-order temporal coherence function, g_1(t) = ⟨â_0†(t) â_0(0)⟩ (3), where â_0 is the annihilation operator of a boson in the condensate mode, by virtue of the approximate relation g_1(t) ≃ e^{−i⟨θ̂(t)−θ̂(0)⟩} e^{−Var[θ̂(t)−θ̂(0)]/2} (4), assumed in reference [17] under the hypothesis of a Gaussian distribution of θ̂(t) − θ̂(0), then justified in reference [18] at sufficiently low temperature under fairly general conditions. 1 We propose here to generalize these first studies to the experimentally more usual case of a harmonically trapped system (see, however, reference [19]). The laws giving the damping rates of the Bogoliubov modes as functions of the mode energy or of the temperature are already very different from those of the homogeneous case, as shown in reference [20]. The same will certainly hold for the spreading of the condensate phase.
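The ballistic law (1) is easy to illustrate numerically: drawing one value of the conserved quantity per realization and letting the phase evolve linearly reproduces Var[θ̂(t) − θ̂(0)] = At² exactly. The sketch below uses toy units and a hypothetical Gaussian shot-to-shot distribution of the chemical potential (both our own choices, for illustration only):

```python
import math, random

# Toy phase-blurring model: per realization, theta(t) - theta(0) = -dmu * t / hbar,
# with dmu a Gaussian shot-to-shot fluctuation of the chemical potential.
random.seed(1)
hbar = 1.0                  # working units (assumption)
sigma_mu = 0.01             # std of the chemical-potential fluctuation (toy value)
shots = [random.gauss(0.0, sigma_mu) for _ in range(100_000)]

def var_phase(t):
    phases = [-dmu * t / hbar for dmu in shots]
    m = sum(phases) / len(phases)
    return sum((p - m) ** 2 for p in phases) / len(phases)

A = sigma_mu ** 2 / hbar ** 2   # ballistic coefficient of Eq. (1) in this model
for t in (10.0, 20.0):
    print(t, var_phase(t), A * t * t)
```

Because every realization's phase is strictly linear in t, doubling t multiplies the sampled variance by exactly 4, the signature of the ballistic regime.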
The trapped case is nontrivial, since the Bogoliubov modes are not known analytically and there is no local-homogeneity approximation applicable to the phase evolution (as verified in reference [21]). Fortunately, we have ways out: — the classical limit for the motion of the Bogoliubov quasiparticles in the trapped gas. Indeed, at the thermodynamic limit (N → +∞ at fixed Gross-Pitaevskii chemical potential µ_GP and fixed temperature), the trapping angular frequencies ω_α, α ∈ {x, y, z}, tend to zero as 1/N^{1/3}, so that ħω_α ≪ µ_GP, k_B T (5), which can cleverly be reinterpreted as a classical limit ħ → 0; 1. Let us recall the hypotheses used in reference [18] to establish equation (4). (i) The relative fluctuations of the modulus of â_0 are small, the system being strongly Bose-condensed. (ii) The system is close enough to the thermodynamic limit, with normal fluctuations and asymptotically Gaussian distributions for the energy and the particle number. This serves in particular to put the ballistic contribution of the phase shift to g_1(t) in the form (4). (iii) The phase diffusion coefficient (of order 1/N) must be much smaller than the typical collision rate γ_coll between Bogoliubov quasiparticles (of order N⁰) but much larger than the energy-level spacing (of order N⁻²) of the pairs of quasiparticles created or annihilated in the Beliaev-Landau collision processes. This serves, for a system prepared in the microcanonical ensemble, to show that g_1(t) is of the form (4) on the time intervals t = O(N⁰) and t = O(N¹), with the same diffusion coefficient. (iv) The correlation function of dθ̂/dt is real, as predicted by the kinetic equations. (v) One neglects the commutator of θ̂(t) with θ̂(0), which introduces a phase error O(t/N) in the factor exp[−i⟨θ̂(t) − θ̂(0)⟩].
This is an error of order unity at times t ≈ N, but g_1(t) has by then already started to decay under the effect of phase diffusion in the microcanonical ensemble (and has otherwise already been very strongly damped by ballistic phase blurring after a time t ≈ N^{1/2}). — the limit of very weak interactions between the Bogoliubov quasiparticles: γ_coll ≪ ω_α (6). This implies that all the condensate modes, even those of lowest angular frequency ≈ ω_α, are in the weakly collisional regime (as opposed to hydrodynamic), and allows a secular approximation on the kinetic equations describing the collisions between quasiparticles; — ergodicity in a completely anisotropic trap: as shown by references [22,23], the classical motion of the quasiparticles in a non-isotropic harmonic trap with cylindrical symmetry is strongly chaotic at energies ǫ ≈ µ_GP but quasi-integrable when ǫ → 0 or ǫ → +∞. In a completely anisotropic trap, at temperatures neither too small nor too large compared to µ_GP/k_B, one may hope to supplement the secular approximation with the ergodic hypothesis, which we shall endeavor to justify. Our article is organized as follows. In section 2, after a minimal reminder of Bogoliubov theory in a trap, we specify the state of the system and introduce the quantities allowing a formal description of the phase spreading, namely the derivative of the condensate phase operator and its temporal correlation function. In section 3, we give an expression for the ballistic coefficient A at the thermodynamic limit in an arbitrary harmonic trap (including isotropic), first in the most general state of the system considered here, then in the simpler case of a statistical mixture of canonical ensembles with the same temperature T.
In the long section 4, we attack the heart of the problem, the calculation of the correlation function C_mc(τ) of dθ̂/dt in the microcanonical ensemble, which gives access in full generality to the sub-ballistic spreading terms of the phase, since they are independent of the state of the system at the thermodynamic limit at fixed mean energy and mean particle number. We first take the semiclassical limit in subsection 4.1, the motion of the Bogoliubov quasiparticles being treated classically while the quasiparticle field remains a quantum bosonic field; the semiclassical form of dθ̂/dt is deduced from a correspondence principle. We then write, in subsection 4.2, kinetic equations for the quasiparticle occupation numbers in the classical phase space (r, p) and show how, once linearized, they formally lead to C_mc(τ). The problem remains formidable, since the occupation numbers depend on the six variables (r, p) and on time. In the secular limit γ_coll ≪ ω_α and in the ergodic approximation for the quasiparticle motion (which excludes isotropic or cylindrically symmetric traps), we reduce the problem in subsection 4.3 to occupation numbers that are functions only of the energy ǫ of the classical motion and of time, which leads to explicit results for C_mc(τ), for the phase diffusion and, as an interesting by-product, for the damping rate of the Bogoliubov modes in the trap, in subsection 4.4, where we also evaluate the condensate phase shift due to particle losses. Finally, we critically discuss the ergodic approximation in subsection 4.5, estimating in particular the error it introduces in the quantities governing the condensate phase diffusion. We conclude in section 5.
Reminders on the formalism and known results — The derivative of the phase. As recalled in the introduction, the coherence time of a condensate is controlled by the dynamics of its phase operator θ̂(t) at times long compared to the typical quasiparticle collision time γ_coll⁻¹. The starting point of our study is thus the expression for the time derivative of θ̂(t), coarse-grained in time (that is, averaged over a time short compared to γ_coll⁻¹ but long compared to the inverse of the typical angular frequency ǫ_k^th/ħ of the thermal quasiparticles), as established in full generality in reference [18] to first order in the noncondensed fraction: −ħ dθ̂/dt = µ_0(N̂) + Σ_{k∈F+} (dǫ_k/dN) n̂_k ≡ µ̂ (7). Here µ_0(N) is the chemical potential of the gas in its ground state and N̂ is the total particle-number operator. The sum over the generic quantum number k (it is not a wave number) runs over the Bogoliubov modes of eigenenergy ǫ_k, and n̂_k is the operator giving the number of quasiparticles in mode k. Expression (7) is a quantum version of the second Josephson relation: its right-hand side is a chemical-potential operator µ̂ of the gas, since it is the adiabatic derivative (at fixed occupation numbers n̂_k) with respect to N of the Bogoliubov Hamiltonian Ĥ_Bog = E_0(N̂) + Σ_{k∈F+} ǫ_k n̂_k (8). The Bogoliubov modes belong to the family F+, in the terminology of reference [24], in the sense that their mode functions (u_k(r), v_k(r)) are solutions of the eigenvalue equation ǫ_k (|u_k⟩, |v_k⟩)ᵀ = [ H_GP + Qgρ_0(r̂)Q , Qgρ_0(r̂)Q ; −Qgρ_0(r̂)Q , −(H_GP + Qgρ_0(r̂)Q) ] (|u_k⟩, |v_k⟩)ᵀ ≡ L(r̂, p̂) (|u_k⟩, |v_k⟩)ᵀ (9), with the normalization condition ∫ d³r (|u_k(r)|² − |v_k(r)|²) = 1 > 0.
The condensate wavefunction φ_0(r) is taken real, normalized to unity (∫ d³r φ_0²(r) = 1), and written to zeroth order in the noncondensed fraction, that is, in the Gross-Pitaevskii approximation: H_GP |φ_0⟩ = 0 with H_GP = p̂²/2m + U(r̂) + gρ_0(r̂) − µ_GP (10), so that to this order the condensate density is ρ_0(r) = Nφ_0²(r). Here g = 4πħ²a/m is the coupling constant, proportional to the s-wave scattering length a between the bosons of mass m, and U(r) = Σ_α mω_α² r_α²/2 is their trapping potential. The projector Q projects orthogonally to |φ_0⟩ and ensures that |φ_0⟩⊥|u_k⟩ and |φ_0⟩⊥|v_k⟩, as it should [24]. Since the condensate is in its ground mode (φ_0 minimizes the Gross-Pitaevskii energy functional), the ǫ_k are positive. The state of the system — Evaporative cooling of cold-atom gases does not a priori lead to any of the usual ensembles of statistical physics. To cover all reasonable cases, we therefore assume that the gas is prepared at time 0 in a generalized ensemble, a statistical mixture of eigenstates |ψ_λ⟩ of the complete N_λ-body Hamiltonian H with energy E_λ, hence with density operator σ = Σ_λ Π_λ |ψ_λ⟩⟨ψ_λ| (11), with the sole restriction that there exist narrow distributions for E_λ and N_λ, with variances and covariance growing no faster than the means Ē and N̄ at the thermodynamic limit. Mean phase shift — Let us average expression (7) in the stationary state |ψ_λ⟩. On the right-hand side appears the expectation value of the chemical-potential operator in |ψ_λ⟩.
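Equation (10) leads, in the Thomas-Fermi regime, to the inverted-parabola condensate density ρ_0(r) = max(0, µ − U(r))/g. As a sketch (isotropic trap and unit values of m, ω, g, µ, all our own illustrative choices), one can check the closed-form Thomas-Fermi normalization numerically:

```python
import math

# Thomas-Fermi profile in an isotropic trap, working units m = omega = g = 1
# (illustrative values): rho_0(r) = (mu - U(r))/g inside the radius
# R = sqrt(2 mu / (m omega^2)), zero outside.  Its integral gives the
# closed form N = (8 pi / 15) mu R^3 / g.
m = omega = g = 1.0
mu = 1.0
R = math.sqrt(2 * mu / (m * omega ** 2))

def N_numeric(nr=200_000):
    """Radial midpoint-rule integral of 4*pi*r^2*rho_0(r) over [0, R]."""
    dr = R / nr
    s = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        rho = (mu - 0.5 * m * omega ** 2 * r ** 2) / g
        s += 4 * math.pi * r * r * rho * dr
    return s

N_exact = 8 * math.pi / 15 * mu * R ** 3 / g
print(N_numeric(), N_exact)
```

The same normalization fixes µ_TF as a function of N in a real trap; here the point is only the consistency of the profile with its closed-form integral.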
Because of the interactions between Bogoliubov quasiparticles, the N-body system is expected to be ergodic in the quantum sense, that is, to obey the Eigenstate Thermalization Hypothesis (see references [25,26,27]), so that

⟨ψ_λ|µ̂|ψ_λ⟩ = µ_mc(E_λ, N_λ)   (12)

where µ_mc(E, N) is the chemical potential in the microcanonical ensemble of energy E with N particles. For a large system it suffices to expand to first order in the fluctuations, given their small relative values:

µ_mc(E_λ, N_λ) = µ_mc(Ē, N̄) + (E_λ − Ē) ∂_E µ_mc(Ē, N̄) + (N_λ − N̄) ∂_N µ_mc(Ē, N̄) + O(1/N)   (13)

It remains to average over the states |ψ_λ⟩ with the weights Π_λ as in equation (11) to obtain the first building block of the temporal coherence function (4), the mean dephasing:

⟨θ̂(t) − θ̂(0)⟩ = −µ_mc(Ē, N̄) t/ħ   (14)

with an error O(1/N) on the coefficient of t.

Mean-square dephasing - Proceeding in the same way for the second moment of the condensate dephasing, we find, as written somewhat implicitly in [16,18], that

Var[θ̂(t) − θ̂(0)] = A t² + 2 ∫₀ᵗ dτ (t − τ) Re C_mc(τ)   (15)

with the ballistic coefficient

A = Var[(N_λ − N̄) ∂_N µ_mc(Ē, N̄) + (E_λ − Ē) ∂_E µ_mc(Ē, N̄)]/ħ²   (16)

and the correlation function of the phase derivative in the microcanonical ensemble of energy Ē with N̄ particles:

C_mc(τ) = ⟨(dθ̂/dt)(τ) (dθ̂/dt)(0)⟩_mc − ⟨dθ̂/dt⟩²_mc   (17)

This completes our formal knowledge of g_1(t). In view of upcoming experimental observations, it remains, however, to compute A and C_mc(τ) explicitly for a harmonically trapped system. In particular, one must check that C_mc(τ) in the trapped case decays fast enough for a diffusive law (2) to be recovered, as in the spatially homogeneous case.
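As a numerical illustration of the structure of the ballistic coefficient (16), the sketch below samples Gaussian fluctuations of (N, E) with a prescribed covariance and checks that the variance of the linearized phase slope, [(N − N̄)∂_N µ_mc + (E − Ē)∂_E µ_mc]/ħ, matches the expanded form of (16). All numerical values (derivatives, variances, ħ = 1) are toy choices, not taken from the article.

```python
import math
import random

# Toy Monte Carlo check of the ballistic coefficient (16) for Gaussian
# fluctuations of atom number N and energy E in the generalized ensemble.
random.seed(0)
hbar = 1.0
dmu_dN, dmu_dE = 0.7, -0.3          # toy values of d(mu_mc)/dN, d(mu_mc)/dE
varN, varE, covNE = 2.0, 5.0, 1.5   # toy second moments of the ensemble

# Cholesky factors for sampling correlated (N, E) fluctuations
l11 = math.sqrt(varN)
l21 = covNE / l11
l22 = math.sqrt(varE - l21 ** 2)

nsamp = 400_000
slopes = []
for _ in range(nsamp):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    dN = l11 * x                    # N - Nbar
    dE = l21 * x + l22 * y          # E - Ebar, correlated with dN
    slopes.append((dN * dmu_dN + dE * dmu_dE) / hbar)

mean = sum(slopes) / nsamp
var_mc = sum((s - mean) ** 2 for s in slopes) / nsamp

# Equation (16) expanded for Gaussian (N, E) fluctuations:
A = (dmu_dN**2 * varN + dmu_dE**2 * varE + 2 * dmu_dN * dmu_dE * covNE) / hbar**2
print(var_mc, A)
```

The Monte Carlo variance of the slope reproduces A to sampling accuracy, which is all expression (16) asserts once the expansion (13) is granted.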
Computing the ballistic coefficient in the dephasing variance

In the generalized statistical ensemble - To compute the mean dephasing (14) and the ballistic coefficient (16) in the general case, we need the microcanonical chemical potential µ_mc(Ē, N̄) and its derivatives in the harmonic trap. In the thermodynamic limit, µ_mc coincides with the chemical potential µ_can in the canonical ensemble of temperature T and particle number N̄, which is easier to compute, provided the temperature T is adjusted so that the mean energies E_can(T, N̄) and Ē are equal. In other words,

µ_mc(E_can(T, N̄), N̄) ∼ µ_can(T, N̄)   (18)

It suffices to differentiate this relation with respect to T or N̄ to obtain the useful derivatives of µ_mc, then to replace E_can by Ē, which gives:

∂_E µ_mc(Ē, N̄) ∼ ∂_T µ_can(T, N̄) / ∂_T E_can(T, N̄)   (19)
∂_N µ_mc(Ē, N̄) ∼ ∂_N µ_can(T, N̄) − [∂_N E_can(T, N̄)/∂_T E_can(T, N̄)] ∂_T µ_can(T, N̄)   (20)

To first order in the noncondensed fraction, the canonical chemical potential follows from the free energy F of the ideal gas of Bogoliubov quasiparticles with Hamiltonian (8) through the usual thermodynamic relation µ_can = ∂_N F. The free energy is a simple functional of the quasiparticle density of states ρ(ǫ):

F(T, N̄) = E_0(N̄) + k_B T ∫₀^{+∞} dǫ ρ(ǫ) ln(1 − e^{−βǫ})   (21)

with β = 1/k_B T.
In the thermodynamic limit, the ground-state energy E_0 of the gas in the harmonic trap follows from that of the homogeneous system [28] by a local-density approximation, and the density of states ρ(ǫ) is obtained by taking the classical limit ħ → 0, by virtue of inequality (5) [6]:

ρ(ǫ) = ∫ d³r d³p/(2πħ)³ δ(ǫ − ǫ(r, p))   (22)

The classical Hamiltonian ǫ(r, p) is the positive eigenvalue of the 2×2 Bogoliubov matrix of equation (9), with the position r and the momentum p treated classically² and with the condensed density ρ_0(r) written in the classical limit, that is, in the Thomas-Fermi approximation:

g ρ_0^{TF}(r) = µ_TF − U(r) ≡ µ_loc(r) if U(r) < µ_TF, and 0 otherwise   (23)

2. The projector Q, which projects onto a space of codimension one, can be omitted in the thermodynamic limit.

Here the Thomas-Fermi chemical potential, classical limit of the Gross-Pitaevskii one µ_GP, reads

µ_TF = (ħω̄/2) [15 N a (mω̄/ħ)^{1/2}]^{2/5}   (24)

where ω̄ = (ω_x ω_y ω_z)^{1/3} is the geometric mean of the trapping frequencies. One deduces that

ǫ(r, p) = [ (p²/2m)(p²/2m + 2µ_loc(r)) ]^{1/2} if U(r) < µ_TF, and p²/2m + U(r) − µ_TF otherwise   (25)

The sextuple integral (22) was computed in reference [29].³ We give the result here in a slightly more compact form:

ρ(ǫ) = [µ_TF²/(ħω̄)³] f(ě), with ě ≡ ǫ/µ_TF   (26)

f(ě) = (1/π) { −2√2 ě² acos[(ě − 1)/(1 + ě²)^{1/2}] + 2√2 ě ln[(1 + √(2ě) + ě)/(1 + ě²)^{1/2}] + ě^{1/2}(5ě − 1) + (1 + ě)² acos[1/(1 + ě)^{1/2}] }   (27)

We finally obtain the canonical chemical potential

µ_can(T, N̄) = µ_0(N̄) + (6k_B T/5N̄)(µ_TF/ħω̄)³ ∫₀^{+∞} dě f(ě) ln(1 − e^{−β̌ě}) + (2µ_TF/5N̄)(µ_TF/ħω̄)³ ∫₀^{+∞} dě f(ě) ě/(e^{β̌ě} − 1)   (28)

with the ground-state contribution [6]

µ_0(N) = µ_TF [1 + π^{1/2} (µ_TF a³/g)^{1/2}]   (29)

When differentiating (28) with respect to T and N̄ to evaluate expressions (19) and (20), one should remember that β̌ = µ_TF/k_B T depends on N̄ through µ_TF. For brevity we do not give the result here.
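A minimal sketch of the two building blocks just introduced, the Thomas-Fermi chemical potential (24) and the semiclassical dispersion relation (25), in units ħ = m = ω̄ = 1 and for an isotropic trap U(r) = r²/2 (an illustrative choice; the article's trap is anisotropic). It checks that ǫ(r, p) is continuous across the Thomas-Fermi boundary and that µ_TF scales as N^{2/5}.

```python
import math

def mu_TF(N, a, hbar=1.0, m=1.0, w=1.0):
    """Equation (24): mu_TF = (hbar*w/2) [15 N a (m w/hbar)^(1/2)]^(2/5)."""
    return 0.5 * hbar * w * (15.0 * N * a * math.sqrt(m * w / hbar)) ** 0.4

def eps(r, p, muTF, m=1.0):
    """Equation (25): Bogoliubov branch inside the condensate, shifted
    free-particle branch outside the Thomas-Fermi radius."""
    U = 0.5 * r * r              # isotropic harmonic potential, m = w = 1
    K = p * p / (2.0 * m)        # kinetic energy p^2/2m
    if U < muTF:
        mu_loc = muTF - U        # local chemical potential, equation (23)
        return math.sqrt(K * (K + 2.0 * mu_loc))
    return K + U - muTF

muTF = mu_TF(N=1.0e6, a=1.0e-3)
r_TF = math.sqrt(2.0 * muTF)     # Thomas-Fermi radius, U(r_TF) = mu_TF

# The dispersion is continuous across the Thomas-Fermi boundary ...
inside = eps(r_TF * (1 - 1e-9), 1.0, muTF)
outside = eps(r_TF * (1 + 1e-9), 1.0, muTF)
# ... and mu_TF scales as N^{2/5}: multiplying N by 32 = 2^5 doubles it twice.
ratio = mu_TF(32e6, 1e-3) / muTF
print(inside, outside, ratio)
```

The derivative of ǫ(r, p) is, however, discontinuous at the boundary (the quasiparticles refract there), which matters for the trajectory integrations used later on.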
In a slightly less general ensemble - A simpler expression⁴ for the ballistic coefficient A can be obtained when the state of the system is a statistical mixture of canonical ensembles with the same temperature T but variable particle number. Expressing the various coefficients in (16,19,20) as derivatives of the free energy F(T, N̄) with respect to N̄ and T, and recalling the expression Var_can E = k_B T² ∂_T E_can for the energy variance in the canonical ensemble, we find at leading order in 1/N that

A(T) = (Var N) [∂_N µ_can(T, N̄)]²/ħ² + k_B T² [∂_T µ_can(T, N̄)]² / [ħ² ∂_T E_can(T, N̄)]   (30)

At zero temperature only the first term contributes, and one recovers the prediction of references [30,31] pushed to first order in the noncondensed fraction f_nc. At T ≠ 0 but in the absence of fluctuations of N, only the second term contributes; it is nothing but the ballistic coefficient A_can(T) in the canonical ensemble. In the regime of validity of the Bogoliubov approximation, f_nc ≪ 1, the chemical potential µ_can(T, N̄) of the gas remains close to the Thomas-Fermi one of the pure condensate, so that

∂_N µ_can(T, N̄) = ∂_N µ_TF + O(f_nc/N)   (31)

In contrast, ∂_T µ_can(T, N̄) is immediately of first order in f_nc, and the same holds for the second term of equation (30). It is therefore only for strongly sub-Poissonian fluctuations of N (Var N ≪ Var_Pois N ≡ N̄) that the second term of (30), that is, the effect of thermal fluctuations, is not dominated by the first.
Assuming this condition is satisfied in the experiment, we plot in figure 1 the canonical coefficient A_can(T), made dimensionless by the value A_Pois of A in a pure condensate with Poissonian fluctuations of N,

A_Pois = N̄ (∂_N µ_TF)²/ħ²   (32)

the whole divided by the small parameter of the Bogoliubov theory at zero temperature,⁵ proportional to f_nc(T = 0):

[ρ_0(0)a³]^{1/2} = (2√2/(15π^{1/2} N̄)) (µ_TF/ħω̄)³   (33)

The ratio thus formed is a universal function of k_B T/µ_TF. From the low- and high-energy expansions of the quasiparticle density of states,

f(ě) =_{ě→0} (32/3π) ě^{3/2} − 2√2 ě² + O(ě^{5/2})   (34)
f(ě) =_{ě→+∞} ě²/2 + ě + 1/2 + O(ě^{−1/2})   (35)

we obtain the low- and high-temperature expansions (Ť = k_B T/µ_TF = 1/β̌)

A_can(T)/(A_Pois [ρ_0(0)a³]^{1/2}) =_{T→0} (21ζ(7/2)/√2) Ť^{9/2} [1 + (4√2 π^{9/2}/(525 ζ(7/2))) Ť^{1/2} + O(Ť)]   (36)
 =_{T→+∞} (15π^{1/2}/(2√2)) (3ζ(3)²/(4ζ(4))) Ť³ [1 + β̌ (4ζ(2)/(3ζ(3)) − ζ(3)/(2ζ(4))) + O(β̌^{3/2})]   (37)

whose leading terms⁶ are shown as dashed lines in figure 1. Let us point out a particularly simple and elegant rewriting of the high-temperature equivalent, accidentally operational already at k_B T/µ_TF ≥ 2:

A_can(T)/A_Pois ∼_{k_BT≫µ_TF} (3ζ(3)/(4ζ(4))) (T/T_c^{(0)})³   (38)

where T_c^{(0)} is the critical temperature of the ideal Bose gas in a harmonic trap in the thermodynamic limit, k_B T_c^{(0)} = ħω̄ [N̄/ζ(3)]^{1/3}. In this limit, A_can(T) is therefore smaller than A_Pois by a factor proportional to the noncondensed fraction (T/T_c^{(0)})³ ≪ 1.
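The numerical prefactor of the high-temperature rewriting (38) is easy to evaluate; the sketch below computes 3ζ(3)/(4ζ(4)) by direct summation of the zeta series (an implementation choice, adequate for s ≥ 3) and illustrates the resulting suppression of A_can relative to A_Pois at a sample temperature T = 0.3 T_c^{(0)}.

```python
import math

def zeta(s, nterms=200_000):
    """Riemann zeta by direct summation (good enough for s >= 3)."""
    return sum(n ** (-s) for n in range(1, nterms + 1))

z3, z4 = zeta(3), zeta(4)
prefactor = 3 * z3 / (4 * z4)        # prefactor of (T/Tc)^3 in equation (38)
print(prefactor, abs(z4 - math.pi ** 4 / 90))

# Example: at T = 0.3 Tc, A_can/A_Pois is suppressed by the small
# noncondensed fraction (T/Tc)^3, as stated below equation (38).
suppression = prefactor * 0.3 ** 3
print(suppression)
```

The prefactor is about 0.83, so in this regime A_can/A_Pois is essentially the noncondensed fraction itself.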
Variance of the condensate dephasing in the microcanonical ensemble

We compute here the correlation function of dθ̂/dt, namely C_mc(τ), for a system prepared in the microcanonical ensemble, using in the thermodynamic limit ħω_α/µ_TF → 0 a semiclassical description of the quasiparticles, and taking the effect of their interactions into account through quantum-Boltzmann-type kinetic equations for their distribution in the classical phase space (r, p).

Figure 1: Coefficient of the ballistic spreading (1) of the condensate phase at times long compared to the quasiparticle collision time γ_coll^{-1}, for a gas of N̄ bosons prepared in the canonical ensemble in a harmonic trap, isotropic or not, as a function of temperature. The result holds in the thermodynamic limit, where the trapping frequencies ω_α are negligible compared to the Thomas-Fermi chemical potential µ_TF (24). Solid line: second term of equation (30), deduced from the canonical chemical potential (28) in the Bogoliubov approximation (weak interactions, T ≪ T_c). Dashed lines: low- and high-temperature equivalents (leading terms of equations (36,37)). Dividing A_can(T) by the small parameter (33) of Bogoliubov theory and by the value (32) of the ballistic coefficient for Poissonian fluctuations of N yields a universal function of k_B T/µ_TF.
Semiclassical form of the Bogoliubov Hamiltonian and of dθ̂/dt

In the semiclassical description, the motion of the Bogoliubov quasiparticles is treated classically, that is, they have at each instant a well-defined position r and momentum p [6], whose phase-space evolution derives from the Hamiltonian ǫ(r, p) given in equation (25) [22]:

dr/dt = ∂_p ǫ(r, p)   (39)
dp/dt = −∂_r ǫ(r, p)   (40)

but the bosonic quasiparticle field is treated quantum mechanically, by introducing phase-space occupation-number operators n̂(r, p); this keeps track of the discreteness of the quasiparticle numbers and of quantum-statistical effects (Bose law rather than the equipartition law of the classical field at equilibrium). In this semiclassical limit, the Bogoliubov Hamiltonian (8) (without interactions between quasiparticles) immediately takes the form

Ĥ_Bog^{sc} = E_0(N̂) + ∫ d³r d³p/(2πħ)³ ǫ(r, p) n̂(r, p)   (41)

One might believe, in view of formula (7), that dθ̂/dt admits a similar writing, with ǫ(r, p) replaced by (d/dN)ǫ(r, p). This is not so, the reason being that the derivative (d/dN)ǫ(r, p) is not constant along a classical trajectory. The operator dθ̂/dt belongs to a general class of quantum observables that we call Fock observables (diagonal in the quasiparticle Fock basis, hence functionals, here linear, of the occupation numbers of the Bogoliubov modes):

 = Σ_{k∈F+} a_k n̂_k with a_k = (⟨u_k|, ⟨v_k|) A(r̂, p̂) (|u_k⟩, |v_k⟩)ᵀ   (42)

where A(r̂, p̂) is a Hermitian 2×2 matrix operator and a_k its mean in the Bogoliubov mode of eigenenergy ǫ_k. The observable dθ̂/dt corresponds to the choice A_θ̇ = σ_z (d/dN)L, where σ_z is the third Pauli matrix and L(r̂, p̂) is the operator appearing in equation (9): indeed, by virtue of the Hellmann-Feynman theorem,⁷

(⟨u_k|, −⟨v_k|) (d/dN)L (|u_k⟩, |v_k⟩)ᵀ = dǫ_k/dN   (43)
7. The theorem is generalized here to the case of a non-Hermitian operator L, with (⟨u_k|, −⟨v_k|) the dual vector of the eigenvector (|u_k⟩, |v_k⟩) of L.

For these Fock operators we use the semiclassical correspondence principle

Â^{sc} = ∫ d³r d³p/(2πħ)³ ⟨a⟩(r, p) n̂(r, p)   (44)

where a(r, p) = (U(r, p), V(r, p)) A(r, p) (U(r, p), V(r, p))ᵀ, A(r, p) being the classical equivalent of A(r̂, p̂), and ⟨a⟩(r, p) represents the time average of a(r, p) over the unique classical trajectory passing through (r, p) at time t = 0:

⟨a⟩(r, p) ≡ lim_{t→+∞} (1/t) ∫₀ᵗ dτ a(r(τ), p(τ))   (45)

The vector (U(r, p), V(r, p)), normalized according to the condition U² − V² = 1, is the eigenvector of the classical equivalent L(r, p) of L(r̂, p̂) with eigenvalue ǫ(r, p); hence

(U, V)(r, p) = ( ½[ ((p²/2m)/ǫ(r,p))^{1/2} + ((p²/2m)/ǫ(r,p))^{−1/2} ], ½[ ((p²/2m)/ǫ(r,p))^{1/2} − ((p²/2m)/ǫ(r,p))^{−1/2} ] ) if U(r) < µ_TF, and (1, 0) otherwise   (46)

At the root of this correspondence principle lies the idea that the equivalent of a stationary quantum mode (|u_k⟩, |v_k⟩) in the classical world is a classical trajectory of the same energy, itself stationary as a whole under time evolution. To the quantum expectation a_k of the observable A(r̂, p̂) in the mode (|u_k⟩, |v_k⟩) one must therefore associate a trajectory average of the expectation a(r, p) of the classical equivalent A(r, p) in the local mode (U(r, p), V(r, p)). We therefore retain, as the semiclassical version of the derivative of the condensate phase operator:

−ħ dθ̂^{sc}/dt = µ_0(N̂) + ∫ d³r d³p/(2πħ)³ ⟨dǫ(r, p)/dN⟩ n̂(r, p)   (47)

Here, let us repeat, the expectation a(r, p) = dǫ(r, p)/dN is not a constant of the motion, unlike ǫ(r, p), so that, contrary to (41), the time average cannot be dispensed with.
On the usefulness of kinetic equations in computing the correlation function of dθ̂/dt

We must determine, in the semiclassical limit, the correlation function of dθ̂/dt for a system prepared in the microcanonical ensemble. Given equations (17) and (47), one has to compute

C_mc^{sc}(τ) = (1/ħ²) ∫ d³r d³p/(2πħ)³ ∫ d³r′ d³p′/(2πħ)³ ⟨dǫ(r, p)/dN⟩ ⟨dǫ(r′, p′)/dN⟩ ⟨δn̂(r, p, τ) δn̂(r′, p′, 0)⟩   (48)

where ⟨...⟩ represents the average in the state of the system and where we introduced the fluctuations of the phase-space occupation-number operators at time τ,

δn̂(r, p, τ) = n̂(r, p, τ) − ⟨n̂(r, p)⟩   (49)

The microcanonical ensemble can be viewed semiclassically as a constant-energy statistical mixture of phase-space Fock states

|F⟩ = |n(r″, p″)⟩_{(r″,p″)∈R⁶}   (50)

eigenstates of Ĥ_Bog^{sc}, where all the n(r″, p″) are integers. Let us first assume that the system is prepared in such a Fock state |F⟩ at the initial time t = 0. It is an eigenstate of δn̂(r′, p′, 0) with eigenvalue n(r′, p′) − ⟨n̂(r′, p′)⟩; it thus remains to compute in equation (48) the quantity ⟨F|δn̂(r, p, τ)|F⟩ = n(r, p, τ) − ⟨n̂(r, p)⟩ ≡ δn(r, p, τ) at τ > 0, that is, the evolution of the mean occupation numbers n(r, p, τ) in phase space given their initial values, taking into account (i) the Hamiltonian transport of the quasiparticles and (ii) the effect of collisions between quasiparticles through the three-quasiparticle Beliaev and Landau processes⁸ shown in figure 2. This is exactly what the usual quantum-Boltzmann-type kinetic equations do, with the difference that the semiclassical distribution function n(r, p, τ) does not correspond here to a local thermal-equilibrium state of the system, but to the mean occupation number at time τ conditioned on the initial state of the system being a quasiparticle Fock state.
The evolution equation for the mean occupation numbers n(r, p, τ) has the form

(D/Dτ) n(r, p, τ) + I_coll(r, p, τ) = 0   (51)

The first term is the convective derivative resulting from the classical Hamilton equations:

D/Dτ = ∂_τ + ∂_p ǫ(r, p) · ∂_r − ∂_r ǫ(r, p) · ∂_p   (52)

It conserves the phase-space density along a classical trajectory (Liouville's theorem). The second term describes the effect of collisions between quasiparticles, local in position space, which can occur, at Beliaev-Landau order, only at points where the Thomas-Fermi condensate density ρ_0(r) is nonzero (see the diagrams in figure 2):⁹

I_coll(r, p, τ) = ½ ∫ d³q/(2πħ)³ (2π/ħ) |2g ρ_0^{1/2}(r) A^p_{q,|p−q|}(r)|² δ(ǫ(r, q) + ǫ(r, p−q) − ǫ(r, p))
 × {−n(r, p, τ)[1 + n(r, q, τ)][1 + n(r, p−q, τ)] + n(r, q, τ) n(r, p−q, τ)[1 + n(r, p, τ)]}
 + ∫ d³q/(2πħ)³ (2π/ħ) |2g ρ_0^{1/2}(r) A^{|p+q|}_{p,q}(r)|² δ(ǫ(r, p) + ǫ(r, q) − ǫ(r, p+q))
 × {−n(r, p, τ) n(r, q, τ)[1 + n(r, p+q, τ)] + n(r, p+q, τ)[1 + n(r, p, τ)][1 + n(r, q, τ)]}   (53)

This process involves, at point r, a quasiparticle of momentum p (whose mean number n(r, p, τ) we must follow), a second outgoing or incoming quasiparticle of momentum q over which one integrates, and a third quasiparticle whose momentum is fixed by momentum conservation. In equation (53) the first integral accounts for the Beliaev processes; it carries a factor 1/2 to avoid double counting of the two-quasiparticle final or initial states (q, p−q) and (p−q, q); the second integral accounts for the Landau processes.
In both cases note: (i) the factor 2π/ħ, coming from Fermi's golden rule, (ii) the inclusion of the direct processes with a − sign (they depopulate mode p at point r) and of the inverse processes with a + sign, with the bosonic amplification factors 1 + n, (iii) the presence of an energy-conservation Dirac delta at point r. The reduced three-quasiparticle coupling amplitudes are given at point r by [14,32]

A^{p1}_{p2,p3}(r) = [s²(r, p₂) + s²(r, p₃) − s²(r, p₁)] / [4 s(r, p₁) s(r, p₂) s(r, p₃)] + (3/4) s(r, p₁) s(r, p₂) s(r, p₃)   (54)

with s(r, p) = U(r, p) + V(r, p). The kinetic equations indeed admit as a stationary solution the mean occupation numbers of thermal equilibrium¹⁰

n̄(r, p) = 1/(e^{βǫ(r,p)} − 1)   (55)

The well-known property of the Bose law, 1 + n̄ = e^{βǫ} n̄, makes this easy to check: combined with energy conservation, it leads to the exact compensation at every point of the direct and inverse processes, that is, to the vanishing of the quantities in curly brackets in equation (53), according to the principle of microreversibility; one also has (D/Dτ)n̄ = 0, since n̄(r, p) is a function of ǫ(r, p), a quantity conserved by Hamiltonian transport. As our system fluctuates weakly around equilibrium, we linearize the kinetic equations around n = n̄ as in reference [16], obtaining

(D/Dτ) δn(r, p, τ) = −Γ(r, p) δn(r, p, τ) + ∫ d³q/(2πħ)³ K(r, p, q) δn(r, q, τ)   (56)

The diagonal term comes from the fluctuation δn(r, p, τ) on the right-hand side of equation (53), and the term nonlocal in momentum comes from the fluctuations δn(r, q, τ) and δn(r, p ± q, τ), whose contributions are grouped by means of the changes of variables q′ = p ± q in ∫d³q.
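The microreversibility argument below equation (55) can be checked in a few lines: with the Bose law and energy conservation ǫ = ǫ₁ + ǫ₂, the direct and inverse products of occupation factors appearing in the curly brackets of (53) are equal. The inverse temperature and energies below are arbitrary test values.

```python
import math

# Detailed-balance check for the Beliaev-Landau collision terms in (53):
# nbar(eps)*(1+nbar(eps1))*(1+nbar(eps2)) == nbar(eps1)*nbar(eps2)*(1+nbar(eps))
# whenever eps = eps1 + eps2, by the identity 1 + nbar = exp(beta*eps)*nbar.
beta = 1.7                                       # arbitrary inverse temperature

def nbar(e):
    """Bose occupation number, equation (55); expm1 avoids cancellation."""
    return 1.0 / math.expm1(beta * e)

worst = 0.0
for (e1, e2) in [(0.1, 0.3), (0.5, 2.0), (1.3, 0.05), (4.0, 4.0)]:
    e = e1 + e2                                  # energy conservation
    direct = nbar(e) * (1 + nbar(e1)) * (1 + nbar(e2))
    inverse = nbar(e1) * nbar(e2) * (1 + nbar(e))
    worst = max(worst, abs(direct - inverse) / direct)
print(worst)
```

The relative difference is at the level of floating-point rounding, confirming the exact compensation of direct and inverse processes at equilibrium.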
The expression for K(r, p, q) is not needed in what follows, so let us only give that of the local damping rate of the Bogoliubov quasiparticles of momentum p at point r:

Γ(r, p) = (4πρ_0(r)g²/ħ) ∫ d³q/(2πħ)³ |A^p_{q,|p−q|}(r)|² δ(ǫ(r, q) + ǫ(r, p−q) − ǫ(r, p)) [1 + n̄(r, q) + n̄(r, p−q)]
 + (8πρ_0(r)g²/ħ) ∫ d³q/(2πħ)³ |A^{|p+q|}_{p,q}(r)|² δ(ǫ(r, p) + ǫ(r, q) − ǫ(r, p+q)) [n̄(r, q) − n̄(r, p+q)]   (57)

We then introduce, as in equation (48), the quantity

B(r′, p′) ≡ (1/ħ) ⟨dǫ(r′, p′)/dN⟩   (58)

to form the auxiliary unknown

X(r, p, τ) = ∫ d³r′ d³p′/(2πħ)³ B(r′, p′) ⟨δn̂(r, p, τ) δn̂(r′, p′, 0)⟩   (59)

Then X(r, p, τ) evolves according to the linearized kinetic equations (56), with the initial condition

X(r, p, 0) = ∫ d³r′ d³p′/(2πħ)³ Q(r, p; r′, p′) B(r′, p′)   (60)

where we introduced the equal-time covariance matrix of the quasiparticle numbers,

Q(r, p; r′, p′) = ⟨δn̂(r, p, 0) δn̂(r′, p′, 0)⟩   (61)

whose expression in the microcanonical ensemble will be related to the canonical one in due course, in subsection 4.3. The sought microcanonical correlation function of dθ̂^{sc}/dt then reads

C_mc^{sc}(τ) = ∫ d³r d³p/(2πħ)³ B(r, p) X(r, p, τ)   (62)

Solution in the secular-ergodic approximation

Our study is set in the weakly collisional regime Γ_th ≪ ω_α, where Γ_th is the typical thermal value of the quasiparticle damping rate Γ(r, p) and the ω_α are the trapping frequencies. The quasiparticles then have time to perform a large number of Hamiltonian oscillations in the trap before undergoing a collision. We may therefore make the secular approximation, which consists in replacing the coefficients of the linearized kinetic equation (56) by their time average over a trajectory. Thus

Γ(r, p) → (secular approx.) ⟨Γ⟩(r, p) = lim_{t→+∞} (1/t) ∫₀ᵗ dτ Γ(r(τ), p(τ))   (63)

and the auxiliary unknown X(r, p, τ) of equation (59), like the occupation-number fluctuations δn(r, p, t), depends only on the trajectory τ → (r(τ), p(τ)) passing through (r, p) and on time.
The problem remains formidable. Fortunately, as we have said, in a fully anisotropic trap the Hamiltonian dynamics of the quasiparticles should be strongly chaotic, except in the limits of very low energy ǫ ≪ µ_TF or very high energy ǫ ≫ µ_TF [22,23]. We therefore make the ergodic hypothesis, identifying the time average over a trajectory of energy ǫ with the "uniform" phase-space average over the energy shell ǫ:

⟨Γ⟩(r, p) = (ergodic hyp.) Γ(ǫ) = ⟨Γ(r, p)⟩_ǫ ≡ (1/ρ(ǫ)) ∫ d³r d³p/(2πħ)³ Γ(r, p) δ(ǫ − ǫ(r, p))   (64)

where the density of states ρ(ǫ) is given by equation (22). We shall come back to this hypothesis in section 4.5. In this case, the function X(r, p, τ) depends only on the energy ǫ = ǫ(r, p) and on time:

X(r, p, τ) = (ergodic hyp.) X(ǫ, τ)   (65)

We obtain the evolution equation for X(ǫ, τ) by averaging that of X(r, p, τ) over the energy shell ǫ:¹¹

∂_τ X(ǫ, τ) = −Γ(ǫ) X(ǫ, τ)
 − (1/2ρ(ǫ)) ∫₀^ǫ dǫ′ L(ǫ−ǫ′, ǫ′) {X(ǫ′, τ)[n̄(ǫ) − n̄(ǫ−ǫ′)] + X(ǫ−ǫ′, τ)[n̄(ǫ) − n̄(ǫ′)]}
 − (1/ρ(ǫ)) ∫₀^{+∞} dǫ′ L(ǫ, ǫ′) {X(ǫ′, τ)[n̄(ǫ) − n̄(ǫ+ǫ′)] − X(ǫ+ǫ′, τ)[1 + n̄(ǫ) + n̄(ǫ′)]}   (66)

with

Γ(ǫ) = (1/2ρ(ǫ)) ∫₀^ǫ dǫ′ L(ǫ−ǫ′, ǫ′)[1 + n̄(ǫ′) + n̄(ǫ−ǫ′)] + (1/ρ(ǫ)) ∫₀^{+∞} dǫ′ L(ǫ, ǫ′)[n̄(ǫ′) − n̄(ǫ+ǫ′)]   (67)

In these expressions, the first integral, restricted to energies ǫ′ below the energy ǫ of the quasiparticle under consideration, corresponds to the Beliaev processes, and the second integral to the Landau processes.
The integral kernel¹²

L(ǫ, ǫ′) = ∫ d³r d³p d³q/(2πħ)⁶ (8πg²ρ_0(r)/ħ) |A^{ǫ+ǫ′}_{ǫ,ǫ′}(r)|² δ(ǫ − ǫ(r, p)) δ(ǫ′ − ǫ(r, q)) δ(ǫ + ǫ′ − ǫ(r, p+q))   (68)
 = (32√2/π^{1/2}) [ρ_0(0)a³]^{1/2} (1/(ħµ_TF)) (µ_TF/ħω̄)³ ∫₀^{µ_TF} dµ_0 µ_0 (µ_TF − µ_0)^{1/2} ǫǫ′(ǫ+ǫ′) |A^{ǫ+ǫ′}_{ǫ,ǫ′}(µ_0)|² / { µ_TF^{5/2} (ǫ² + µ_0²)^{1/2} (ǫ′² + µ_0²)^{1/2} [(ǫ+ǫ′)² + µ_0²]^{1/2} }   (69)

involves the reduced coupling amplitude (54) at point r, reparametrized in terms of the energies ǫ_i = ǫ(r, p_i) (1 ≤ i ≤ 3) or even of the local Gross-Pitaevskii chemical potential µ_0 = gρ_0(r). It enjoys the symmetry property L(ǫ, ǫ′) = L(ǫ′, ǫ). Let us first write the result before giving some indications on how it is obtained (see also reference [16]). In the secular-ergodic approximation, the microcanonical correlation function of dθ̂^{sc}/dt reads

C_mc^{ergo}(τ) = ∫₀^{+∞} dǫ ρ(ǫ) B(ǫ) X(ǫ, τ)   (70)

Here B(ǫ) is the ergodic average of the quantity B(r, p) introduced in equation (58):

B(ǫ) = (1/ρ(ǫ)) ∫ d³r d³p/(2πħ)³ (1/ħ)(dǫ(r, p)/dN) δ(ǫ − ǫ(r, p))   (71)
 = [(dµ_TF/dN)/(πħ f(ě))] { 2ě^{1/2}(ě + 1) − √2(ě² + 1) argsh[(2ě)^{1/2}/(1 + ě²)^{1/2}] − ě^{1/2}(ě − 1) − (1 + ě)² acos[1/(1 + ě)^{1/2}] }   (72)

B(ǫ) =_{ě→0} (dµ_TF/ħdN) [−ě/5 − (3π/(40√2)) ě^{3/2} + O(ě²)],  B(ǫ) =_{ě→+∞} (dµ_TF/ħdN) [−1 + (32/3π) ě^{−3/2} + O(ě^{−5/2})]   (73)

with ě = ǫ/µ_TF and f(ě) the reduced density of states (27). The auxiliary unknown X(ǫ, τ) is the solution of the linear equation (66) with the initial condition

X(ǫ, 0) = n̄(ǫ)[1 + n̄(ǫ)] [B(ǫ) − Λǫ]   (74)

where Λ is the derivative of the microcanonical chemical potential with respect to the total energy E of the gas¹³ (divided by ħ), as in equation (19):

Λ = ∫₀^{+∞} dǫ ρ(ǫ) ǫ B(ǫ) n̄(ǫ)[1 + n̄(ǫ)] / ∫₀^{+∞} dǫ ρ(ǫ) ǫ² n̄(ǫ)[1 + n̄(ǫ)]   (75)

Equation (70) is the ergodic rewriting of equation (62). The initial condition (74) is the difference of two contributions:
- the first is the one that would be obtained in the canonical ensemble.
The ergodic average of the covariance matrix (61) would indeed simply be Q_can(ǫ, ǫ′) = n̄(ǫ)[1 + n̄(ǫ)] δ(ǫ − ǫ′)/ρ(ǫ);
- the second comes from a projection of the canonical fluctuations δn onto the subspace of zero-energy fluctuations δn, ∫₀^{+∞} dǫ ρ(ǫ) ǫ δn(ǫ) = 0, the only ones admissible in the microcanonical ensemble. The only subtle point is that this projection must be performed parallel to the stationary solution e_0(ǫ) = ǫ n̄(ǫ)[1 + n̄(ǫ)] of the linearized kinetic equations (66).¹⁴ One then checks that, for the given value of Λ, X(ǫ, 0) indeed lies in the subspace of zero-energy fluctuations.

Results and discussion

We present some results in graphical form, after a clever nondimensionalization making them independent of the trapping frequencies (provided these are sufficiently pairwise distinct to allow the ergodic hypothesis) and of the interaction strength¹⁵; it then suffices to know the temperature in units of the Thomas-Fermi chemical potential µ_TF. These results thus exemplify the universality class of fully anisotropic harmonic traps, different from that of the spatially homogeneous systems of reference [16]. An interesting by-product of our study is presented in figure 3: the damping rate Γ(ǫ), in the secular-ergodic approximation, of the Bogoliubov modes of energy ǫ. Since one knows, in a cold-atom experiment, how to excite such modes and follow their decay in time, this rate is measurable and our prediction

13. The deep reason for the appearance of this derivative is given in reference [16]. It explains why the kinetic equations reproduce, in the canonical ensemble, the ballistic term At² of equation (15) with the correct expression of the coefficient A = (∂_E µ_mc/ħ)² Var E.

14.
For this projection to be compatible with the linearized kinetic evolution, the direction of projection as well as the hyperplane onto which one projects must indeed be invariant under time evolution, the second point being ensured by energy conservation. The form of e_0(ǫ) follows from the fact that (55) remains a stationary solution under an infinitesimal variation of β, β → β + δβ, around its physical value.

15. In a first step, one shows that the results can depend on the trapping frequencies ω_α only through their geometric mean ω̄. This is a rather direct consequence of the ergodic hypothesis and of the fact that the observables involved here, including the Hamiltonian, depend on the quasiparticle position r only via the trapping potential U(r) = ½ m Σ_α ω_α² r_α². In the integral ∫d³r taking part in the ergodic average, one can then perform the isotropizing change of variables of note 3.

Figure 4 caption (continued): The same figure also shows the correlation function C_mc(t) of dθ̂/dt in the secular-ergodic approximation (70) as a function of time (red solid line, right axis) and, for (b), in a log-log inset at long times (black solid line), to show that a quasi-exponential decay law in the square root of time (fit by t⁶ exp(−C√t), red dashed line) is followed by a power law ∝ t⁻⁵ (blue dashed line). As in figure 3, multiplying the quantities on the axes by suitably chosen factors makes these results universal.

Our prediction can be compared with experiments, at least within its regime of validity, in particular that of classical motion ǫ ≫ ħω_α (deviations from the ergodic hypothesis are discussed in section 4.5). The limiting behaviors

Γ(ǫ) ∼_{ǫ→0} (3I/4) (ǫ/µ_TF)^{1/2} (k_B T/ħ) [ρ_0(0)a³]^{1/2} with I = 4.921208...   (76)
Γ(ǫ) ∼_{ǫ→+∞} (128√2/(15√π)) (µ_TF²/ħǫ) [ρ_0(0)a³]^{1/2}   (77)

shown as dashed lines in figure 3,¹⁶ are established in Appendix A. They differ markedly from the spatially homogeneous case, where the damping rate vanishes linearly in ǫ at low energy and diverges as ǫ^{1/2} at high energy. In particular, the ǫ^{1/2} behavior (76) results from the existence of the Thomas-Fermi boundary of the condensate. Let us come back to the phase spreading of the condensate in the microcanonical ensemble. In figure 4 we plot as a black solid line the variance of the condensate dephasing θ̂(t) − θ̂(0) as a function of time t in the ergodic approximation (70), at the temperatures T = µ_TF/k_B and T = 10µ_TF/k_B.

16. For k_B T = µ_TF, Γ(ǫ)/ǫ^{1/2} exhibits a misleading maximum near ǫ/µ_TF = 0.02, about 5% above its limit at ǫ = 0.

The variance starts off parabolically in time, which corresponds to the precollisional regime t ≪ t_coll, where t_coll is the typical collision time between the quasiparticles: one can then assume that C_mc(τ) ≃ C_mc(0), so that the integral contribution to equation (15) is ≃ C_mc(0) t². At long times, t ≫ t_coll, the correlation function of dθ̂/dt seems to tend rapidly to zero (red solid line); a more thorough numerical study (see the inset of figure 4b) reveals, however, the presence of a power-law tail t^{−α},

C_mc(t) ∼_{t→+∞} C/t⁵   (78)

The exponent α = 5 is larger than the exponent α_h = 3 of the decay law of C_mc(t) in the spatially homogeneous case [16].
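A toy integral makes the origin of the t⁻⁵ tail (78) plausible: at long times the surviving contribution comes from the low-energy modes, whose damping rate grows as √ǫ in the trapped case, cf. (76); weighting them by an ǫ^{3/2} factor (an assumed model weight, not the article's full expression) gives ∫₀^∞ ǫ^{3/2} exp(−√ǫ t) dǫ = 2Γ(5)/t⁵ = 48/t⁵. All prefactors are set to one here; this is a heuristic sketch, not the article's computation.

```python
import math

def C_toy(t, emax=10.0, n=400_000):
    """Trapezoidal quadrature of int_0^emax eps^(3/2) exp(-sqrt(eps)*t) deps,
    a toy for the long-time behaviour with sqrt(eps) damping."""
    h = emax / n
    s = 0.0
    for i in range(1, n):
        e = i * h
        s += e ** 1.5 * math.exp(-math.sqrt(e) * t)
    # endpoints: the integrand vanishes at eps = 0 and is negligible at emax
    return h * s

c10, c20 = C_toy(10.0), C_toy(20.0)
print(c10, c20, c10 / c20)        # the ratio approaches 2^5 = 32
```

Doubling t divides the toy correlation function by 2⁵, i.e. a clean t⁻⁵ tail; with a damping rate linear in ǫ, as in the homogeneous case, the same construction yields a smaller exponent.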
Its value can be recovered through a crude heuristic approximation, called the rate approximation or projected Gaussian approximation [15], already used successfully for α_h in that same reference [16]: one keeps in the linearized kinetic equations (66) only the pure decay term −Γ(ǫ)X(ǫ, τ) on the right-hand side, which makes their integration immediate and leads to the estimate¹⁷

C_mc(t) ≈ ∫₀^{+∞} dǫ ρ(ǫ) [B(ǫ) − Λǫ]² n̄(ǫ)[1 + n̄(ǫ)] e^{−Γ(ǫ)t}   (79)

At long times, the variance of the dephasing thus follows the diffusive law

Var_mc[θ̂(t) − θ̂(0)] =_{t≫t_coll} 2D(t − t_0) + o(1)   (80)

shown as dashed lines in figure 4, the delay t_0 being due to the nonzero width of the correlation function C_mc(τ):

D = ∫₀^{+∞} dτ C_mc(τ)   (81)
t_0 = ∫₀^{+∞} dτ τ C_mc(τ) / ∫₀^{+∞} dτ C_mc(τ)   (82)

We plot the phase diffusion coefficient D of the condensate as a function of temperature in figure 5a. It grows at high temperature (k_B T > µ_TF) much faster than in the spatially homogeneous case: there the growth was only linear (up to logarithmic factors), here it seems to go as T⁴ (dotted line in the figure). The diffusion delay time t_0 is plotted as a function of temperature in figure 5b. We compare it to the estimate t_coll ≃ 1/Γ(ǫ = k_B T) of the collision time between quasiparticles, shown as a dashed line: the latter accounts well for the sharp rise of t_0 at low temperature, but reproduces the rise at high temperature only with much delay and a strong underestimate. The rise of t_0 is well represented by a T^{−3/2} law at low temperature, and seems to be linear in T at high temperature (see the dotted lines). Let us try to recover the observed power laws by a simple argument.
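The diffusive law (80) with the moments (81)-(82) can be verified on a model correlation function. Below, C(τ) = C₀ e^{−τ/τ_c} is an illustrative choice (not the article's C_mc), for which D = C₀τ_c and t_0 = τ_c; the variance 2∫₀ᵗ(t − τ)C(τ)dτ from (15) is evaluated numerically and compared with 2D(t − t_0) at t ≫ τ_c.

```python
import math

# Model correlation function: C(tau) = C0 * exp(-tau/tauc), toy parameters.
C0, tauc = 3.0, 0.5

def var_dephasing(t, n=200_000):
    """Trapezoidal quadrature of 2*int_0^t (t - tau) C(tau) dtau, i.e. the
    integral contribution to the variance formula (15)."""
    h = t / n
    s = 0.5 * (t * C0 + 0.0)      # endpoints: tau = 0 term; tau = t term vanishes
    for i in range(1, n):
        tau = i * h
        s += (t - tau) * C0 * math.exp(-tau / tauc)
    return 2.0 * h * s

D, t0 = C0 * tauc, tauc           # equations (81) and (82) for this model
t = 50.0 * tauc                   # a time long compared to the memory time
vnum = var_dephasing(t)
print(vnum, 2 * D * (t - t0))
```

At t = 50 τ_c the two expressions agree to quadrature accuracy, illustrating how the finite width of C_mc(τ) produces the delay t_0 of the asymptote.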
If there is a scaling law, it must survive the rate approximation on the linearized kinetic equations; we may therefore start from the approximate expression (79). At high temperature, the approximation L(ǫ − ǫ′, ǫ′) ≃ const triggers, however, a logarithmic infrared divergence in the integrals over ǫ′ in (67), which is cut off at ǫ′ ≈ µ_TF, so that¹⁹

ħΓ(k_B T ǭ) / (µ_TF [ρ_0(0)a³]^{1/2}) ∼_{k_BT/µ_TF→+∞} (512√2/(15π^{1/2})) (1/ǭ²) (µ_TF/k_B T) ln(k_B T/µ_TF)   (83)

with ǭ = ǫ/k_B T held fixed. All this indeed leads to the scaling laws D ≈ T⁴ and t_0 ≈ T at high temperature, up to logarithmic factors. At low temperature we proceed in the same way. The behavior of Γ(k_B T ǭ) is in T^{3/2} when T → 0 at fixed ǭ, as the equivalent (76) suggested and as a calculation confirms. The only trap to avoid is that B(k_B T ǭ) − Λ k_B T ǭ is of order T^{3/2} when T → 0, and not of order T as one might believe, because the leading terms of B(k_B T ǭ) and of Λ k_B T ǭ, both linear in k_B T ǭ, cancel exactly, see equation (75). This leads to the exact power laws (without logarithmic corrective factors) D ∝ T⁴ and t_0 ∝ T^{−3/2} at low temperature; only the second is accessible on the temperature range of figure 5, but we have checked the first numerically. To encourage an experimental study with cold atoms, let us end with a brief study of the fundamental limits to the observability of the phase diffusion of a trapped condensate.
There are of course several practical difficulties to overcome, such as (i) a significant reduction of the fluctuations of the energy and of the particle number in the gas, to attenuate the ballistic blurring of the phase, a dangerous competitor of diffusion; (ii) the implementation of a sensitive and unbiased detection scheme for the condensate phase shift, or for the Ramsey-type coherence function g₁(t), as proposed in references [17,18]; (iii) the reduction of the technical noise of the experimental apparatus; (iv) the trapping of the atoms in a cell under a vacuum good enough to make losses of cold atoms by collisions with the residual hot gas (one-body losses) negligible: lifetimes of the order of an hour are conceivable in a cryogenic environment [33,34]. These practical aspects vary from one group to another and lie beyond the scope of this article. By contrast, particle losses due to three-body collisions, with the formation of a dimer and a fast atom, are intrinsic to alkali gases and constitute a fundamental limit. Each loss of an atom changes, at a random time, the rate of change dθ̂/dt of the phase, since the latter is a function of N, which adds a stochastic component to its evolution [16,35]. To compute the variance of the condensate phase shift induced by three-body losses, we work at zeroth order in the non-condensed fraction, that is, for a pure condensate at zero temperature prepared at time 0 with an initially well-defined number N of

19. A more precise calculation amounts to replacing, in (83), the symbol ∼ by = and the factor ln(k_B T/µ_TF) by [ln(k_B T/µ_TF) + ǭ/4 + ln(1 − e^{−ǭ}) + 31/15 − 3 ln 2 + O(µ_TF/k_B T)].
particles, as in reference [16], from which we can take over (adapting them to the trapped case and to three-body losses) expressions (G7) and (64):

Var_loss[θ̂(t) − θ̂(0)] = (1/ℏ²)(dµ_TF/dN)² ∫_0^t dτ ∫_0^t dτ′ [⟨N(τ)N(τ′)⟩ − ⟨N(τ)⟩⟨N(τ′)⟩] ∼_{Γ₃ t→0} (1/ℏ²)(dµ_TF/dN)² N̄ Γ₃ t³/3   (84)

We have introduced the decay rate Γ₃ of the particle number, related as follows to the three-body loss constant K₃ and to the Thomas-Fermi density profile ρ₀(r) of the condensate:

dN̄/dt ≡ −Γ₃ N̄ = −K₃ ∫ d³r [ρ₀(r)]³   (85)

What about the case of a completely anisotropic trap? We wish to test the ergodic hypothesis for two physical quantities. The first appears in our linearized kinetic equations: the damping rate Γ(r, p). The second enters the initial conditions of the correlation function C_mc(τ) of dθ̂/dt: the quantity dǫ(r, p)/dµ_TF. For a uniform sampling of the energy shell ǫ, that is, with the probability distribution δ(ǫ − ǫ(r, p))/ρ(ǫ) in phase space, we show in figure 6 the histograms of these quantities after time averaging along each trajectory for durations t = 0, t = 5000/ω and t = 250 000/ω respectively, at energy ǫ = k_B T = µ_TF, for incommensurate trapping frequencies.^20 The time averaging indeed leads to a spectacular narrowing of the probability distribution, which peaks around the ergodic mean (dashed line on the left), in support of the ergodic hypothesis. This narrowing continues over very long times, but never overcomes a small side peak far from the ergodic mean. An examination of the trajectories contributing to the side peak shows that they are perturbations of stable linear trajectories along the trap eigenaxis of maximal stiffness. The time average of the two quantities considered along these linear trajectories is marked by the dashed line on the right in figure 6; it is indeed close to the peak in question. The stability diagram of a linear trajectory along an arbitrary eigenaxis α of the trap, with respect to a perturbation along another eigenaxis β, is shown in figure 7, in the plane (energy, ratio ω_β/ω_α). It shows that the linear trajectory along the most confining axis is indeed stable, at all energies.^21 The Poincaré sections of planar trajectories in the planes αOβ in figure 8 quantify the width of the stability island and reveal the existence of secondary islands, etc. Our classical dynamics is therefore not fully ergodic, even at energies ǫ ≈ µ_TF, even in the completely anisotropic case. To measure quantitatively the error introduced by the ergodic hypothesis in the computation of C_mc(0) and of C_mc(τ > 0),

20. The equations of motion (39, 40), written collectively in the form dX/dt = f(X), are integrated numerically with a second-order semi-implicit scheme, X(t + dt) = X(t) + dt [1 − (dt/2)M]^{−1} f(X(t)), where M is the first differential of f(X) at X(t) [38]. If the trajectory crosses the surface of the condensate between t and t + dt, one must determine the crossing time t_s with an error O(dt)³, then apply the semi-implicit scheme successively on [t, t_s] and [t_s, t + dt], to cope with the discontinuity of f(X) and of its derivatives.

21. The linear trajectory of a quasiparticle of energy ǫ along the trap eigenaxis Oα reads m^{1/2} ω_α r_α(t) = |µ_TF + iǫ| sin(√2 ω_α t)/√G(t) and p_α(t)/(2m)^{1/2} = ǫ/√G(t), with G(t) = µ_TF + |µ_TF + iǫ| cos(√2 ω_α t).
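Equation (85) ties Γ₃ to K₃ and the Thomas-Fermi profile. For an isotropic Thomas-Fermi profile ρ₀(r) = ρ₀(0)(1 − r²/R²), the standard TF integrals give the closed form Γ₃ = (8/21) K₃ ρ₀(0)² (a textbook exercise, not stated in the text); a sketch checking this by quadrature, with illustrative parameter values:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule (kept explicit for NumPy-version portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Gamma_3 = K3 * Int[rho0(r)^3 d^3r] / N  (eq. 85 divided by N), evaluated for
# an isotropic TF profile; K3, rho0_max, R are illustrative units, not the paper's.
K3, rho0_max, R = 1.0, 1.0, 1.0
r = np.linspace(0.0, R, 200001)
rho0 = rho0_max * (1.0 - (r / R) ** 2)

N = trapezoid(4 * np.pi * r**2 * rho0, r)        # total particle number
I3 = trapezoid(4 * np.pi * r**2 * rho0**3, r)    # integral of rho0^3
Gamma3 = K3 * I3 / N
print(Gamma3, 8.0 / 21.0 * K3 * rho0_max**2)     # the two values agree
```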
This corresponds to the choice r_α(0) = 0, p_α(0) ≥ 0, and holds for −t_s ≤ t ≤ t_s, where the time to reach the surface of the condensate is given by √2 ω_α t_s = acos[(ǫ − µ_TF)/|µ_TF + iǫ|]. Outside the condensate, the quasiparticle oscillates harmonically like a free particle for a time 2ω_α^{−1} atan[(ǫ/µ_TF)^{1/2}] before re-entering the condensate, crossing it in a time 2t_s, and so on. Knowing the trajectory makes the numerical linear stability analysis immediate. It also allows an analytic computation of the time average of dǫ(r, p)/dµ_TF along the linear trajectory; setting ǭ = ǫ/µ_TF, the result reads

overline{dǫ(r, p)/dµ_TF} = [ln((1 + ǭ + √(2ǭ))/(1 + ǭ²)^{1/2}) − √2 atan√ǭ] / [acos((ǭ − 1)/(1 + ǭ²)^{1/2}) + √2 atan√ǭ]

At high energy we obtain for the quantity Γ(r, p) the following analytic prediction:^23

⟨1/Γ̄(r, p)⟩_ǫ ∼_{ǫ→+∞} (π^{5/2}/56√2) ℏǫ / (µ_TF² [ρ₀(0)a³]^{1/2})   (89)

It differs from the ergodic prediction (77) by a numerical coefficient, and reproduces well the results of the numerical simulations (see the red dashed line in figure 9b). This forbids us from computing the diffusion coefficient D and the diffusion delay t₀ in the secularo-ergodic approximation at too high a temperature.
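The second-order semi-implicit scheme of note 20, X(t + dt) = X(t) + dt [1 − (dt/2)M]^{−1} f(X(t)), can be sketched on a linear test problem. Here a plain 1D harmonic oscillator stands in for the quasiparticle equations of motion (39, 40), which are not reproduced in this excerpt; all parameter values are illustrative:

```python
import numpy as np

# Semi-implicit second-order scheme from note 20, applied to f(X) = M X with
# X = (x, p) for a 1D harmonic oscillator.  For this linear f, the step is the
# Cayley map (I - dt/2 M)^{-1}(I + dt/2 M), which conserves the oscillator
# energy exactly (up to roundoff) while being only second-order accurate in phase.
m, omega, dt = 1.0, 1.0, 0.01             # illustrative units
M = np.array([[0.0, 1.0 / m], [-m * omega**2, 0.0]])   # d/dt (x, p) = M (x, p)
step = np.linalg.solve(np.eye(2) - 0.5 * dt * M, M)    # (I - dt/2 M)^{-1} M

X = np.array([1.0, 0.0])                               # x(0) = 1, p(0) = 0
E0 = 0.5 * X[1]**2 / m + 0.5 * m * omega**2 * X[0]**2
for _ in range(10000):                                 # integrate up to t = 100
    X = X + dt * step @ X
E = 0.5 * X[1]**2 / m + 0.5 * m * omega**2 * X[0]**2
print(abs(E - E0))                                     # energy drift ~ roundoff
```

In the paper's actual use, f(X) is nonlinear, so M (the Jacobian at X(t)) is recomputed at every step, and the crossing of the condensate surface is handled as described in note 20.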
As for the quantity dǫ(r, p)/dµ_TF, which tends to −1 at high energy, the discrepancy can only be significant at low energy; it in fact becomes so only at very low energy, and would affect our ergodic computation of D and t₀ only at temperatures k_B T ≪ µ_TF rarely reached in cold-atom experiments.

Conclusion

Motivated by recent experimental progress in the manipulation of trapped cold-atom gases [1,2,3], we have studied theoretically the coherence time and phase dynamics of a Bose-Einstein condensate in an isolated, harmonically trapped Bose gas, an important fundamental problem for interferometric applications. The variance of the phase shift accumulated by the condensate after a time t grows without bound with t, which limits the intrinsic coherence time of the gas. For t ≫ t_coll, where t_coll is the typical collision time between Bogoliubov quasiparticles, it becomes a quadratic function of time,

23. To leading order in ǫ, one obtains Γ̄(r, p) by averaging the equivalent (97) (in which µ₀ = gρ₀(r)) over a harmonic trajectory unperturbed by the condensate, r_α(t) = A_α cos(ω_α t + φ_α), ∀α ∈ {x, y, z}. The trick is to regard the quantity gρ₀(r) to be averaged as a function f(θ) of the angles θ_α = ω_α t + φ_α. It is a periodic function of period 2π in each direction, expandable in a Fourier series, f(θ) = Σ_{n∈Z³} c_n e^{in·θ}. In the incommensurate case, n·ω ≠ 0 and the time average of e^{in·θ} vanishes ∀n ∈ Z³*, so that the time average of f(θ) is c₀. In the usual integral expression of c₀, one performs the change of variable x_α = X_α cos θ_α, where X_α = (ǫ_α/µ_TF)^{1/2} and ǫ_α is the energy of the motion along Oα.
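Note 23's Fourier argument, that for incommensurate frequencies the time average of a periodic function of the phases θ_α = ω_α t + φ_α equals its zeroth Fourier coefficient c₀, is easy to check numerically. The test function below is an illustrative example (not from the paper), with c₀ = ⟨cos²⟩⟨cos²⟩ = 1/4:

```python
import numpy as np

# Time average of f(theta_x, theta_y) = cos^2(theta_x) cos^2(theta_y) for the
# incommensurate pair (1, sqrt(2)): all oscillatory Fourier components average
# out as 1/T, leaving the zeroth coefficient c_0 = 1/4.
omega = np.array([1.0, np.sqrt(2.0)])     # incommensurate frequencies
phi = np.array([0.3, 1.1])                # arbitrary initial phases
t = np.linspace(0.0, 20000.0, 2000001)    # long, finely sampled time grid
f = np.cos(omega[0] * t + phi[0])**2 * np.cos(omega[1] * t + phi[1])**2

avg = float(f.mean())                     # approximates the time average
print(avg)                                # close to c_0 = 0.25
```

For a commensurate pair (e.g. ω_y/ω_x rational) the cross terms with n·ω = 0 would survive and the long-time average would depend on the initial phases.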
One then lets the X_α tend to +∞ under the integral sign to obtain the high-energy equivalent of ℏΓ̄(r, p)/(µ_TF [ρ₀(0)a³]^{1/2}). Averaging the inverse of this equivalent over the probability distribution 2ǫ^{−2} δ(ǫ − Σ_α ǫ_α) of the energies per direction for a harmonic oscillator of total energy ǫ, one arrives at (89).

In the law (90), θ̂ is the phase operator of the condensate. This asymptotic law has the same form as in the previously studied spatially homogeneous case [16], which was not guaranteed, but the coefficients of course differ. To compute them, we consider the thermodynamic limit in the trap, in which the particle number tends to infinity, N → +∞, at fixed temperature T and fixed Gross-Pitaevskii chemical potential µ_GP. This requires the reduced trapping frequencies to tend to zero, ℏω_α/µ_GP → 0, which we advantageously reinterpret as a classical limit ℏ → 0. The dominant term At² is due to the fluctuations, in the initial state, of the quantities conserved by the time evolution, N and E, where E is the total energy of the gas. We give an explicit expression (16)-(19)-(20) for the coefficient A in a generalized ensemble, an arbitrary statistical mixture of microcanonical ensembles with at most normal fluctuations of N and E. In this case, A = O(1/N). We obtain a simpler form (30) in the case of a statistical mixture of canonical ensembles with the same temperature but a variable particle number. At the usual temperatures, larger than µ_GP/k_B, and for Poissonian fluctuations of the particle number, the contribution to A of the thermal fluctuations of E is made negligible by a factor of order the non-condensed fraction ∝ (T/T_c)³. The variance of N must be reduced by as much in order to see the effect of the thermal fluctuations on the ballistic spreading of the condensate phase.
Var[θ̂(t) − θ̂(0)] = At² + 2D(t − t₀) + o(1)   (90)

The subdominant term 2D(t − t₀) does not depend on the ensemble in which the system is prepared, at least at the first nonzero order 1/N at the thermodynamic limit, and it is the only one that survives in the microcanonical ensemble. The computation of its two ingredients, the phase diffusion coefficient D and the diffusion delay t₀, requires knowing at all times the correlation function of dθ̂/dt in the microcanonical ensemble, and hence solving linearized kinetic equations for the occupation numbers of the Bogoliubov quasiparticles. It is indeed the temporal fluctuations of these occupation numbers, for a given realization of the system, that make the evolution of the condensate phase stochastic. To this end we adopt a semiclassical description, in which the motion of the quasiparticles in the trapped gas is treated classically in phase space (r, p), but the bosonic quasiparticle field is treated quantum mechanically, through the occupation-number operators n̂(r, p). In quantum observables of the form Â = Σ_k a_k n̂_k, such as dθ̂/dt, the mean a_k and the sum over the Bogoliubov quantum modes k are then replaced, according to a correspondence principle, by a time average and an integral over the classical trajectories (see equations (42)-(44)). The linearized kinetic equations for the fluctuations δn(r, p) comprise a transport part, along the classical Hamiltonian motion of the quasiparticles, and a collision integral, local in position, which describes the three-quasiparticle Beliaev-Landau interaction processes. They take the same form as the linearized quantum Boltzmann equations for the semiclassical distribution function n(r, p, t) of the quasiparticles in phase space.
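In an experiment or a simulation, the three constants in the long-time law (90) can be extracted from variance data by a plain quadratic fit, since Var = a t² + b t + c with A = a, D = b/2 and t₀ = −c/b. A sketch on synthetic data with made-up coefficients (not the paper's values):

```python
import numpy as np

# Recover A, D, t0 from Var(t) = A t^2 + 2 D (t - t0) via a degree-2 fit.
A_true, D_true, t0_true = 1e-4, 5e-3, 2.0     # illustrative values only
t = np.linspace(50.0, 500.0, 1000)            # t >> t_coll regime of eq. (90)
var = A_true * t**2 + 2 * D_true * (t - t0_true)

a, b, c = np.polyfit(t, var, 2)               # least-squares quadratic fit
A_fit, D_fit, t0_fit = a, b / 2, -c / b
print(A_fit, D_fit, t0_fit)
```

With noisy data the same fit applies; only the confidence intervals on (A, D, t₀) widen.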
They are simplified in the secular limit ω_α t_coll ≫ 1 and under the hypothesis of ergodic classical motion of the quasiparticles. This hypothesis, according to which the fluctuations δn(r, p) averaged over a trajectory depend only on the energy of that trajectory, holds only if the trap is completely anisotropic; in that case we give it a careful numerical justification. The sought quantities D and t₀, suitably made dimensionless, are universal functions of k_B T/µ_TF, where µ_TF is the Thomas-Fermi limit of µ_GP, and are in particular independent of the ratios ω_α/ω_β of the trapping frequencies. They are shown in figure 5. An interesting and more directly measurable by-product of our study is the damping rate Γ(ǫ) of the Bogoliubov modes of energy ǫ in the trap. At fixed reduced temperature k_B T/µ_TF, it too is described by a universal function of ǫ/µ_TF, independent of the trapping frequencies; see figure 3. These results belong to a new universality class, that of completely anisotropic harmonic traps, quite different from the theoretically better explored class of spatially homogeneous systems, and will, we hope, soon receive experimental confirmation.

Figure 1: Coefficient of ballistic spreading

Figure 2: Beliaev and Landau processes involving three quasiparticles and the corresponding coupling amplitudes.
Figure 3: Beliaev-Landau damping rate Γ(ǫ) of the Bogoliubov modes of a condensate in a completely anisotropic harmonic trap as a function of the mode energy ǫ, at the thermodynamic limit, in the secularo-ergodic approximation (63, 64, 67), at temperature (a) k_B T = µ_TF and (b) k_B T = 10µ_TF, where µ_TF is the Thomas-Fermi chemical potential of the condensate. With the chosen scaling the curve is universal; in particular it does not depend on the trapping frequencies ω_α. The Bogoliubov modes considered must be in the classical-motion regime ǫ ≫ ℏω_α, and the system must be in the almost-pure-condensate regime, [ρ₀(0)a³]^{1/2} ≪ 1 and T ≪ T_c, where ρ₀(0) = µ_TF/g is the condensate density at the trap center and T_c the critical temperature. Dashed lines: the low- and high-energy equivalents (76) and (77) of Γ(ǫ).

Figure 5: In the conditions of the previous figures 3 and 4, (a) diffusion coefficient D of the phase of the harmonically trapped condensate and (b) delay time t₀ of the phase diffusion as a function of temperature (black solid line), deduced from equations (81, 82) and (70). These quantities are independent of the statistical ensemble. With the chosen scalings, the curves are universal. Dotted lines: in (a) a fit by a T⁴ law at high temperature, and in (b) fits by a law linear in T at high temperature and by a T^{−3/2} law at low temperature. The power laws shown at high temperature are only indicative, because they certainly omit logarithmic factors in k_B T/µ_TF. In (b) the dashed line gives the estimate 1/Γ(ǫ = k_B T) of the typical collision time t_coll between quasiparticles.

Figure 6: Histogram of the time averages of the physical quantities (a) dǫ(r, p)/dµ_TF and (b) Γ(r, p), for 10⁴ independent initial values (r(0), p(0)) drawn uniformly on the energy shell ǫ = µ_TF and a Hamiltonian evolution (39, 40) of the Bogoliubov quasiparticles over a variable duration t: t = 0 (hollow bars, black line), t = 5 × 10³/ω (solid red bars), t = 2.5 × 10⁵/ω (solid black bars). The harmonic potential is completely anisotropic, with incommensurate trapping frequencies (ratios ω_x : ω_y : ω_z = 1 : √3 : √5 − 1). The temperature, which enters the damping rate Γ(r, p), is T = µ_TF/k_B.
Black vertical dashed lines: on the left, the ergodic value (average of the physical quantity over the energy shell ǫ); on the right, the time average of the quantity over one period of the linear trajectory of energy ǫ along the direction Oy (the most confining axis of the trap), obtained analytically for (a) (see note 21) and numerically for (b).

Figure 7: For the classical Hamiltonian dynamics of Bogoliubov quasiparticles in a harmonic potential, stability of the linear motion along an eigenaxis Oα of the trap under an infinitesimal initial perturbation (a displacement) along another eigenaxis Oβ, as a function of the energy ǫ of the trajectory and of the ratio ω_β/ω_α of the trapping frequencies (the hatched areas are stable).

Figure 8: For the classical Hamiltonian dynamics of Bogoliubov quasiparticles in a harmonic potential, Poincaré sections in the plane (r_y = 0, p_y(ǫ) > 0) of planar trajectories in xOy (200 independent trajectories, evolution time 5000/ω), with a ratio ω_x : ω_y taking all possible values in the trap of figure 6. The sections are ordered by increasing ratio ω_x/ω_y from left to right and top to bottom (indeed, 1/√3 < (√5 − 1)/√3 < 1/(√5 − 1) < 1). This reveals that the Poincaré section is more chaotic the larger the ratio ω_x/ω_y; r_x is in units of (µ_TF/mω²)^{1/2} and p_x in units of (mµ_TF)^{1/2}.

Figure 9: Visualization of the error introduced by the ergodic hypothesis Ō = ⟨O⟩_ǫ on two physical quantities O(r, p) involved in the diffusion of the condensate phase, as a function of the energy ǫ: (a) for O(r, p) = dǫ(r, p)/dµ_TF, we compare as in equation (87) ⟨Ō²⟩_ǫ (red circles) to its ergodic approximation ⟨O⟩_ǫ² from (72) (black solid line); (b) for O(r, p) = Γ(r, p), we compare as in equation (88) ⟨1/Ō⟩_ǫ (red circles) to its ergodic approximation 1/⟨O⟩_ǫ (black solid line). The time average Ō is computed over an evolution time t = 5 × 10⁴/ω of the quasiparticles in the trap of figure 6; the average over the energy shell ǫ is taken over 200 independent trajectories, with initial conditions (r, p) drawn according to the uniform law δ(ǫ − ǫ(r, p))/ρ(ǫ), which leads to a statistical uncertainty represented by the error bars in the figure.
Dashed lines in (b): asymptotic equivalents (89) of ⟨1/Γ̄⟩_ǫ (in red) and (77) of 1/⟨Γ⟩_ǫ (in black).

11. The simplest way is to average the full kinetic equations (51), then linearize the result around the stationary solution (55).

12. To obtain (69), equation (68) was reduced to a single integral over the modulus r (after formally reducing the problem to that of an isotropic trap, as in note 3) by integrating in spherical coordinates over p, q and over u, the cosine of the angle between p and q. In ∫_{−1}^{1} du, the argument of the third Dirac delta vanishes at one point u₀ and only one, given the inequalities ǫ^Bog_{|p−q|} ≤ ǫ^Bog_p + ǫ^Bog_q ≤ ǫ^Bog_{p+q} satisfied by the Bogoliubov dispersion relation ǫ^Bog_p = [(p²/2m)(p²/2m + 2µ₀)]^{1/2}, ∀µ₀ ≥ 0.
Figure 4: In the conditions of figure 3, for a system prepared at time 0 in the microcanonical ensemble at temperature (a) k_B T = µ_TF or (b) k_B T = 10µ_TF, and isolated from its environment during its subsequent evolution, variance of the phase shift θ̂(t) − θ̂(0) of the condensate as a function of time t (black solid line) and its asymptotic diffusive behavior (80) (dashed line).
Coherence time of a Bose-Einstein condensate in an isolated harmonically trapped gas

Yvan Castin, Alice Sinatra

Laboratoire Kastler Brossel, ENS-PSL, CNRS, Sorbonne Université et Collège de France, Paris, France

Keywords: Bose gases; Bose-Einstein condensate; temporal coherence; trapped gases; ultracold atoms

Abstract: We study the equilibrium phase dynamics of a condensate in a harmonically trapped, weakly interacting Bose gas isolated from its environment. We find that, after a time long compared with the typical collision time between Bogoliubov quasiparticles, the variance of the condensate phase shift generally contains a ballistic term quadratic in time and a diffusive term affine in time. We give analytic expressions for the corresponding coefficients, in the large-system limit, in the weakly collisional regime and in the ergodic approximation for the quasiparticle motion. Suitably rescaled, they are described, as are the mode damping rates, by universal functions of the temperature divided by the Thomas-Fermi chemical potential of the condensate.
This universality class differs from the previously studied one of the spatially homogeneous gas.

The low-energy power-law behaviors of the density of states ρ(ǫ) [see (34)], of the coefficients B(ǫ) in dθ̂/dt [see (73)], of the occupation numbers n̄(ǫ) ∼ k_B T/ǫ and of the damping rate Γ(ǫ) [see (76)] then reproduce the exponent α = 5 found numerically.^18 Since C_mc(t) tends to zero faster than 1/t^{2+η}, for some η > 0, we obtain the following important result: the variance of the condensate phase shift Var_mc[θ̂(t) − θ̂(0)] exhibits at long times an affine growth typical of a diffusive regime with delay. We obtain a more telling form of (84), directly comparable to our loss-free results, by rewriting it in dimensionless terms: Var_loss[θ̂(t) − θ̂(0)] ∼_{Γ₃ t→0} …

9. These diagrams imply a hidden process of absorption or stimulated emission in the condensate mode.

10. Strictly speaking, this stationary solution corresponds to the average occupation numbers in the canonical ensemble, rather than in the microcanonical one. The difference, computable as in Appendix C of reference [16], but out of reach of our kinetic equations, tends to zero at the thermodynamic limit and is negligible here. It should also be noted that the non-conservation of the total number of quasiparticles by the Beliaev-Landau processes requires the Bose law n̄ to have unit fugacity.

17. Care has been taken to account for the projection onto the microcanonical subspace of zero energy fluctuations not only in the initial condition (74), but also in the contraction by B(ǫ) in (70), replacing B(ǫ) there with B(ǫ) − Λǫ; this precaution, optional in the exact formulation, is necessary here since the rate approximation violates energy conservation.

18.
In contrast, the predicted value of the coefficient C in (78) for k_B T = 10µ_TF, namely ≃ 10⁻⁵, differs significantly from the numerical value ≃ 7 × 10⁻⁵.

3. The case of an anisotropic harmonic trap reduces to the isotropic case treated in [29] through the unit-Jacobian change of variable r_α = λ_α r′_α, with ω_α λ_α = ω̄, such that U(r) = ½ mω̄² r′².

4. The general expression (16) for A is somewhat delicate to grasp. Indeed, since the ground-state energy depends on N, fluctuations of N mechanically induce energy fluctuations. For example, if N fluctuates at T = 0 (in each fixed-N subspace, the system is in its ground state), one can, to recover A(T = 0) of equation (30) from equation (16), use the fact that Ē_λ − Ē = (N̄_λ − N̄)µ₀(N̄) + O(N⁰) and that ∂_E µ_mc(Ē, N̄) ∼_{T→0} −2/(25N̄), whose insertion into (20) gives ∂_N µ_mc(Ē, N̄) ∼_{T→0} ∂_N µ₀(N̄) + 2µ₀(N̄)/(25N̄).

5. One sometimes prefers to take as small parameter 1/[ρ₀(0)ξ³], where the relaxation length ξ of the condensate at the trap center is such that ℏ²/(mξ²) = µ_TF. One passes easily from one small parameter to the other through the relation [ρ₀(0)a³]^{1/2} ρ₀(0)ξ³ = 1/(8π^{3/2}).

6. In the window of values of figure 1, in practice 1/10 ≤ Ť ≤ 10, the inclusion of the subdominant terms does not usefully bring one closer to the exact result.

s(r, p₁) s(r, p₂) s(r, p₃)   (54)

8. The four-quasiparticle processes, of higher order in the non-condensed fraction, are assumed negligible here.

δ(ǫ(r, p) + ǫ(r, q) − ǫ(r, p + q)) [n̄(r, q) − n̄(r, p + q)]   (57)

This expression coincides with the damping rate of a mode of momentum p in a spatially homogeneous gas of condensed density gρ₀(r) [32].
Just like δn(r, p, τ), the quantity ⟨F|δn(r, p, τ) δn(r′, p′, 0)|F⟩, considered as a function of (r, p, τ), obeys equation (56); the same holds for its average ⟨δn(r, p, τ) δn(r′, p′, 0)⟩ over all initial Fock states |F⟩, since the coefficients Γ and K do not depend on |F⟩. Let us contract the latter with

⟨(overline{dǫ(r, p)/dµ_TF})²⟩_ǫ and ⟨dǫ(r, p)/dµ_TF⟩_ǫ²   (87)

⟨1/Γ̄(r, p)⟩_ǫ and 1/⟨Γ(r, p)⟩_ǫ = 1/Γ(ǫ)   (88)

where, let us recall, the horizontal bar Ō(r, p) over a physical quantity denotes the time average along the trajectory passing through (r, p) in phase space, as in equation (63), and the brackets ⟨O(r, p)⟩_ǫ denote the uniform average over the energy shell ǫ, as in equation (64).
In the table of equations (87, 88), the left column contains the quantities appearing in C_mc(0) or in the secular kinetic equations before the ergodic approximation, and the right column what they become after the ergodic approximation. Importantly, in equation (88) we consider 1/Γ̄ rather than Γ̄, because it is the inverses M⁻¹ and M⁻² that appear in the expressions (81, 82) of the diffusion coefficient D and of the delay time t₀, M being the operator representing the right-hand side of the linearized kinetic equations (66).^22 The quantities to be compared (87, 88) are plotted as functions of the energy ǫ in figure 9, at temperature T = µ_TF/k_B. The agreement is remarkable over a wide energy range around ǫ = µ_TF. The deviations from the ergodic approximation at very low and very high energy were expected: in these limits, the classical dynamics becomes integrable [22].

22. Needless to say, ⟨Γ̄(r, p)⟩_ǫ = Γ(ǫ), the uniform average being invariant under time evolution. Consequently, the inequality between harmonic and arithmetic means imposes ⟨1/Γ̄(r, p)⟩_ǫ ≥ 1/Γ(ǫ).

Acknowledgments

We thank the members of the "cold fermions" and "atom chips" teams at LKB, in particular Christophe Salomon, for useful discussions.

Appendix A.
Behavior of Γ(ǫ) at low and high energy

To obtain the limiting behaviors (76) and (77) of the damping rate Γ(ǫ) of the Bogoliubov modes in a trap within the secular-ergodic approximation, we rewrite the phase-space integral (64) as an average over the local Gross-Pitaevskii chemical potential µ₀ = gρ₀(r) of the damping rate Γ_h(ǫ, µ₀, k_B T) of the modes of energy ǫ in a homogeneous system of density µ₀/g and temperature T:

In the limit ǫ → 0, we first heuristically replace the integrand in equation (91) by a low-energy equivalent, using: In equation (93) we used equation (34); the result (94) is found in reference [32]. One sees, however, that this integral must be cut off at µ₀ > ǫ for the equivalent (94) to remain usable, whence the scaling law Γ(ǫ) ≈ ǫ^{1/2}, dominated by the edge of the trapped condensate and very different from the linear vanishing law of the homogeneous case. To find the prefactor in this law, it suffices to make the change of scale µ₀ = ǫν₀ in the integral and to use the "high-temperature" approximation of reference [39] for Γ_h, uniformly valid near the edge of the trapped condensate, before taking the limit ǫ → 0 under the integral sign, which leads to the sought equation (76).24

In the limit ǫ → +∞, we use the fact that, in the homogeneous case, the quasiparticle damping rate reduces to the collision rate ρ₀σv of a particle of velocity v = (2ǫ/m)^{1/2} with the particles of the condensate, which have zero velocity and density ρ₀, with the cross section σ = 8πa² of indistinguishable bosons (this is a Beliaev process): Using likewise the high-energy expansion (35) of ρ(ǫ), we find that P_ǫ(µ₀) ∼ (8/π)(µ_TF − µ₀)^{1/2} ǫ^{−3/2}. Inserting these equivalents into equation (91) indeed yields (77).
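As a quick numerical illustration of the high-energy limit just described, the sketch below evaluates the Beliaev collision rate Γ = ρ₀σv, with σ = 8πa² and v = (2ǫ/m)^{1/2}, and checks the resulting Γ ∝ ǫ^{1/2} scaling. All parameter values (density, scattering length, energy scale) are illustrative assumptions, not values taken from the text:

```python
import math

def beliaev_rate(eps, rho0, a, m):
    """High-energy damping rate Gamma = rho0 * sigma * v, with
    sigma = 8*pi*a**2 (indistinguishable bosons) and v = sqrt(2*eps/m)."""
    sigma = 8.0 * math.pi * a**2
    v = math.sqrt(2.0 * eps / m)
    return rho0 * sigma * v

# Illustrative (assumed) values, roughly 87Rb-like.
m = 87 * 1.66053906660e-27    # atom mass, kg
a = 5.31e-9                   # s-wave scattering length, m
rho0 = 1e20                   # condensate density, m^-3
kB = 1.380649e-23
eps = kB * 1e-6               # energy scale ~ k_B x 1 microkelvin

g1 = beliaev_rate(eps, rho0, a, m)
g4 = beliaev_rate(4 * eps, rho0, a, m)
print(g1, g4 / g1)            # quadrupling eps doubles Gamma (v ~ sqrt(eps))
```

Quadrupling the energy doubles the rate, as expected from v ∝ ǫ^{1/2}.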
24. In practice, the function φ is deduced from equation (57) by making the classical-field approximation 1 + n̄(r, q) ≃ n̄(r, q) ≃ k_B T/ǫ(r, q). In the numerical computation of I, performed with ǭ = 1/ν₀ as the integration variable, the effect of the numerical truncation is reduced with the help of the asymptotic expansion φ(ǭ) ≃ 4√(2π) ǭ^{1/2} [2 ln(ǭ/2) + (1 − ln(ǭ/2))/ǭ + (23 + 6 ln(ǭ/2))/(8ǭ²) + O(ln ǭ/ǭ³)] for ǭ → +∞, which corrects and improves that of equation (35) of reference [39].

References

[1] R. Schmied, J.-D. Bancal, B. Allard, M. Fadel, V. Scarani, P. Treutlein, N. Sangouard, "Bell correlations in a Bose-Einstein condensate", Science 352, 441 (2016).
[2] W. Muessel, H. Strobel, D. Linnemann, D.B. Hume, M.K. Oberthaler, "Scalable Spin Squeezing for Quantum-Enhanced Magnetometry with Bose-Einstein Condensates", Phys. Rev. Lett. 113, 103004 (2014).
[3] T. Berrada, S. van Frank, R. Bücker, T. Schumm, J.-F. Schaff, J. Schmiedmayer, "Integrated Mach-Zehnder interferometer for Bose-Einstein condensates", Nature Comm. 4, 2077 (2013).
[4] M.H. Anderson, J.R. Ensher, M.R. Matthews, C.E. Wieman, E.A. Cornell, "Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor", Science 269, 198 (1995).
[5] K.B. Davis, M.-O. Mewes, M.R. Andrews, N.J. van Druten, D.S. Durfee, D.M. Kurn, W. Ketterle, "Bose-Einstein condensation in a gas of sodium atoms", Phys. Rev. Lett. 75, 3969 (1995).
[6] F. Dalfovo, S. Giorgini, L.P. Pitaevskii, S. Stringari, "Theory of Bose-Einstein condensation in trapped gases", Rev. Mod. Phys. 71, 463 (1999).
[7] H. Deng, G. Weihs, C. Santori, J. Bloch, Y. Yamamoto, "Condensation of semiconductor microcavity exciton polaritons", Science 298, 199 (2002).
[8] J. Kasprzak, M. Richard, S. Kundermann, A. Baas, P. Jeambrun, J.M.J. Keeling, F.M. Marchetti, M.H. Szymańska, R. André, J.L. Staehli, V. Savona, P.B. Littlewood, B. Deveaud, Le Si Dang, "Bose-Einstein condensation of exciton polaritons", Nature 443, 409 (2006).
[9] A. Amo, J. Lefrère, S. Pigeon, C. Adrados, C. Ciuti, I. Carusotto, R. Houdré, E. Giacobino, A. Bramati, "Superfluidity of Polaritons in Semiconductor Microcavities", Nature Phys. 5, 805 (2009).
[10] M. Alloing, M. Beian, M. Lewenstein, D. Fuster, Y. González, L. González, R. Combescot, M. Combescot, F. Dubin, "Evidence for a Bose-Einstein condensate of excitons", Europhys. Lett. 107, 10012 (2014).
[11] D. Jaksch, C.W. Gardiner, K.M. Gheri, P. Zoller, "Quantum kinetic theory. IV. Intensity and amplitude fluctuations of a Bose-Einstein condensate at finite temperature including trap loss", Phys. Rev. A 58, 1450 (1998).
[12] R. Graham, "Decoherence of Bose-Einstein Condensates in Traps at Finite Temperature", Phys. Rev. Lett. 81, 5262 (1998).
[13] A.B. Kuklov, J.L. Birman, "Orthogonality catastrophe and decoherence of a confined Bose-Einstein condensate at finite temperature", Phys. Rev. A 63, 013609 (2000).
[14] A. Sinatra, Y. Castin, E. Witkowska, "Nondiffusive phase spreading of a Bose-Einstein condensate at finite temperature", Phys. Rev. A 75, 033616 (2007).
[15] A. Sinatra, Y. Castin, "Genuine phase diffusion of a Bose-Einstein condensate in the microcanonical ensemble: A classical field study", Phys. Rev. A 78, 053615 (2008).
[16] A. Sinatra, Y. Castin, E. Witkowska, "Coherence time of a Bose-Einstein condensate", Phys. Rev. A 80, 033614 (2009).
[17] A. Sinatra, Y. Castin, "Spatial and temporal coherence of a Bose-condensed gas", in Physics of Quantum Fluids: new trends and hot topics in atomic and polariton condensates, edited by M. Modugno, A. Bramati, Springer Series in Solid-State Sciences 177 (Springer, Berlin, 2013).
[18] H. Kurkjian, Y. Castin, A. Sinatra, "Brouillage thermique d'un gaz cohérent de fermions", Comptes Rendus Physique 17, 789 (2016) [open access, doi: 10.1016/j.crhy.2016.02.005].
[19] A.L. Gaunt, T.F. Schmidutz, I. Gotlibovych, R.P. Smith, Z. Hadzibabic, "Bose-Einstein Condensation of Atoms in a Uniform Potential", Phys. Rev. Lett. 110, 200406 (2013).
[20] P.O. Fedichev, G.V. Shlyapnikov, J.T.M. Walraven, "Damping of Low-Energy Excitations of a Trapped Bose-Einstein Condensate at Finite Temperatures", Phys. Rev. Lett. 80, 2269 (1998).
[21] A. Sinatra, Y. Castin, E. Witkowska, "Limit of spin squeezing in trapped Bose-Einstein condensates", EPL 102, 40001 (2013).
[22] M. Fliesser, A. Csordás, R. Graham, P. Szépfalusy, "Classical quasiparticle dynamics in trapped Bose condensates", Phys. Rev. A 56, 4879 (1997).
[23] M. Fliesser, R. Graham, "Classical quasiparticle dynamics and chaos in trapped Bose condensates", Physica D 131, 141 (1999).
[24] Y. Castin, R. Dum, "Low temperature Bose-Einstein condensates in time dependent traps: beyond the U(1)-symmetry breaking approach", Phys. Rev. A 57, 3008 (1998).
[25] J.M. Deutsch, "Quantum statistical mechanics in a closed system", Phys. Rev. A 43, 2046 (1991).
[26] M. Srednicki, "Chaos and quantum thermalization", Phys. Rev. E 50, 888 (1994).
[27] M. Rigol, V. Dunjko, M. Olshanii, "Thermalization and its mechanism for generic isolated quantum systems", Nature 452, 854 (2008).
[28] T.D. Lee, C.N. Yang, "Many-Body Problem in Quantum Mechanics and Quantum Statistical Mechanics", Phys. Rev. 105, 1119 (1957).
[29] L. Carr, Y. Castin, G. Shlyapnikov, "Achieving a BCS transition in an atomic Fermi gas", Phys. Rev. Lett. 92, 150404 (2004).
[30] E.M. Wright, D.F. Walls, J.C. Garrison, "Collapses and Revivals of Bose-Einstein Condensates Formed in Small Atomic Samples", Phys. Rev. Lett. 77, 2158 (1996).
[31] Y. Castin, J. Dalibard, "Relative phase of two Bose-Einstein condensates", Phys. Rev. A 55, 4330 (1997).
[32] S. Giorgini, "Damping in dilute Bose gases: A mean-field approach", Phys. Rev. A 57, 2949 (1998).
[33] P.A. Willems, K.G. Libbrecht, "Creating long-lived neutral atom traps in a cryogenic environment", Phys. Rev. A 51, 1403 (1995).
[34] The ALPHA collaboration, "Confinement of anti-hydrogen for 1000 seconds", Nature Physics 7, 558 (2011).
[35] A. Sinatra, Y. Castin, "Phase Dynamics of Bose-Einstein Condensates: Losses versus Revivals", Eur. Phys. J. D 4, 247 (1998).
[36] Z. Shotan, O. Machtey, S. Kokkelmans, L. Khaykovich, "Three-Body Recombination at Vanishing Scattering Lengths in an Ultracold Bose Gas", Phys. Rev. Lett. 113, 053202 (2014).
[37] M. Egorov, B. Opanchuk, P. Drummond, B.V. Hall, P. Hannaford, A.I. Sidorov, "Measurement of s-wave scattering lengths in a two-component Bose-Einstein condensate", Phys. Rev. A 87, 053614 (2013).
[38] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes (Cambridge University Press, Cambridge, 1988).
[39] P.O. Fedichev, G.V. Shlyapnikov, "Finite-temperature perturbation theory for a spatially inhomogeneous Bose-condensed gas", Phys. Rev. A 58, 3146 (1998).

Figure 4 (caption fragment): Var and t̃ are the variance of the phase difference and the elapsed time in the units of figure 4, and K̃3 = mK3/(ħa⁴).

The reduced constant K̃3 is an intrinsic property of the atomic species used in the experiment (even though it can be varied with the help of a magnetic Feshbach resonance [36]). To estimate the order of magnitude of K̃3 in a cold-atom gas, let us take the example of rubidium 87 in the lower hyperfine sublevel |F = 1, m_F = −1⟩ at nearly zero magnetic field: the measurements give K3 = 6 × 10⁻⁴² m⁶/s and a = 5.31 nm [37], hence K̃3 ≃ 10.
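The order of magnitude K̃3 = mK3/(ħa⁴) ≃ 10 quoted above can be reproduced directly from the measured values K3 = 6 × 10⁻⁴² m⁶/s and a = 5.31 nm; a minimal check in Python (the ⁸⁷Rb mass, taken here as 87 atomic mass units, is an assumption of the sketch):

```python
hbar = 1.054571817e-34       # reduced Planck constant, J s
amu = 1.66053906660e-27      # atomic mass unit, kg
m = 87 * amu                 # mass of a 87Rb atom (approximate)
K3 = 6e-42                   # measured three-body loss constant, m^6/s
a = 5.31e-9                  # measured s-wave scattering length, m

K3_reduced = m * K3 / (hbar * a**4)
print(K3_reduced)            # of order 10, as quoted in the text
```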
On figure 4a (k_B T = µ_TF), at the reduced time t̃ = 5 at which the asymptotic phase-diffusion regime sets in, one sees that the spurious variance induced by the losses for this value of K̃3 is about three times the useful variance; their very different time dependences should nevertheless make it possible to separate them. The situation is much more favorable at higher temperature, k_B T ≫ µ_TF, the effect of the losses on the variance of the phase difference being, for example, still negligible at the reduced time t̃ = 100 on figure 4b (k_B T = 10µ_TF).

Discussion of the ergodic hypothesis

The classical motion of the Bogoliubov quasiparticles, as shown by references [22, 23] in the case of a harmonic trap with rotational symmetry, is strongly chaotic at energies ǫ ≃ µ_TF; but even at this energy, the Poincaré sections reveal islands of stability in phase space, which are not crossed by the trajectories of the chaotic sea: there is therefore no ergodicity in the strict sense.
arXiv:1206.1237 (PDF: https://arxiv.org/pdf/1206.1237v2.pdf)
Virtual Particle Interpretation of Quantum Mechanics - a non-dualistic model of QM with a natural probability interpretation

J. M. Karimäki, Aalto University, Finland

7 June 2012

Keywords: Pilot wave, Madelung fluid, Bohmian mechanics, Ontological interpretation of quantum mechanics, Hydrodynamic quantum mechanics

Abstract. An interpretation of non-relativistic quantum mechanics is presented in the spirit of Erwin Madelung's hydrodynamic formulation of QM and Louis de Broglie's and David Bohm's pilot wave models. The aims of the approach are as follows: 1) to have a clear ontology for QM, 2) to describe QM in a causal way, 3) to get rid of the wave-particle dualism in pilot wave theories, 4) to provide a theoretical framework for describing creation and annihilation of particles, and 5) to provide a possible connection between particle QM and virtual particles in QFT. These goals are achieved, if the wave function is replaced by a fluid of so-called virtual particles. It is also assumed that in this fluid of virtual particles exist a few real particles and that only these real particles can be directly observed. This has relevance for the measurement problem in QM, and it is found that quantum probabilities arise in a very natural way from the structure of the theory. The model presented here is very similar to a recent computational model of quantum physics and to recent Bohmian models of QFT.

Introduction

Louis de Broglie [1] and David Bohm [2] have shown that it is possible to give an interpretation of non-relativistic quantum mechanics with a clearly defined ontology and continuous particle trajectories. The models of de Broglie and Bohm, although developed independently, are essentially the same, that is: there is a wave function that evolves according to the Schrödinger equation and guides the particles along continuous tracks.
This theory, which we call the pilot wave theory in this paper, produces the same predictions as the standard interpretation of QM. The pilot wave theory has many virtues, but it has also been criticized, among other things, for having a kind of dualistic ontology, i.e. needing both waves and particles to describe the world. Also, it has been difficult to extend the theory to describe the creation and annihilation of particles, although recently some progress towards that goal has been made [3, 4].

The interpretation presented in this paper removes the problem of ontological dualism by removing the status of the wave function as a fundamental building block of the theory. The wave function is replaced by a fluid of particles, which can exist in two different states: virtual and real. The number of real particles is assumed to be finite, whereas the virtual particles form a continuum, and thus the bulk of the fluid. All the particles in the fluid follow the trajectories given by the pilot wave theory. The density of the (mostly virtual) particles at some point gives the probability density of finding a real particle at that point at the time of measurement. Since there is no wave function, the particles must somehow guide themselves as a fluid. The equations for such a quantum fluid motion were first introduced by Erwin Madelung in his hydrodynamic formulation of quantum mechanics [5]. The theory presented in this paper can be understood as Madelung's theory supplemented by some minimal additional definitions and assumptions needed to make it really work - in answering the measurement problem of QM, for example.

In the following sections we will see in more detail how non-relativistic single-particle quantum mechanics is handled in this model. After the detailed description of the single-particle case, the generalization to the N-particle case will be briefly explained.
2 Ontology of the model

It is assumed that there is a background space, where the particles move. The particles can have mass and electric charge. Spin is not considered. The particles can be in two different states that we call here virtual and real. In addition to the background space and particles, there can also be external potential fields. These are assumed to be essentially classical. The number N of real particles in a given volume is assumed to be finite. The number N_0 of virtual particles in a given volume is expected to be much larger; essentially they can be thought of as forming a continuum characterized by a fluid density ρ(x, t), and, consequently, a number density N_0 ρ(x, t).

3 Interactions

Only the real particles can be observed directly. Otherwise all the particles, virtual or real, follow continuous tracks that are the same as those given by the de Broglie-Bohm pilot wave theory. Also the velocities are assumed to be the same as in the pilot wave theory. How this is possible without explicitly assuming the existence of the wave function will be shown in the next section. It will also be shown that the force affecting the particles can be understood as the sum of classical forces caused by the real particles and quantum forces caused by the fluid of virtual particles (Figure 1).

4 Equations for the single particle case

The single particle case here means that there is only one real particle in the system. The number of virtual particles can be assumed to be practically infinite. The velocity of the particles, virtual or real, is given by:

$$\mathbf{v} = \frac{1}{m} \nabla S, \qquad (1)$$

where S(x, t) is simply the phase of the wave function Ψ multiplied by ℏ. It appears when the wave function is written in the polar form: $\Psi = \rho^{1/2} e^{iS/\hbar}$.
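As a quick illustration of the guidance relation (1) - an example added here, not part of the original paper - the velocity field can be recovered numerically from a sampled phase. The plane-wave phase, the grid, and the units ℏ = m = 1 are all assumptions of this sketch:

```python
import numpy as np

# Illustration (not from the paper): recover the guidance velocity
# v = (1/m) * dS/dx from a sampled phase S(x), in assumed units with
# hbar = m = 1.  For a plane wave, S = k*x, the recovered velocity
# field should be constant and equal to k.
x = np.linspace(0.0, 10.0, 1001)
k = 2.0                      # hypothetical wave number
m = 1.0
S = k * x                    # phase of the wave function (times hbar)
v = np.gradient(S, x) / m    # finite-difference version of v = (1/m) grad S
```

For this linear phase the finite-difference gradient is exact, so `v` equals `k` everywhere on the grid.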
There are two equations that govern the evolution of the system:

$$\frac{\partial \rho}{\partial t} = -\nabla \cdot \left( \frac{1}{m} \rho \nabla S \right) \qquad (2)$$

$$\frac{\partial S}{\partial t} = -\frac{(\nabla S)^2}{2m} - V + \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}. \qquad (3)$$

These are known as the hydrodynamic equations of QM, first introduced by Erwin Madelung in 1926 [5], and later used by Louis de Broglie [1] and David Bohm [2] in their pilot wave models of QM. The hydrodynamic equations of QM are completely equivalent to the Schrödinger equation, given that some additional constraints on the solutions ρ(x, t) and S(x, t) are imposed [6]. The last term in (3), taken with negative sign, is known as the quantum potential:

$$Q = -\frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}. \qquad (4)$$

Now, we would like to give up using the phase S (since it is derived from the wave function) and, instead of it, take the velocity field v(x, t) as the second variable. We replace ∇S by mv in (2) and (3) and take the gradient of (3) to get the following two equations:

$$\frac{\partial \rho}{\partial t} = -\nabla \cdot (\rho \mathbf{v}) \qquad (5)$$

$$m \frac{\partial \mathbf{v}}{\partial t} = -\nabla \left( \frac{1}{2} m v^2 + V + Q \right). \qquad (6)$$

Since we have taken a gradient, and want to ensure consistency with the results of standard QM, we need the following additional constraint:

$$\oint m \mathbf{v} \cdot d\mathbf{l} = 2\pi n \hbar, \qquad n \in \mathbb{Z},$$

where the integration is along any closed smooth path along which v is well defined. Equation (5) can be interpreted as the conservation of probability, which is the usual standpoint taken in hydrodynamic models of QM, or as the conservation of matter, which is actually more suitable for the interpretation presented in this paper. Equation (6) takes a more familiar form if the time derivative of v is taken along the particle trajectory:

$$m \frac{d\mathbf{v}}{dt} = -\nabla (V + Q) \qquad (7)$$

or

$$m \mathbf{a} = \mathbf{F}_{\rm Classical} + \mathbf{F}_{\rm Quantum}, \qquad (8)$$

where F_Classical = −∇V is the classical force and F_Quantum = −∇Q is a quantum force caused by the local variations of ρ.
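The quantum potential Q of equation (4) is straightforward to evaluate numerically. The following sketch is an illustration added here, with an assumed Gaussian density and units ℏ = m = 1; for ρ = exp(−x²/2σ²) one has analytically Q(x) = 1/(4σ²) − x²/(8σ⁴), so Q(0) = 0.25 for σ = 1:

```python
import numpy as np

# Illustration (assumed units hbar = m = 1): evaluate the quantum
# potential Q = -(1/2) * lap(sqrt(rho)) / sqrt(rho) on a 1D grid for a
# hypothetical Gaussian density rho = exp(-x^2 / (2 sigma^2)).
x = np.linspace(-5.0, 5.0, 2001)
sigma = 1.0
rho = np.exp(-x**2 / (2.0 * sigma**2))
sq = np.sqrt(rho)
lap = np.gradient(np.gradient(sq, x), x)   # second derivative by finite differences
Q = -0.5 * lap / sq

i0 = len(x) // 2                           # grid point at x = 0, where Q should be 0.25
```

Note that Q depends only on the shape of ρ, not on its normalization, which is why an unnormalized Gaussian suffices here.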
5 Interpretation of quantum probability in the single particle case

In QM it is customary to interpret the square of the absolute value of the wave function, ρ = |Ψ(x, t)|², as the probability density of finding a particle at the location x, when its position is measured at the time t. This law is held to be true in the standard interpretations of QM as well as in the pilot wave approaches of de Broglie and Bohm, and it is usually taken to be a basic fundamental feature of Quantum Mechanics. In the standard interpretations, no further causal factors are sought to explain this probability law, whereas in the pilot wave models the probability law arises simply from our ignorance of the true particle positions. Thus, in the pilot wave models, we have to assume that the probability density is equal to |Ψ(x, t_0)|² initially. Once this assumption is made, the guidance condition ensures that the probability density will agree with the value |Ψ(x, t)|² at any later time t > t_0.

In the model presented in this paper, the situation is, however, slightly different. The probability density is again given by the density ρ(x, t), but it has a very natural explanation: since both the real and virtual particles follow the same equations of motion and their only difference is that the real particle can be detected and the virtual particles cannot, we simply assume that the single real particle can be any one among all the particles forming the quantum fluid with number density N_0 ρ(x, t). Thus, thinking by purely classical definitions of probability, the fluid density ρ(x, t) is the only natural candidate for the probability density of the real particle. This situation is in no way different from finding one fish that has swallowed a gold coin in a big swarm of fishes swimming in the ocean and described by a "fish density" ρ_fish(x, t), or spotting one ringed bird flying in a big flock of other birds without a ring.
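The "fish in a swarm" picture can be made concrete with a small sampling experiment (an illustration added here, not from the paper): if the real particle is just one anonymous member of the fluid, the probability of detecting it in a region equals the fraction of fluid mass there. The Gaussian density below is a hypothetical choice.

```python
import numpy as np

# Illustration: discretize an assumed fluid density, repeatedly draw
# "which fluid element is the real particle", and compare the empirical
# detection frequency in x > 0 with the fluid-mass fraction there.
rng = np.random.default_rng(0)
x = np.linspace(-4.0, 4.0, 801)
rho = np.exp(-(x - 0.5) ** 2)          # hypothetical (unnormalized) fluid density
p = rho / rho.sum()                    # mass fraction per grid cell

picks = rng.choice(x, size=200_000, p=p)   # many independent "real particle" choices
empirical = np.mean(picks > 0.0)           # frequency of detections at x > 0
expected = p[x > 0.0].sum()                # fluid-mass fraction at x > 0
```

With this many draws the empirical frequency agrees with the mass fraction to well below one percent, which is exactly the classical-probability reading of ρ advocated in this section.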
6 Generalization to the N-particle case

The generalization to the N-particle case (where N is the number of real particles) simply requires the single particle wave function to be replaced by the N-particle wave function. This wave function can then be replaced by the N-particle density and velocity fields, analogously to the single particle case. In this case, the single particle is replaced by an N-particle multiplet: there will be one real, detectable N-multiplet moving in a continuous "fluid" of virtual, undetectable N-multiplets. Since the fluid of N-multiplets has dimension 3N, the term hyperfluid is suggested here to describe it. (The name Madelung fluid has been used previously, but mainly in the context of superfluids in 3D or as an abstract probability density fluid of arbitrary dimension.)

It can be shown that in some cases the N-particle case reduces to many separate one-particle cases. Such is the case for N free particles. An analogous situation exists in superfluids. Actually, a superfluid, such as He II, is a real-world working model of the virtual particle fluid presented in this paper.

7 Particle creation and annihilation

The second remaining problem of the pilot wave theory, namely the difficulty of handling cases where particles are created or destroyed, can also be solved, at least conceptually, using the virtual particle interpretation. All we need is some kind of a process of energy exchange between real and virtual particles. The following gives a qualitative picture of such a situation: let us assume, for example, that in the beginning there is a real electron-positron pair, i.e. two active particles and a sea of passive virtual particles, of various kinds, all around them. The two active particles would collide with sufficient energy and transfer their energy to a sea of passive muon-anti-muon pairs. (This could exist without difficulty, since it would not be observable at low energies.)
Now, a passive muon-anti-muon pair would receive the energy of the electrons and become active, so that we would end up with a real muon and a real anti-muon speeding in opposite directions from the point of impact.

However, at this point it must be emphasized that the process of energy exchange described in the above example cannot be directly described using the formalism presented in the earlier sections; the model would have to be extended or modified somehow (e.g., using the Fock space). The reason for this is that particle creation and annihilation cannot be described by non-relativistic particle QM. Instead one should use a QFT version of the present theory, or some other clever formalism. A Bohmian version of QFT with similarities to the model presented here, and allowing particle creation and annihilation, has been described in [3], so the possibility of devising such a formalism also within the virtual particle scheme presented in this paper does not appear to be totally unrealistic. In case such a detailed model of particle creation and annihilation can be produced, it could perhaps also be extended to model the Hawking radiation close to a black hole. Related to this, we can also assume that the virtual particles, or the quantum potential, have a small rest mass. This might provide an answer to the mystery of dark matter and dark energy in cosmology.

8 Discussion

The interpretation presented in this paper is in many ways similar to other hydrodynamic or stochastic models [5]-[11]. The differences between the approaches usually concern such things as the role of the wave function, the nature of the particles (provided they exist at all), and the form of the particle trajectories. Many approaches assume very irregular particle trajectories or fluid motions to explain the emergence of quantum probabilities [7, 8]. Why this is not necessary was explained in section 5, where the quantum probability law was found to be a natural consequence of the formalism.
Madelung's original hydrodynamic formulation of QM [5] can be seen as a conceptual starting point for this work, and the two approaches are quite similar in spirit.¹ Madelung's paper, however, leaves open many questions relevant for the measurement problem in QM. These questions can be answered within the framework of the proposed interpretation - to a large degree thanks to the features it inherits from the pilot wave theory.

The virtual particle interpretation can be seen to be analogous to the many-worlds formalism. The difference is that the multitude of worlds is here replaced by the multitude of the virtual particles (or virtual N-particle systems). The fact that hydrodynamical equations [13] appear also in the decoherent histories approach may have some significance. The situation can be expected to be similar also in some of the spontaneous localization models.

Since the theory presented here is non-relativistic, it would be interesting to study what the requirement of Lorentz invariance would bring into the model. Another interesting topic would be to investigate the possibility of formulating a Machian [14] version of this theory and including gravity in it. The connection of Feynman paths to the particle paths in various hydrodynamical models is also worth investigating.

9 Conclusions

The model presented here, apart from the more speculative ideas of sections 6 and 7, is a well-defined theory. The main motivation behind it was to solve the problem of wave-particle dualism present in the de Broglie-Bohm pilot wave theory, while retaining many of the desirable features of that approach. One of the outcomes of this approach was a surprisingly simple explanation for quantum probabilities.
Since the equations are the same as in the hydrodynamic formulation of QM and the pilot wave theory, and the predictions the same as those given by standard QM, the theory presented in this paper should be viewed rather as a model, or, as the title suggests, as an interpretation of quantum mechanics than as a new theory. However, by providing a new perspective and giving new insights, this model can serve as a basis for new theoretical developments.

Acknowledgments

The writer wishes to thank Lee Smolin, Hrvoje Nikolić, Sam Karvonen and Veikko Karimäki for useful ideas and comments.

Figure 1. Schematic view of the interactions between "real" particles (dark gray) and "virtual" particles (light blue).

¹ Among the more recent approaches, the closest in semblance to the model presented in this paper is, quite surprisingly, a model introduced by Courtney Lopreore and Robert Wyatt for numerical computations in Quantum Physics and Chemistry [12] (the existence of such a method actually came as a big surprise to the writer). Comparing an interpretation of QM with a computational method is challenging, but at least the animations created by the computational research group give an idea of the flow of a virtual particle fluid.

References

[1] L. de Broglie, Comptes Rendus 183 (1926) 447; 184 (1927) 273; 185 (1927) 380.
[2] D. Bohm, "A Suggested Interpretation of the Quantum Theory in Terms of 'Hidden' Variables I & II", Phys. Rev. 85 (1952) 166-193.
[3] H. Nikolić, "Bohmian particle trajectories in relativistic quantum field theory" (2002).
[4] D. Dürr, S. Goldstein, R. Tumulka and N. Zanghì, "Bohmian Mechanics and Quantum Field Theory" (2003).
[5] E. Madelung, "Quantentheorie in hydrodynamischer Form", Z. Phys. 40 (1926) 322-326.
[6] T. C. Wallstrom, "Inequivalence between the Schrödinger equation and the Madelung hydrodynamic equations", Phys. Rev. A 49, No. 3 (1994) 1613-1617.
[7] D. Bohm and J. P. Vigier, "Model of the Causal Interpretation of Quantum Theory in Terms of a Fluid with Irregular Fluctuations", Phys. Rev. 96 (1954) 208-216.
[8] E. Nelson, "Derivation of the Schrödinger Equation from Newtonian Mechanics", Phys. Rev. 150 (1966) 1079.
[9] D. Bohm and B. J. Hiley, The Undivided Universe, Routledge, London and New York (1993).
[10] D. Bohm and B. J. Hiley, "Non-Locality and Locality in the Stochastic Interpretation of Quantum Mechanics", Phys. Rep. 172, No. 3 (1989) 93-122.
[11] L. Smolin, "Stochastic mechanics, hidden variables, and gravity", in R. Penrose and C. J. Isham (eds.), Quantum Concepts in Space and Time, Oxford (1986) 147-173.
[12] C. L. Lopreore and R. E. Wyatt, "Quantum Wave Packet Dynamics with Trajectories", Phys. Rev. Lett. 82 (1999) 5190-5193.
[13] J. J. Halliwell, "The Emergence of Hydrodynamic Equations from Quantum Theory: A Decoherent Histories Analysis", Int. J. Theor. Phys. 39, No. 7 (2000) 1767-1777.
[14] J. B. Barbour, "Leibnizian time, Machian dynamics, and quantum gravity", ibid., 236-246.
Recombination energy in double white dwarf formation

J. L. A. Nandez (Department of Physics, University of Alberta, Edmonton, AB T6G 2E7, Canada), N. Ivanova (Department of Physics, University of Alberta, Edmonton, AB T6G 2E7, Canada), J. C. Lombardi Jr. (Department of Physics, Allegheny College, Meadville, PA 16335, USA)

Mon. Not. R. Astron. Soc.; doi:10.1093/mnrasl/slv043; arXiv:1503.02750

Abstract: In this Letter we investigate the role of recombination energy during a common envelope event. We confirm that taking this energy into account helps to avoid the formation of the circumbinary envelope commonly found in previous studies. For the first time, we can model a complete common envelope event, with a clean compact double white dwarf binary system formed at the end. The resulting binary orbit is almost perfectly circular. In addition to considering recombination energy, we also show that between 1/4 and 1/2 of the released orbital energy is taken away by the ejected material. We apply this new method to the case of the double-white dwarf system WD 1101+364, and we find that the progenitor system at the start of the common envelope event consisted of a ∼1.5 M_⊙ red giant star in a ∼30 day orbit with a white dwarf companion.
Draft date: 11 March 2015

Keywords: white dwarfs - hydrodynamic - equation of state - binaries: close

1 INTRODUCTION

The formation of a compact binary system composed of two white dwarfs (WDs) is widely accepted to include a common envelope event (CEE), at least during the last episode of mass exchange between the first-formed WD and a low-mass red giant (RG). Low-mass RGs have a well-defined relation between their core masses and radii, providing for DWDs the best-defined state of a progenitor binary system at the onset of the CEE among all known types of post-common envelope (CE) systems.
As a result, DWD systems have served extensively as test sites for attempts to understand the physics of CEEs, using both population synthesis approaches and hydrodynamical methods.

Previous attempts to model a CEE between a low-mass RG and a WD did not succeed in ejecting the entire CE during three-dimensional (3D) hydrodynamical simulations (for the most recent studies, see Passy et al. 2012; Ricker & Taam 2012). The final state of these simulations is that a significant fraction of the expanded envelope remains bound to the formed binary, forming a so-called circumbinary envelope. Then almost no energy transfer can take place from the binary orbit to that circumbinary envelope. Observationally, no circumbinary disk in a post-common envelope system has been found so far.

It was proposed long ago that the recombination energy of hydrogen and helium should play a role during a CEE (Lucy 1967; Roxburgh 1967; Paczyński & Ziółkowski 1968; Han et al. 1994, 2002). However, until now, this energy has not been taken into account in 3D modeling. While the initially available recombination energy can easily be comparable to the binding energy of the remaining bound envelope (e.g. Passy et al. 2012), the important question is when and where the energy is released - to be useful, recombination energy should not be released too early in the CEE nor in material already ejected, but instead in the circumbinary envelope at a time when the recombination energy is comparable to the binding energy of the not-yet-ejected material.

In this Letter, we investigate whether the inclusion of recombination energy can help to remove the circumbinary envelope. We apply the new approach to the system WD 1101+364, a well-measured DWD that has P_orb = 0.145 d and a mass ratio of q = M_1/M_2 = 0.87 ± 0.03, where M_1 ≃ 0.31 M_⊙ and M_2 ≃ 0.36 M_⊙ are the masses of the younger and older WDs, respectively (Marsh 1995).

⋆ E-mail: [email protected] (JLAN)
2 INITIAL SET UP AND DEFINITIONS

We anticipate that the progenitor of WD 1101+364 was a low-mass RG that had a degenerate core of 0.31 M_⊙. We consider the range of masses for the RG donor, M_d,1, from 1.0 M_⊙ to 1.8 M_⊙. To evolve the RG and find the initial one-dimensional (1D) stellar profile, we use the TWIN/Star stellar code (recent updates described in Glebbeek et al. 2008). The stars are evolved until their degenerate He cores have grown close to 0.31 M_⊙.

For 3D simulations, we use StarSmasher (Gaburov et al. 2010; Lombardi et al. 2011), a Smoothed Particle Hydrodynamics (SPH) code. Technical details on using this code to treat CE events can be found in Nandez et al. (2014). A 1D stellar profile is imported into StarSmasher, where an initial stellar model represented by a certain number of particles N_p is generated via a relaxation process. The core of a RG is modeled as a point mass - a special particle in SPH that interacts only gravitationally with other particles. Because the centre of the giant is not fully resolved, the core mass, M_c,1, is slightly more than in the 1D code (see Table 1 for this and other initial values). This ensures a proper matching of stellar profiles of 3D envelopes with 1D stellar profiles. The envelope mass in a 3D star is M_env,1 = M_d,1 − M_c,1.

Table 1. Initial conditions. The model names are composed as follows: two digits representing the RG mass are followed by "RG" and the η value; then "N" stands for non-synchronized and "S" for synchronized cases. "P" denotes the model with about twice as many particles as all the other models. M_d,1, M_env,1 and M_c,1 are the total, envelope and core mass of the RG, in M_⊙. R_rlof is the radius of the donor Roche lobe, in R_⊙, and η describes the adopted donor's radius definition (see §2). a_orb,ini is the initial orbital separation in R_⊙; P_orb,ini is the initial orbital period in days. N_p is the total number of SPH particles that represent the RG. E_bind, E_rec, E_orb,ini and E_tot,ini are the binding energy of the RG envelope without recombination energy, the total recombination energy of the RG envelope, the initial orbital energy, and the initial total energy (defined as the sum of the binding, recombination, and initial orbital energies), in units of 10^46 erg. λ is a stellar structure parameter (see Equation 3).
(Columns: Model | M_d,1 | M_env,1 | M_c,1 | R_rlof | a_orb,ini | P_orb,ini | N_p | η | E_bind | E_rec | E_orb,ini | E_tot,ini)

In a 3D star, the radius of the star, R_SPH, cannot be defined as uniquely as the photospheric radius of the 1D star (for a thorough discussion, see Nandez et al. 2014). The stellar radius can be parameterized as R_SPH = R_out + η h_out, where R_out is the position of the outermost particle and h_out is the smoothing length of that particle. The parameter η can range between 0 (in which case some mass will be found above R_SPH) and 2 (with all mass contained within R_SPH). In addition, we note that a synchronized giant is expected to attain a larger radius after relaxation than a non-synchronized giant.

The initial orbital separation, a_orb,ini, for the non-synchronized cases is found from the assumption that R_SPH is equal to the Roche lobe (RL) overflow radius, R_rlof, using the approximation by Eggleton (1983). The initial orbital period, P_orb,ini, is found assuming a Keplerian orbit. For the synchronized cases, the orbital period and separation are found at the moment when the RG overflows its Roche lobe (see §2.3 of Lombardi et al. 2011).

Equations of state (EOS). The standard EOS (SEOS) in StarSmasher is analytical and includes radiation pressure and ideal gas contributions. To take recombination energy into account, we need another prescription for the EOS.
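As an aside on the orbit set-up described above: the initial period of a Keplerian orbit follows from the separation and the total mass, P = 2π√(a³/GM_tot). A minimal sketch with hypothetical round numbers (not values from Table 1):

```python
import math

# Illustration (not from the paper's tables): the period of a Keplerian
# orbit, P = 2*pi*sqrt(a^3 / (G*M_tot)), in cgs units.  The separation
# and the component masses below are assumed round numbers.
G     = 6.674e-8        # cm^3 g^-1 s^-2
M_sun = 1.989e33        # g
R_sun = 6.957e10        # cm

M_tot = (1.5 + 0.36) * M_sun      # assumed donor + WD companion masses
a_orb = 30.0 * R_sun              # assumed initial separation

P_sec  = 2.0 * math.pi * math.sqrt(a_orb**3 / (G * M_tot))
P_days = P_sec / 86400.0          # roughly two weeks for these numbers
```

The same relation, inverted, gives the separation from an observed period, which is how a measured P_orb constrains the post-CE orbit.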
Because we evolve the specific internal energy u_i and density ρ_i for each particle (among other variables), we prefer an EOS that uses u_i and ρ_i as independent variables. However, such an analytical expression does not exist in simple form when we consider recombination/ionization of atoms. Therefore, we are bound to use a tabulated EOS (TEOS), which uses u_i and ρ_i to provide the gas pressure P_gas,i, temperature T_i, specific entropy s_i, etc. We use the MESA-EOS module to calculate such tables (see §4.2 of Paxton et al. 2011). The core of a RG is modeled as a point mass, and the rest of the star has uniform composition. Hence, only one table with a single set of composition for H, He, and metals needs to be generated for each RG. The tables that we generate operate in 9.84 ≤ log u [erg g⁻¹] ≤ 19.0 and −14 ≤ log ρ [g cm⁻³] ≤ 3.8. When a particle has a density or specific internal energy outside the limits of our tables, we switch to the SEOS.

Energy formalism. The energy formalism compares the donor's envelope binding energy E_bind with the orbital energy before, E_orb,ini, and after the CEE, E_orb,fin (Webbink 1984; Livio & Soker 1988):

$$E_{\rm bind} = \alpha_{\rm bind} (E_{\rm orb,fin} - E_{\rm orb,ini}) \equiv \alpha_{\rm bind} \Delta E_{\rm orb}. \qquad (1)$$

Here α_bind is the fraction of the released orbital energy used to expel the CE, 0 ≤ α_bind ≤ 1. The binding energy of the donor's envelope, in its standard definition, is

$$E_{\rm bind} = \sum_i m_i \left( \phi_i + \frac{3}{2} \frac{k T_i}{\mu_i m_{\rm H}} + \frac{a T_i^4}{\rho_i} \right), \qquad (2)$$

where m_i, T_i, ρ_i, φ_i and μ_i are the mass, temperature, density, specific gravitational potential energy, and mean molecular mass, respectively, for each particle i. The constants k, a, and m_H are the Boltzmann constant, radiation constant, and hydrogen atom mass, respectively, while φ_i is calculated as in Hernquist & Katz (1989). Note that E_bind in its standard definition does not include recombination energy.
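Equation (1) amounts to a single division once the orbital energies are known. A minimal sketch of this bookkeeping, with hypothetical placeholder energies rather than the paper's measured values:

```python
# Illustration of the energy formalism of Equation (1); all three
# energies below are hypothetical placeholders, not values from Table 1.
E_orb_ini = -1.0e46    # erg, orbital energy at the onset of the CEE
E_orb_fin = -9.0e46    # erg, orbital energy of the post-CE binary
E_bind    = -4.0e46    # erg, binding energy of the donor envelope

dE_orb     = E_orb_fin - E_orb_ini    # released orbital energy (negative: orbit shrank)
alpha_bind = E_bind / dE_orb          # fraction of the release used to expel the CE
```

Both E_bind and ΔE_orb are negative for a shrinking bound orbit, so α_bind comes out positive; for these placeholder numbers it is 0.5.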
The binding energy of the donor's envelope is frequently parameterized using a parameter λ, defined as (de Kool 1990)

$$\lambda \equiv -\frac{G M_{d,1} M_{\rm env,1}}{E_{\rm bind} R_{\rm rlof}}. \qquad (3)$$

Here G is the gravitational constant. For low-mass giants, λ is known to have a value close to one, and we obtain similar results. We find the orbital energy of the binary system according to

$$E_{\rm orb} = \frac{1}{2} \mu |V_{12}|^2 + \sum_i \frac{1}{2} m_i \phi_i - \sum_j \frac{1}{2} m_j \phi^{\rm RL1}_j - \sum_k \frac{1}{2} m_k \phi^{\rm RL2}_k, \qquad (4)$$

where μ = M_1 M_2/(M_1 + M_2) is the reduced mass, and V_12 = V_1 − V_2 is the relative velocity of the two stars. The first term gives the orbital kinetic energy. The second term is the total gravitational energy of the binary, with the sum being over all particles i in the binary. The third and fourth terms correspond to the removal of the self-gravitational energy of the donor (the sum being over particles j in star 1) and of the WD (the sum being over particles k in star 2, initially only the WD), respectively: the remaining gravitational energy is then just the orbital contribution.

Recombination energy. In our treatment, the internal energy provided by the TEOS includes contributions from ideal gas, radiation, and the recombination energy for H, He, C, O, N, Ne, and Mg (see §4.2 of Paxton et al. 2011). The recombination energy can be extracted as

$$E_{\rm rec} = \sum_i m_i \left( u_i - \frac{3}{2} \frac{k T_i}{\mu_i m_{\rm H}} - \frac{a T_i^4}{\rho_i} \right) \equiv \alpha_{\rm rec} \Delta E_{\rm orb}, \qquad (5)$$

where u_i is the SPH specific internal energy of particle i. Values of E_rec, as expected, scale well with the mass of the envelope. Note that here we introduce an important new parameter, α_rec - the ratio between the recombination energy and the released orbital energy.

Total energy. The initial total available energy, E_tot,ini, is

$$E_{\rm tot,ini} = E_{\rm orb,ini} + E_{\rm bind} + E_{\rm rec}. \qquad (6)$$

This quantity is conserved during the evolution of all our models.

Bound and unbound material.
For each particle, its total energy is defined as E_tot,i ≡ (1/2) m_i v_i² + m_i φ_i + m_i u_i, where the first, second and third terms are the kinetic, potential, and internal energies, respectively. If the particle has negative total energy, it is bound to the binary. In this case, if the particle is located outside of either RL, the particle is in the circumbinary region. Accordingly, we classify the particles to be in (i) the ejecta, m_unb - the particles with positive energy; (ii) the circumbinary region, m_cir - the matter bound to the binary but outside of the two RLs; and (iii) the binary, m_bin - the particles inside either of the two RLs. The total energy of the unbound material at infinity is computed when the unbound mass is in a steady state after the CEE:

$$E^{\infty}_{\rm tot,unb} = \sum_i E^{\rm unb}_{{\rm tot},i} = -\alpha^{\infty}_{\rm unb} \Delta E_{\rm orb}. \qquad (7)$$

Here we introduce α^∞_unb - the energy taken away by the unbound material in units of the released orbital energy. Note that in the standard energy formalism this quantity is always assumed to be zero.

Angular momentum budget. We calculate the orbital angular momentum

$$\mathbf{J}_{\rm orb} \equiv \mu \, \mathbf{R}_{12} \times \mathbf{V}_{12}, \qquad (8)$$

where R_12 = R_1 − R_2 is the displacement from star 2 to star 1. We note that the magnitude J_orb = |J_orb,z|, where the z-direction is perpendicular to the orbital plane. An outcome of a CEE can be characterized by how much orbital angular momentum is lost. We provide the γ-parameter (Nelemans et al. 2000; Nelemans & Tout 2005) as a way of quantifying angular momentum loss in our simulations:

$$\gamma = \frac{M_1 + M_2}{M_{\rm unb}} \, \frac{J_{\rm orb,ini} - J_{\rm orb,fin}}{J_{\rm orb,ini}}. \qquad (9)$$
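The γ-parameter of Equation (9) can likewise be evaluated directly. In the sketch below the WD masses are the observed values quoted for WD 1101+364, while the unbound mass and the angular momenta are hypothetical placeholders:

```python
# Illustration of the gamma-parameter of Equation (9).  M1 and M2 are
# the measured WD masses for WD 1101+364; M_unb, J_ini and J_fin are
# hypothetical placeholders, not simulation results.
M1    = 0.31          # Msun, younger WD
M2    = 0.36          # Msun, older WD
M_unb = 1.19          # Msun, ejected envelope mass (assumed)

J_ini = 1.0           # initial orbital angular momentum (arbitrary units)
J_fin = 0.2           # final orbital angular momentum (assumed)

gamma = (M1 + M2) / M_unb * (J_ini - J_fin) / J_ini
```

Because γ is a ratio of angular momenta scaled by a mass ratio, the arbitrary units of J cancel; for these placeholder numbers γ is about 0.45.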
However, this is not the case for the specific internal energy u (see Figure 1): clearly, only the TEOS model matches the desired 1D stellar profile. As expected, the mismatch between the relaxed SEOS profile and the stellar one is due to the neglect of recombination energy. We find that the SEOS fails to unbind the envelope in our CE simulations. Only about 50% of the envelope becomes unbound: the circumbinary matter does not interact with the formed binary at all, making it impossible to eject the entire envelope. This result is consistent with the findings of previous studies (Passy et al. 2012; Ricker & Taam 2012). The TEOS simulation, on the other hand, clearly makes use of the recombination energy and ejects the envelope entirely. For all other models presented in this Letter, we use the TEOS.

Masses. At the end of the simulations, we form a binary consisting of M_1 and M_2 (see Table 2 for all the outcomes). The unbound material M_unb is at least 99.8% of the initial envelope. A few SPH gas particles, usually fewer than 10, remain bound to the newly formed binary, each bound to either the newly formed WD or the old WD. This explains why M_1 can be slightly larger than M_c,1, and similarly why M_2 can slightly exceed 0.36 M⊙. No circumbinary envelope is left in any simulation with the TEOS. In all our simulations, the final mass ratio q ranges between 0.88 and 0.90, consistent with the observational error for WD 1101+364.

Energies. The total energy at the end of the simulation is distributed among the "binding" energy of the gas bound to the binary, E_bound, the final orbital energy of the binary, E_orb,fin, and the total energy of the unbound material at infinity, E∞_tot,unb:

E_tot,fin = E_orb,fin + E_bound + E∞_tot,unb. (10)

We have compared the initial and the final total energies and found that the error is less than 0.11% in all our simulations.
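The closure of the energy budget, Eq. (10), can be checked directly against the tabulated values; a sketch using rows of Table 2 (all energies in units of 10^46 erg; the dictionary layout is our own):

```python
# Check of Eq. (10): E_tot,fin = E_orb,fin + E_bound + E_inf_tot,unb,
# using the 1.0RG0N row of Table 2 (energies in 1e46 erg).

def total_final_energy(e_orb_fin, e_bound, e_unb):
    return e_orb_fin + e_bound + e_unb

row_1p0RG0N = {"E_orb_fin": -10.992, "E_bound": -0.615,
               "E_unb": 4.093, "E_tot_fin": -7.514}
residual = total_final_energy(row_1p0RG0N["E_orb_fin"],
                              row_1p0RG0N["E_bound"],
                              row_1p0RG0N["E_unb"]) - row_1p0RG0N["E_tot_fin"]
```

The residual vanishes to the precision of the tabulated values, and the same holds for the other rows (e.g. 1.8RG2N: −52.873 − 0.171 + 9.637 = −43.407).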
E∞_tot,unb is composed of E∞_kin,unb, E∞_int,unb, and E∞_pot,unb: the kinetic, internal, and potential energies of the unbound material, respectively. We note that E∞_kin,unb is the dominant energy in the unbound material, though the internal energy of the unbound material at the end of the simulations is also non-negligible (see Table 2). We present E_bound for completeness, but the fate, accretion or ejection, of the several particles that remain bound to the binary cannot be resolved by the numerical method we use; on the timescale of our simulation they stay in an orbit within the RL of their stars. This energy includes the kinetic, internal, potential, and recombination energies of these few SPH gas particles. (Notation for Table 2: E∞_kin,unb = Σ_i m^unb_i v_i²/2, E∞_int,unb = Σ_i m^unb_i u_i, E∞_pot,unb = Σ_i m^unb_i φ_i, and E∞_tot,unb are the kinetic, internal, potential, and total energies of the unbound material, respectively. E_orb,fin is the orbital energy after the CEE. E_bound is the total energy of the particles that remained bound to the binary. E_tot,fin is the total energy of all the particles. All energies are in 10^46 erg.)

We should clarify that E_orb,fin does not have to match the two-body approximation, namely E_orb = −G M_1 M_2/(2 a_orb). In the latter, the potential assumes the form φ ∝ 1/r, while our code includes the softened form described in the appendix of Hernquist & Katz (1989). When the separation between the two SPH special particles exceeds two smoothing lengths, the potential reduces to the Keplerian form. However, for the point particles after the CEE this separation is less than two smoothing lengths, and the potential is softened accordingly. The difference in orbital energy between the two methods varies from about 3% (for 1.0RG0N) to 14% (for 1.8RG2N), with the Keplerian values being closer to zero. The initial orbital energy, given by Equation 4, is the same as in the two-body approximation.
Because energy is well conserved, we can equate Equations 6 and 10. For that, we also use Equations 1, 5, and 7. If we neglect E_bound, we can re-write the conservation of energy in fractions of the change in the orbital energy:

α_bind + α_rec + α∞_unb ≈ 1. (11)

We find that this is indeed the case in our simulations (see Table 3), and that the deviation from 1 is due to E_bound: the maximum deviation occurs in 1.0RG0N (∼7.8%) and the minimum in 1.4RG2N (∼0.03%). Note that if α_rec = α∞_unb = 0, the previous equation reduces to the standard energy formalism. However, the values of both α_rec and α∞_unb are non-negligible and comparable to α_bind. We emphasize that previously it had only been anticipated that α_bind is somewhat less than 1, and we provide new, improved constraints. Unfortunately, this is not yet a final solution of the problem, as α∞_unb cannot be easily predicted for any system; this is a subject of our future studies.

Orbital angular momenta. We find that the ejected material takes away more than 90% of the initial angular momentum of the binary. Values of γ vary between 1.42 and 1.87. This large range of values unfortunately does not allow the obtained values of γ to be useful for predicting the final parameters in a population synthesis for all possible DWD systems (for details, see Ivanova et al. 2013).

Final orbital parameters. We find the final orbital separation as a_orb,fin = (r_a + r_p)/2, where r_p is the periastron and r_a is the apastron. We ensure that these two quantities, r_p and r_a, do not change with time at the moment when we extract them from the simulations.

Figure 2. Final orbital periods versus initial mass of a RG. Triangles represent non-synchronized RGs, and circles represent synchronized RGs. Note: to compare alike cases, we show only the η = 2 cases with our standard resolution. Different η's give similar outcomes (see Table 3).
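Equation (11) can likewise be checked against Table 3; the sketch below reproduces the quoted maximum (~7.8%, model 1.0RG0N) and minimum (~0.03%, model 1.4RG2N) deviations from unity, up to the rounding of the tabulated α's:

```python
# Check of Eq. (11): alpha_bind + alpha_rec + alpha_inf_unb ~ 1,
# using (alpha_bind, alpha_rec, alpha_inf_unb) from Table 3.
alphas = {
    "1.0RG0N": (0.855, -0.208, 0.431),   # quoted maximum deviation, ~7.8%
    "1.4RG2N": (0.800, -0.159, 0.359),   # quoted minimum deviation, ~0.03%
}
deviation = {model: abs(sum(a) - 1.0) for model, a in alphas.items()}
```

Note that α_rec is negative in Table 3 because ΔE_orb < 0 in Eq. (5), so the three fractions still sum to approximately one.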
We calculate the final orbital period P_orb,fin of the binary from Kepler's third law, and the eccentricity of the post-CE orbit as e = (r_a − r_p)/(2 a_orb,fin). The latter is small in all the models, showing that post-CE orbits are almost circular (in previous studies, where the ejection of the CE was incomplete, the final eccentricity was larger, 0.08 or more, e.g. Ricker & Taam 2012). Figure 2 shows the final orbital periods plotted versus the initial mass of the RG. We see that, as expected, the more massive the star is, the tighter the orbit gets. We also find that the final orbital periods for the non-synchronized and synchronized cases are very similar (for the final state of the binary system, the only change due to synchronization was observed in the final eccentricity, albeit the final eccentricity is small in all the cases). We conclude that our best progenitor for WD 1101+364 is a 1.4−1.5 M⊙ RG.

(Notation for Table 3: the orbital angular momenta J_orb,ini and J_orb,fin are for the initial and final binary, respectively, in units of 10^52 g cm² s⁻¹. The parameter γ is defined in Eq. 9. The closest and farthest orbital separations are r_p and r_a, respectively, while a_orb,fin is the semimajor axis (all in R⊙). The orbital period P_orb,fin is given in days, and e is the eccentricity of the orbit. The energy fractions α_bind, α_rec, and α∞_unb are defined in Eq. 1, Eq. 5, and Eq. 7, respectively.)

CONCLUSIONS

To understand the energy budget during a CEE leading to a DWD formation, we perform non-synchronized and synchronized 3D hydrodynamic simulations with two EOSs. We confirm that taking recombination energy into account leads to a full ejection of the RG's envelope and the formation of a non-eccentric binary system, whilst if we do not take recombination energy into account, we obtain results similar to previous studies and only half of the RG's envelope is ejected.
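The orbital-element bookkeeping used above (a_orb,fin = (r_p + r_a)/2, e = (r_a − r_p)/(2 a_orb,fin), and P_orb,fin from Kepler's third law) can be sketched as follows; the constants and names are our own, and the check uses the 1.0RG0N row of Tables 2 and 3:

```python
import math

G = 6.6743e-8        # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33     # g
R_SUN = 6.957e10     # cm

def orbital_elements(r_p, r_a, m1, m2):
    """a = (r_p + r_a)/2, e = (r_a - r_p)/(2a), P from Kepler's third law.
    r_p, r_a in R_sun; m1, m2 in M_sun; returns (a [R_sun], e, P [days])."""
    a = 0.5 * (r_p + r_a)
    e = (r_a - r_p) / (2.0 * a)
    a_cm = a * R_SUN
    period_s = 2.0 * math.pi * math.sqrt(a_cm**3 / (G * (m1 + m2) * M_SUN))
    return a, e, period_s / 86400.0
```

With r_p = 2.015 R⊙, r_a = 2.115 R⊙ and final masses M_1 = 0.322 M⊙, M_2 = 0.360 M⊙, this reproduces the tabulated a_orb,fin = 2.065 R⊙, e = 0.024, and P_orb,fin = 0.416 d.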
The most important consideration appears to be not the value of the available recombination energy, but where and when this energy is released. Indeed, ionized material initially forms the circumbinary envelope. Recombination then takes place there, while the circumbinary envelope continues to expand. This results in the ejection of the circumbinary envelope and, effectively, of all the common envelope material. If instead the recombination energy had been released too early, the simulations would have ended up with an unexpelled circumbinary envelope, as in previous studies. In addition, we find that considering complete synchronization versus the non-synchronized case does not noticeably change the final results. We introduce a modification of the standard energy formalism (Webbink 1984; Livio & Soker 1988), with parameters describing the use of the recombination energy and the energy of the unbound material. The first can be found from initial stellar models, but the latter requires 3D simulations. For our set of models, α∞_unb takes values from about 0.2 to about 0.44. However, to generalize this result and make it useful for population synthesis, one needs a thorough parameter study; this is the subject of our future studies. As expected, we find that the more massive the parent RG star is, the tighter the final orbit gets. We do not find that the initial synchronization affects the final period; instead it only changes the energy and angular momentum carried away by the ejecta, presumably shaping the post-CE nebula. We also find that our binaries end up with an eccentricity smaller than 0.04, a result that has been expected theoretically but not yet produced in simulations. We applied our method to the case of WD 1101+364, a well-known DWD (see Marsh 1995). We inferred that its progenitor binary could have been composed of a 1.4−1.5 M⊙ RG and a 0.36 M⊙ WD companion, with P_orb,ini ≈ 31−33 days.

ACKNOWLEDGMENTS

JLAN acknowledges CONACyT for its support.
NI thanks NSERC Discovery and the Canada Research Chairs Program. JCL is supported by National Science Foundation (NSF) grant number AST-1313091. This research has been enabled by the use of computing resources provided by WestGrid and Compute/Calcul Canada.

Figure 1. Specific internal energy u profiles for the model 1.5RG2N. The black asterisks and gray triangles correspond to relaxed u profiles for TEOS and SEOS, respectively. The black solid line corresponds to the u profile from the stellar code.

Table 2. Energies and masses

Model     M_unb  M_1    M_2    E∞_kin,unb  E∞_int,unb  E∞_pot,unb  E∞_tot,unb  E_orb,fin  E_bound  E_tot,fin  ΔE_orb
1.0RG0N   0.663  0.322  0.360  3.645  0.473  -0.025  4.093  -10.992  -0.615  -7.514   -9.874
1.0RG1N   0.663  0.322  0.360  4.123  0.295  -0.015  4.403  -11.278  -0.582  -7.457   -10.220
1.0RG2N   0.663  0.322  0.360  4.081  0.543  -0.024  4.600  -11.469  -0.531  -7.400   -10.490
1.2RG2N   0.872  0.323  0.360  4.604  0.629  -0.041  5.192  -15.504  -0.639  -10.951  -14.159
1.4RG2N   1.079  0.319  0.360  6.790  0.907  -0.094  7.603  -22.911  -0.005  -15.313  -21.196
1.5RG2N   1.178  0.319  0.361  6.089  0.917  -0.096  6.910  -25.484  -0.026  -18.600  -23.469
1.5RG2NP  1.178  0.320  0.360  7.415  1.407  -0.159  8.663  -26.969  -0.366  -18.665  -24.958
1.6RG2N   1.274  0.323  0.362  5.623  1.812  -0.440  6.995  -27.741  -0.244  -20.990  -25.584
1.6RG0S   1.274  0.323  0.362  5.603  1.692  -0.381  6.914  -27.228  -0.309  -20.623  -24.987
1.7RG2N   1.370  0.323  0.366  5.854  2.042  -0.417  7.479  -33.692  -0.715  -26.928  -31.073
1.7RG0S   1.373  0.323  0.363  5.032  2.061  -0.610  6.483  -32.417  -0.466  -26.400  -29.723
1.8RG2N   1.478  0.318  0.362  8.333  1.675  -0.371  9.637  -52.873  -0.171  -43.407  -48.961

M_unb, M_1, and M_2 are the masses of the unbound material, the stripped RG core, and the old WD, in M⊙.

Table 3.
Orbital parameters

Model     J_orb,ini  J_orb,fin  γ      r_p    r_a    a_orb,fin  P_orb,fin  e      α_bind  α_rec   α∞_unb
1.0RG0N   14.340     1.188      1.861  2.015  2.115  2.065      0.416      0.024  0.855   -0.208  0.431
1.0RG1N   14.741     1.168      1.868  1.965  2.074  2.020      0.403      0.027  0.827   -0.201  0.431
1.0RG2N   15.119     1.157      1.873  1.947  2.036  2.000      0.397      0.022  0.808   -0.197  0.440
1.2RG2N   16.262     0.987      1.670  1.520  1.532  1.526      0.264      0.004  0.871   -0.192  0.367
1.4RG2N   17.116     0.759      1.557  1.070  1.089  1.080      0.158      0.009  0.800   -0.159  0.359
1.5RG2N   17.062     0.709      1.512  0.953  1.003  0.978      0.134      0.026  0.879   -0.172  0.294
1.5RG2NP  17.062     0.719      1.511  0.891  0.924  0.908      0.122      0.018  0.815   -0.148  0.347
1.6RG2N   17.685     0.678      1.479  0.880  0.948  0.914      0.122      0.037  0.893   -0.157  0.273
1.6RG0S   17.392     0.690      1.477  0.912  0.947  0.930      0.126      0.019  0.895   -0.160  0.277
1.7RG2N   17.151     0.610      1.449  0.746  0.771  0.758      0.092      0.016  0.922   -0.140  0.241
1.7RG0S   16.953     0.624      1.444  0.776  0.791  0.784      0.097      0.009  0.942   -0.146  0.218
1.8RG2N   14.932     0.446      1.417  0.464  0.493  0.479      0.047      0.030  0.902   -0.096  0.197

REFERENCES

de Kool M., 1990, ApJ, 358, 189
Eggleton P. P., 1983, ApJ, 268, 368
Gaburov E., Lombardi Jr. J. C., Portegies Zwart S., 2010, MNRAS, 402, 105
Glebbeek E., Pols O. R., Hurley J. R., 2008, A&A, 488, 1007
Han Z., Podsiadlowski P., Eggleton P. P., 1994, MNRAS, 270, 121
Han Z., Podsiadlowski P., Maxted P. F. L., Marsh T. R., Ivanova N., 2002, MNRAS, 336, 449
Hernquist L., Katz N., 1989, ApJS, 70, 419
Ivanova N., et al., 2013, A&A Rev., 21, 59
Livio M., Soker N., 1988, ApJ, 329, 764
Lombardi Jr. J. C., et al., 2011, ApJ, 737, 49
Lucy L. B., 1967, AJ, 72, 813
Marsh T. R., 1995, MNRAS, 275, L1
Nandez J. L. A., Ivanova N., Lombardi Jr. J. C., 2014, The Astrophysical Journal, 786, 39
Nelemans G., Tout C. A., 2005, MNRAS, 356, 753
Nelemans G., Verbunt F., Yungelson L. R., Portegies Zwart S. F., 2000, A&A, 360, 1011
Paczyński B., Ziółkowski J., 1968, Acta Astron., 18, 255
Passy J.-C., et al., 2012, ApJ, 744, 52
Paxton B., et al., 2011, ApJS, 192, 3
Ricker P. M., Taam R. E., 2012, ApJ, 746, 74
Roxburgh I. W., 1967, Nature, 215, 838
Sandquist E. L., et al., 1998, ApJ, 500, 909
Webbink R. F., 1984, ApJ, 277, 355
[ "arXiv:1407.6278v1 [quant-ph] Justifying the Classical-Quantum Divide of the Copenhagen Interpretation", "arXiv:1407.6278v1 [quant-ph] Justifying the Classical-Quantum Divide of the Copenhagen Interpretation" ]
[ "Arkady Bolotin \nBen-Gurion University of the Negev\nBeershebaIsrael\n" ]
[ "Ben-Gurion University of the Negev\nBeershebaIsrael" ]
Perhaps the most significant drawback, which the Copenhagen interpretation (still the most popular interpretation of quantum theory) suffers from, is the classical-quantum divide between the large classical systems that carry out measurements and the small quantum systems that they measure. So, an "ideal" alternative interpretation of quantum theory would either eliminate this divide or justify it in some reasonable way. The present paper demonstrates that it is possible to justify the classical-quantum dualism of the Copenhagen interpretation by way of the analysis of the time complexity of Schrödinger's equation.
[ "https://arxiv.org/pdf/1407.6278v1.pdf" ]
118,349,498
1407.6278
1ad3eec6c084000fb0a3999a4b280e82b34228c2
arXiv:1407.6278v1 [quant-ph] 23 Jul 2014

Justifying the Classical-Quantum Divide of the Copenhagen Interpretation

Arkady Bolotin
Ben-Gurion University of the Negev, Beersheba, Israel

July 24, 2014

Keywords: Copenhagen interpretation · Schrödinger's equation · Brute force · Time complexity · Exponential Time Hypothesis

Introduction

As stated by the standard Copenhagen position on quantum mechanics [1], microscopic systems under consideration are described by wave functions or state vectors, whose time evolution is governed by the Schrödinger equation, whereas observers can access those systems only through macroscopic measuring devices that (together with the observers themselves) are subject to the laws of classical physics. In this way, the standard Copenhagen position postulates that the world is governed by different laws: quantum mechanics (explicitly, Schrödinger's equation) for the microscopic world, and classical physics (Newton's laws of motion) for the macroscopic, directly accessible world. The existence of two divided physical domains is considered by many to be the core weakness of the Copenhagen position [2].
Besides this, however, such an interpretation of quantum mechanics constitutes a coherent framework for the description of the physical world, which works quite well in most practical circumstances. So, at least among practicing physicists, the Copenhagen interpretation is still the most popular interpretation today [3]. The burning question is -if one left the mathematical structure of the Copenhagen interpretation intact (since this structure is widely accepted), would be an alternative interpretation of quantum theory possible that either would eliminate the "absurd" classical-quantum dualism or would convincingly justify it? A great deal of papers (for example [4,5,6], just to name a few), which argue that the Schrödinger equation can flawlessly predict the future behavior of all physical systems microscopic and macroscopic alike (including observers represented by their own wave functions), proves that the first option -i.e., an interpretation without the classical-quantum dualism of the Copenhagen interpretation -is possible. The aim of this paper is to demonstrate that the second option -i.e., the alternative quantummechanical interpretation that justifies the classical-quantum divide through the analysis of the time complexity of Schrödinger's equation -is possible as well. The paper is structured as follows. First, we will consider the complexity of verification of exact solutions |ψ to Schrödinger's equation H|ψ = 0 and show that for a very wide class of non-relativistic many-body Hamiltonians H, the decision form of this equation (i.e., does this equation have a solution?) can be verified in the amount of time polynomial in the system's constituent particle number. 
Next, it will be demonstrated that the complexity of verification of the Schrödinger equation for coupled spin systems (characterized with the Hamiltonian, whose first part contains all the interactions described by the distances between constituent particles of the system, while the second part contains coupling to an external magnetic field as well as coupling between spins of the constituent particles) is polynomial as well. Then, it will be shown that if the Exponential Time Hypothesis held true, no generic algorithm capable of exactly solving the Schrödinger equation for an arbitrary physical Hamiltonian could be significantly faster than the brute-force procedure of generating and testing all possible candidate solutions of this equation. Finally, it will be shown that the key element of the Copenhagen interpretation, which postulates the necessity of classical concepts in order to describe quantum phenomena, including measurements, can be explained by NP-hardness of the problem of exactly solving the Schrödinger equation for a macroscopic system. Complexity of verification of Schrödinger's equation exact solutions Let us consider the family Φ Ψ of generic algorithms capable of exactly solving the Schrödinger equation for an arbitrary physical Hamiltonian H. By "exactly solving" we mean that the algorithms ∈ Φ Ψ can determine exactly (i.e., in exact, more or less closed form) all (or at least the first several lower) eigenvalues and corresponding eigenfunctions of a given Hamiltonian H, and by "generic" we mean that those algorithms can do it so for any and all possible physical systems with any possible numbers of constituent particles N . We know that the family Φ Ψ is not empty and contains as a minimum one member: it is brute force, that is, the procedure of generating and testing all possible candidate solutions to the Schrödinger equation with a given Hamiltonian H. 
Let us assume that the family Φ_Ψ contains at least one more member, which we will call the algorithm A(Φ_Ψ). Suppose the vector |ψ⟩ is the exact solution to the Schrödinger equation H|ψ⟩ = 0 for the ground state of a particular Hamiltonian H with zero eigenvalue E = 0, found by the algorithm A(Φ_Ψ). Let us show that the decision problem of this equation (i.e., does this equation have a solution?) can be verified quickly, i.e., in an amount of time polynomial in N. Clearly, the way to accomplish this is to substitute the solution |ψ⟩ back into the Schrödinger equation with the given H and estimate the runtime complexity of the operations needed to prove that |ψ⟩ is indeed the solution. Let L be the minimal number of elementary operations sufficient to compute the effect of the Hamiltonian H on the solution |ψ⟩; we will call L the complexity of verification. In the position basis {|r⟩} the Hamiltonian H is quadratic in the operators ∂/∂r_j; thus, using the results of the papers [7,8], the complexity L can be presented as follows:

L(H|ψ⟩) = L( ∂²Ψ/∂r²_1, . . . , ∂²Ψ/∂r²_j, . . . , ∂²Ψ/∂r²_N ) ≤ O( N² · cost(Ψ) ), (1)

where only the partial derivatives ∂²Ψ/∂r²_j (computed via the chain rule using the known partial derivatives of the elementary functions that form the wave function Ψ) are considered to contribute to the complexity of verification L (as binary elementary operations both of whose operands involve the function Ψ), while additions/subtractions and multiplications by arbitrary scalars are allowed for free; cost(Ψ) denotes the computational cost of the evaluation of the wave function Ψ(r, m) at particular values of the position vectors r = (r_1, . . . , r_j, . . . , r_N) and spin components along the z-axis m = (m_1, . . . , m_j, . . . , m_N).
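The "substitute the solution back and check" step can be illustrated numerically for the simplest textbook case; the sketch below (our own example, not from the paper) verifies that ψ(x) = exp(−x²/2) satisfies the dimensionless harmonic-oscillator equation −ψ″ + x²ψ = Eψ with E = 1, evaluating the second derivative by central finite differences:

```python
import math

def residual(psi, energy, x, h=1e-4):
    """| -psi''(x) + x^2 psi(x) - E psi(x) | with a central-difference psi''."""
    second = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / h**2
    return abs(-second + x * x * psi(x) - energy * psi(x))

psi = lambda x: math.exp(-0.5 * x * x)          # candidate ground-state solution
max_res = max(residual(psi, 1.0, x) for x in [-1.5, -0.5, 0.0, 0.7, 1.3])
```

The residual is small (limited only by finite-difference error) for the correct eigenvalue E = 1, and of order one for a wrong eigenvalue, which is the verification step the text describes.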
In the case of a weak or zero magnetic field along the z-axis and non-coupled spins, the Schrödinger equation for the N-body system with a very general potential described by the distances d_jk between the j-th and the k-th particles of the system (such as Coulomb, inverse-square, harmonic-oscillator and dipole types, or the Yukawa potential, the Gauss potential, the negative exponential potential and so on) can be transformed using hyperspherical coordinates ρ and Ω, where ρ is the hyperradius and Ω stands for the collective hyperspherical angles. Using this coordinate transformation, the exact solution Ψ(ρ, Ω) to the system's Schrödinger equation can be separated into the product of the hyperradial and hyperangular wave functions

Ψ(ρ, Ω) = φ(ρ) Υ(Ω), (2)

and so the computational cost of the evaluation of the solution Ψ(ρ, Ω) can be estimated separately as

cost(Ψ) = cost(φ) + cost(Υ) + 1. (3)

There are different representations of the hyperspherical coordinates, but according to the paper [9] calculations with them give the same results. Therefore, using a coordinate system in the N-dimensional configuration space analogous to the spherical coordinate system for 3-dimensional Euclidean space, we can estimate the numbers of elementary operations L(ρ) and L(Ω_j) needed to calculate the hyperspherical coordinates ρ and Ω = (Ω_1, . . . , Ω_j, . . . , Ω_{N−1}) as follows:

L(ρ) = L( √(r_1² + · · · + r_j² + · · · + r_N²) ) = O(N), (4)

L(Ω_j)|_{j<(N−2)} = L( arccos( r_j / √(r_1² + · · · + r_j² + · · · + r_N²) ) ) = O(N). (5)

The numbers of elementary operations required to evaluate the wave functions φ(ρ) and Υ(Ω) = υ_1(Ω_1) . . . υ_j(Ω_j) . . . υ_{N−1}(Ω_{N−1}) at the calculated values of the coordinates ρ and Ω are finite, as φ(ρ) and υ_j(Ω_j) can be expressed in finite (and not N-dependent) numbers of some previously known (elementary) functions of ρ and Ω_j.
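The hyperspherical transformation can be sketched as follows (names are ours; we use the standard N-sphere convention with tail sums for the angles, which differs slightly in detail from the flattened formulas above). The hyperradius and all angles together cost O(N) elementary operations via running tail sums, and the map inverts exactly:

```python
import math

def to_hyperspherical(x):
    """rho = sqrt(sum_j x_j^2); Omega_j = arccos(x_j / sqrt(x_j^2 + ... + x_N^2)).
    Standard N-sphere convention with tail sums (assumes x_N > 0 here,
    so a plain arccos suffices for the last angle)."""
    n = len(x)
    tail = [0.0] * (n + 1)
    for j in range(n - 1, -1, -1):          # O(N) running tail sums
        tail[j] = tail[j + 1] + x[j] * x[j]
    rho = math.sqrt(tail[0])
    omega = [math.acos(x[j] / math.sqrt(tail[j])) for j in range(n - 1)]
    return rho, omega

def to_cartesian(rho, omega):
    """Inverse map: x_j = rho * cos(Omega_j) * prod_{k<j} sin(Omega_k)."""
    n = len(omega) + 1
    x, s = [], 1.0
    for j in range(n - 1):
        x.append(rho * s * math.cos(omega[j]))
        s *= math.sin(omega[j])
    x.append(rho * s)
    return x
```

For x = (1, 2, 2) this gives ρ = 3, and mapping back recovers the Cartesian coordinates to machine precision.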
Hence, we find that

cost(Ψ) = poly(N). (6)

This implies that for a system of N particles, in which disjoint pairs interact by arbitrary two-particle potentials, the verification complexity L is upper-bounded by a polynomial,

L(HΨ(ρ, Ω)) ≤ poly(N). (7)

The corollary to this conclusion is that for a very wide class of non-relativistic many-body systems, the decision form of the Schrödinger equation is in NP, i.e., in the complexity class of computational problems whose solutions can be verified in polynomial time.

Coupled spin systems

Let us now turn to coupled spin systems, which can be characterized with the Hamiltonian H_c = H + H_int, where the first part H contains all the interactions described by the distances d_jk between the j-th and the k-th constituent particles of the system, whereas the second part H_int contains coupling to an external magnetic field as well as coupling between spins of the constituent particles. We will assume that the system characterized with the Hamiltonian H_c is in the ground state of H at zero energy. Consider the problem of the zero ground-state energy of the Hamiltonian function H(σ_1, . . . , σ_j, . . . , σ_N) that describes the energy of a configuration of a set of N spins σ_j = 2m_j ∈ {−ħ, +ħ} in classical Ising models of a spin glass [10,11]:

H(σ_1, . . . , σ_j, . . . , σ_N) = − Σ_{j<k} J_jk σ_j σ_k − μ Σ_{j=1}^{N} h_j σ_j, (8)

where the real numbers J_jk are coupling coefficients, h_j are external magnetic fields, and μ is the magnetic moment. Namely, does the ground state of the Ising Hamiltonian (8) have zero energy? As indicated by the paper [12], all "the famous" NP problems (such as Karp's 21 NP-complete problems [13,14]) can be written down as the Ising Hamiltonian (8) with only a polynomial number of steps (to be exact, with a polynomial number of spins, which scales no faster than N³). Therefore, in just a polynomial number of steps one can get from any NP-complete problem to the problem of the zero ground state energy H(σ 1 ,. . ., σ j ,. .
., σ_N) = 0 of the Ising Hamiltonian (8). On the other hand, the generic algorithm A(Φ_Ψ) can find whether the Schrödinger equation H(σ^z_1, . . . , σ^z_j, . . . , σ^z_N)Ψ(ρ, Ω, m) = 0 with the quantum version of the Ising Hamiltonian (8) has a solution and, as a result, resolve the NP-complete problem of interest. Consequently, we arrive at the following conclusion: as an arbitrary NP problem is polynomial-time reducible to any NP-complete problem, and subsequently to the decision problem of the Schrödinger equation with the quantum version of the Ising Hamiltonian (8) that encodes the given NP-complete problem, any problem in the complexity class NP can be exactly solved by the generic algorithm A(Φ_Ψ) with only polynomially more work. This conclusion means that if the generic algorithm A(Φ_Ψ) were efficient (i.e., polynomial in N), then the complexity class NP would be equal to P, the class of computational problems solvable in polynomial time. However, as is now prevalently believed [15], P ≠ NP, and so it is almost certain that neither A(Φ_Ψ) nor any other method of the family Φ_Ψ can be polynomial in N.

The exact generic algorithm A(Φ_Ψ) versus brute force

Suppose the conjecture P ≠ NP is true. Then the question naturally arises as to what kind of superpolynomial running time is possible for the generic algorithm A(Φ_Ψ) compared to brute force. On the basis of the postulates of quantum mechanics (specifically, the postulate that the Hilbert space H for a composite system containing two subsystems is the tensor product H = H_1 ⊗ H_2 of the Hilbert spaces H_1 and H_2 of the two constituent subsystems), when solving the Schrödinger equation for a system of N particles the brute-force method will run exponentially in N. So, is it possible that A(Φ_Ψ) is significantly faster than brute force? Assume that A(Φ_Ψ) is a sub-exponential time algorithm.
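The brute-force member of Φ_Ψ, restricted to the classical Hamiltonian (8), is easy to sketch: enumerate all 2^N spin configurations (here with σ_j = ±1, i.e. in units of ħ) and ask whether the minimum energy is zero. The exponential loop is precisely the O*(2^N) behaviour at issue; the names and the toy instance are ours:

```python
import itertools

def ising_energy(spins, J, h, mu=1.0):
    """Eq. (8): E = -sum_{j<k} J[j][k] s_j s_k - mu * sum_j h[j] s_j, s_j = +/-1."""
    n = len(spins)
    pair = sum(J[j][k] * spins[j] * spins[k]
               for j in range(n) for k in range(j + 1, n))
    field = sum(h[j] * spins[j] for j in range(n))
    return -pair - mu * field

def ground_state_energy(J, h, mu=1.0):
    """Brute force over all 2^N configurations: O*(2^N) running time."""
    n = len(h)
    return min(ising_energy(s, J, h, mu)
               for s in itertools.product((-1, +1), repeat=n))

def has_zero_ground_state(J, h, mu=1.0, eps=1e-12):
    """Decision form: does the ground state of (8) have zero energy?"""
    return abs(ground_state_energy(J, h, mu)) < eps
```

For a three-spin ferromagnet (all J_jk = 1, h = 0) the ground state is the aligned configuration with energy −3, so the zero-energy decision is negative; with all couplings and fields zero it is trivially positive.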
Since any NP-complete problem -including the 3-SAT problem -can be written down as the decision problem of the Schrödinger equation for the quantum version of the Ising Hamiltonian (8), it follows that the generic algorithm A(Φ Ψ ) can solve any NP-complete problem in sub-exponential time. Yet, as laid down by the widely believed conjecture called the Exponential Time Hypothesis (ETH), the 3-SAT problem does not have a sub-exponential time algorithm [16,17]. Hence, if the runtime complexity of A(Φ Ψ ) were sub-exponential in N , then ETH could be shown to be false. This would imply that many computational problems known to be solved in time O * (2 N ) (such as CHROMATIC NUMBER on an N -vertex graph, HITTING SET over an N -element universe, TRAVELING SALESMAN problem on N cities and so on) can be improved to O * (c N ) with some c < 2 (where the O * notation is used that suppresses factors polynomial in N ). However, such an improvement would be highly surprising since the lower bound O * (2 N ) is tight: there is strong evidence that this lower bound matches the running time of the best possible algorithms for those problems [18]. Thus, most likely, no method of the family Φ Ψ (of generic algorithms capable of exactly solving the Schrödinger equation for an arbitrary physical Hamiltonian H) could be significantly faster than the brute-force procedure of merely generating and testing all possible candidate solutions of this equation. 
Solving the Schrödinger equation for a macroscopic system In order to be able to meaningfully talk about the state of a macroscopic system within the formalism of quantum theory, explicitly, as being described by a certain wave function obeying the Schrödinger equation with a particular Hamiltonian H M , it is important to bear in mind the following: Such a wave function can be only an exact (analytical) solution of Schrödinger's equation obtained by some generic algorithm capable of exactly solving this equation with any physical Hamiltonian. Let us show this. The exact solution to the Schrödinger equation for a given physical system conveys the most complete information that can be known about the system. In opposition, when solving Schrödinger's equation numerically even small round-off errors in the coefficients of the characteristic polynomial (for the matrix representing the system Hamiltonian H) can end up being a large error in the eigenvalues and hence in the eigenvectors. So, in a sense numerical solutions of Schrödinger's equation can be viewed as the loss of the complete information theoretically possible about the system analogous to the loss of the information from the system into the environment in the models of environmental-induced decoherence [19,20]. Thus, if one hopes to make headway in foundational matters, say, to resolve the problem of macroscopic quantum superpositions -linear combinations of the solutions to the macroscopic system's Schrödinger equation -one has to consider only the exact solutions. If not, just by averaging over the possible span errors of the numerical solutions one may get the complete loss of coherence of the phase angles between the numerically acquired elements of the macroscopic superposition. By virtue of the large (and essentially unchecked) number of microscopic constituent particles in any macroscopic system, an ambiguity in the identification of a macroscopic system's Hamiltonian H M is inevitable. 
This indicates that including or excluding many microscopic degrees of freedom into or from the macroscopic Hamiltonian H_M would have no bearing on its identification. For example, adding or removing a hundred molecules (accounting for numerous microscopic degrees of freedom) to or from the surface of a laptop would have no relevance to the laptop's properties or functioning, and so to the laptop's Hamiltonian. In accordance with the paper [21], a parameterized form of an arbitrary Hamiltonian H determining a quantum (i.e., microscopic) system can be written as

H(θ) = Σ_{j}^{M(N)} a_j(θ) X̂_j, (10)

where θ is a vector consisting of parameters governing the quantum (microscopic) evolution, a_j are some known real-valued functions of θ, X̂_j are known Hermitian operators, and M(N) stands for an integer-valued function of the system's constituent particle number N (typically M ≪ N² − 1). However, one would be hard-pressed to write down a similar parameterized form H_M(θ) for the macroscopic Hamiltonian, because the precise identification of the microscopic parameters θ (i.e., microscopic degrees of freedom) governing a macroscopic system's evolution would be impossible. Therefore, to be able (even in principle) to exactly solve the Schrödinger equation for a macroscopic system, one must use an algorithm that is not written in terms specialized to particular microscopic degrees of freedom θ, that is, to a precisely identifiable Hamiltonian H_M(θ). This means that to accomplish such a task one would be forced to use only a generic algorithm, namely, one of the family Φ_Ψ.
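For a small, fully specified microscopic system, the parameterized form of Eq. (10) can be sketched concretely (a two-qubit toy example of our own; taking a_j(θ) = θ_j is the simplest admissible choice of the real-valued coefficient functions):

```python
import numpy as np

# Pauli matrices and identity: the known Hermitian operators X_j of Eq. (10)
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def parameterized_hamiltonian(theta, operators):
    """H(theta) = sum_j a_j(theta) X_j, here with a_j(theta) = theta_j."""
    return sum(t * op for t, op in zip(theta, operators))

# M(N) = 2 known Hermitian operators on two qubits
ops = [np.kron(SZ, SZ), np.kron(SX, I2)]
H = parameterized_hamiltonian([0.3, 1.2], ops)
```

Since the two operators anticommute and each squares to the identity, H² = (0.3² + 1.2²) I, so the spectrum is ±√1.53, each with multiplicity two; this is a quick consistency check that the construction yields a valid Hermitian H(θ).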
But since, assuming ETH, there is no generic algorithm in Φ_Ψ that can be significantly faster than brute force, this implies that for a macroscopic system (one containing roughly Avogadro's number N_A ≈ 10^24 of constituent microscopic particles) the time needed to exactly solve the Schrödinger equation would be proportional, at the lowest estimate, to O(2^{N_A}) elementary operations, which, whatever amount of time an elementary operation may take, would exceed the current age of the universe by many orders of magnitude.

At this point, one can object that NP-hardness of the problem of exactly solving the Schrödinger equation for a macroscopic system does not mean that every instance of this problem will take time exponential in N_A, since, strictly speaking, NP-hardness is a worst-case notion. So it might be possible that for many macroscopic systems (or at least for some of them) finding the exact solutions to Schrödinger's equation would take a relatively short time (at any rate, a time polynomial in N_A). However, even if we assume that exactly solving Schrödinger's equation takes exponential time only for a handful of specific macroscopic systems, this would nevertheless mean, given the inescapable coupling of such "unlucky" systems to the rest of the universe, that eventually (after some fast coupling period) finding the exact wave functions of all the other macroscopic systems of the universe would take exponential time too.

Concluding remarks

Thus, assuming ETH, there is no real possibility of exactly solving the Schrödinger equation for macroscopic systems, and consequently there is no sense in describing them within the formalism of quantum theory.
The obvious deduction from this conclusion is that the "mysterious" division between the microscopic world governed by quantum mechanics and the macroscopic world obeying classical physics could be explained by the NP-hardness of the problem of exactly solving the Schrödinger equation for a macroscopic system. If ETH holds, we cannot take the exact wave function (or the exact state vector) as a universal description of reality, since for macroscopic physical systems such a description would be empty, i.e., without realistically reachable predictive content. After all, the time complexity of the Schrödinger equation, that is, the amount of time taken to exactly solve this equation for a given system as a function of the system's constituent particle number N, could be the very reason that justifies the classical-quantum dualism of the Copenhagen interpretation, explaining why macroscopic measuring devices cannot be realistically described by the same equation as the microscopic systems under consideration.

have zero energy? Since the generic algorithm A(Φ_Ψ) can exactly solve the Schrödinger equation for all Hamiltonians, it can also solve the Schrödinger equation H_c|ψ⟩ = 0 for the Hamiltonian H_c = H + H(σ^z_1, ..., σ^z_j, ..., σ^z_N), where the spins σ_j in the classical Ising Hamiltonian (8) are simply replaced by quantum operators, the Pauli spin-1/2 matrices σ^z_j. As is readily observed, the decision problem of the Schrödinger equation H_c|ψ⟩ = 0 (i.e., does this equation have a solution?) is in the complexity class NP. Let Ψ(ρ, Ω, m_c) denote the exact solution to this equation in the position-spin basis. As has just been demonstrated, the decision problem of the Schrödinger equation HΨ(ρ, Ω, m_c) = 0 for the non-coupled spin Hamiltonian H is in NP. Regarding the coupling Hamiltonian H(σ^z_1, ..., σ^z_j, ..., σ^z_N), the validity of a positive answer (i.e., there is a spin configuration m_c = (m_c1, ..., m_cj, ..., m_cN) with zero energy) can be tested on a deterministic machine by calculating the Hamiltonian function (8) of that configuration m_c in polynomial time. Hence, the complexity of verification is

    L(H_c|ψ⟩) = L( HΨ(ρ, Ω, m_c), H(σ^z_1, ..., σ^z_j, ..., σ^z_N)Ψ(ρ, Ω, m_c) )

References

[1] Faye J. Copenhagen Interpretation of Quantum Mechanics. The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.). Available: http://plato.stanford.edu/archives/fall2008/entries/qm-copenhagen/.
[2] Landsman N. Between classical and quantum. In: Handbook of the Philosophy of Science, pp. 417-553. Elsevier, 2007.
[3] Schlosshauer M., Kofler J., Zeilinger A. A snapshot of foundational attitudes toward quantum mechanics. Studies in History and Philosophy of Science Part B, 44(3):222-230, 2013. arXiv:1301.1069.
[4] Styer D. et al. Nine formulations of quantum mechanics. Am. J. Phys., 70(3), March 2002.
[5] Vaidman L. Quantum Theory and Determinism. arXiv:1405.4222, 2014.
[6] Penrose R. On the Gravitization of Quantum Mechanics 1: Quantum State Reduction. Foundations of Physics, 44(5):557-575, 2014.
[7] Paterson M. and Stockmeyer L. On the number of nonscalar multiplications necessary to evaluate polynomials. SIAM J. Comput., 2(1), 1973.
[8] Baur W. and Strassen V. The complexity of partial derivatives. Theor. Comp. Sc., 22:317-330, 1983.
[9] Zhang R. and Deng C. Exact solutions of the Schrödinger equation for some quantum-mechanical many-body systems. Phys. Rev. A, 47(1):71-77, 1993.
[10] Fischer K. and Hertz J. Spin Glasses. Cambridge University Press, 1991.
[11] Guerra F. and Toninelli F. The thermodynamic limit in mean field spin glass models. Communications in Math. Phys., 230(1):71-79, 2002.
[12] Lucas A. Ising formulations of many NP problems. Frontiers in Physics, 2(5):1-15, 2014.
[13] Karp R. Reducibility Among Combinatorial Problems. In: Miller R. and Thatcher J. (eds.), Complexity of Computer Computations, pp. 85-103. New York: Plenum, 1972.
[14] Garey M. and Johnson D. Computers and Intractability: a Guide to the Theory of NP-Completeness. New York: Freeman & Co, 1979.
[15] Gasarch W. P=?NP poll. SIGACT News, 33(2):34-47, 2002. Available: http://www.cs.umd.edu/~gasarch/papers/poll.pdf.
[16] Impagliazzo R., Paturi R., Zane F. Which problems have strongly exponential complexity? J. Comput. System Sci., 63:512-530, 2001.
[17] Woeginger G. Exact Algorithms for NP-hard Problems: A Survey. In: Combinatorial Optimization - Eureka, You Shrink!, pp. 185-207. Springer-Verlag, 2003.
[18] Lokshtanov D., Marx D., Saurabh S. Lower bounds based on the exponential time hypothesis. Bulletin of the EATCS, 84:41-71, 2011.
[19] Healey R. Quantum decoherence in a pragmatist view: Resolving the measurement problem. arXiv:1207.7105, 2012.
[20] Schlosshauer M. The quantum-to-classical transition and decoherence. In: Aspelmeyer M., Calarco T., Eisert J., Schmidt-Kaler F. (eds.), Handbook of Quantum Information. Springer: Berlin/Heidelberg, 2014.
[21] Zhang J. and Sarovar M. Quantum Hamiltonian identification from measurement time traces. arXiv:1401.5780, 2014.
El Uso de Repositorios y su Importancia para la Educación en Ingeniería
(The Use of Repositories and Their Importance for Engineering Education)

Jose Texier (Universidad Nacional Experimental del Tachira (UNET), Venezuela; Servicio de Difusión de la Creación Intelectual, Universidad Nacional de La Plata (SeDiCI), Argentina; CONICET, Argentina), Marisa De Giusti (SeDiCI, Argentina), Nestor Oviedo (SeDiCI, Argentina), Gonzalo L. Villarreal (SeDiCI, Argentina), Ariel Lira (SeDiCI, Argentina; CONICET, Argentina)

Keywords: repositories, engineering, open access, dataset, institutional repositories.

Abstract

Institutional repositories are deposits of digital files of different types, kept so that they can be accessed, disseminated, and preserved. This article aims to explain the importance of repositories in the academic field of engineering as a way for teachers, researchers, and students to democratize knowledge and thereby contribute to social and human development. These repositories, usually framed within the Open Access initiative, make it possible to ensure free and open access (without legal or economic restrictions) for the different sectors of society, so that they can make use of the services repositories offer. Finally, repositories are evolving in the academic and scientific spheres, and the different engineering disciplines must prepare to provide a range of services through these systems for the society of today and of the future.

INTRODUCTION

In recent years, institutional repositories (IRs) have gained importance in the academic and scientific community because they represent a source of specialized, organized digital information that is accessible to readers from diverse areas. IRs are computer systems dedicated to managing the scientific and academic works of various institutions freely and at no cost, that is, following the premises of the Open Access (OA) movement. OA is understood as immediate access to academic, scientific, or any other kind of works without registration, subscription, or payment requirements.
For this reason, the OA movement and repositories are helping to transform the process of publishing scientific articles (1), allowing instant or immediate access to peer-reviewed publications thanks to different applications (Google Scholar, Microsoft Academic, Arxiv, the institutional repositories of universities) and computing services (alerts based on previously defined criteria, RSS, mailing lists, the various social networks, etc.). The OA movement has given rise to other movements such as Open Data, Open Knowledge, and Data Sharing, which encourage the growing installation and use of repositories of scientific documents and, to a lesser extent, of repositories of administrative documents and of data sets, also known as datasets or raw data. Figure 1 shows the growth curve of the repositories registered in the Directory of Open Access Repositories (OpenDOAR) (2): in December 2005 there were 28 registered repositories, while in May 2012 there are 2183. Marker "A" shows growth from 400 to 2183 repositories, i.e., 1783 new repositories, an increase of 445.75%. For marker "B" the growth is 172.88% (1383 added, from 800 to 2183), for marker "C" 81.92% (983 added, from 1200 to 2183), for marker "D" 36.44% (583 added, from 1600 to 2183), and for marker "E" 9.15% (183 added, from 2000 to 2183). Sustained growth is therefore observed. Our research focuses on reviewing dataset repositories in engineering and their educational implications, strengthening repositories as one more tool in the training of the future engineer, in addition to achieving the preservation of those documents and data over time and guaranteeing their access for future generations.
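The growth percentages quoted for Figure 1 can be checked with a few lines; the repository counts are the ones given in the text itself:

```python
# Repository counts at the Figure 1 markers, from the OpenDOAR snapshot above.
def growth_pct(old, new):
    """Percentage growth from `old` to `new` registered repositories."""
    return (new - old) / old * 100

final = 2183
for start in (400, 800, 1200, 1600, 2000):   # markers A through E
    print(start, final - start, round(growth_pct(start, final), 2))
```

Running this reproduces the figures in the text: 445.75%, 172.88%, 81.92%, 36.44%, and 9.15%.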
Data sharing consists of sharing the data behind scientists' research, with the aim of joining efforts and optimizing the use of resources (3); that is, datasets can be reused for different purposes by other researchers. It began to be studied as early as 1901, when authors of biometric studies were asked to deposit their data somewhere so that it could be consulted (4). Later, in 1971, one of the oldest repositories, the Protein Data Bank, was launched. Then, in 1983, the Journal of Biological Chemistry became one of the first journals to require research data as a condition for publishing articles (5). In 2007, Piwowar showed a 69% increase in the number of citations in biomedical journals (6). These facts establish a positive perception of the use of repositories in biomedicine, but it does not seem to extend to other disciplines, since researchers perceive a risk in publishing their raw data and prefer to keep it on their own computers, at the risk that it disappears forever. Hence the question arises of where such data should be deposited so that it can be preserved and shared. One contribution of this work is to help engineering researchers deposit their works. The statistics in OpenDOAR show that only 3.62% of repositories (79 of 2183) allow datasets to be deposited. This situation generates a degree of confusion and discouragement among researchers when it comes to depositing their research data. It is therefore necessary to encourage publication through data sharing as a widespread practice, together with policies for the creation and use of repositories, helping drive the cultural change toward better engineering education.
One example is the 2009 Nature editorial (7), which states: "to carry out data sharing, the scientific community needs the digital equivalent of today's libraries, that is, someone who preserves and makes accessible all that data," pointing directly to university libraries (as institutions) and to data management (as a branch of knowledge) as the pillars on which the future of data sharing must rest. Similar examples can be found in a Science editorial (8) and in various articles and publication policies of journals and institutions such as the British Medical Journal, PLoS ONE, the UK National Archives, the National Institutes of Health (NIH), the Research Information Network (RIN), and the National Science Foundation, among others. Likewise, different university institutions, through their libraries, are committed to preserving the data behind scholarly works. This article describes the concept of repositories and the representation of original data in the different engineering disciplines. It then presents the educational implications of repositories, some final considerations, and suggestions for future work.

REPOSITORIES

DEFINITION

Repositories, also known as digital repositories, consist of a set of digital files representing scientific and academic products that users can access. Specifically, institutional repositories are interoperable web structures of computing services dedicated to disseminating, in perpetuity, the scientific and academic resources (physical or digital) of universities by enumerating a set of specific data (metadata), so that those resources can be collected, catalogued, accessed, managed, disseminated, and preserved freely and at no cost; they are therefore closely tied to the ideals and objectives of Open Access.
These resources are represented by persistently recording the set of data associated with them. These data serve as a synthesis of, and substitute for, the "real" object, which makes it possible to distribute the resource without requiring the object itself, using its representation instead. The activities of cataloguing, accessing, managing, and disseminating content are the most consolidated as repositories have grown; by contrast, the collection of materials and preservation are still taking their first steps. The three most-consulted directories according to comScore.com (9), in which some characteristics of the registered repositories can be observed, are:

• OpenDOAR, with 2183 repositories registered as of 13/05/12 (10).
• ROAR (Registry of Open Access Repositories), with 2875 repositories registered as of 13/05/12 (11).
• University of Illinois OAI-PMH Data Provider Registry, with 2906 repositories as of 13/05/12 (12).

This work uses OpenDOAR because it is the most consulted directory and underpins the statistics analyzed below.

THE OPEN ACCESS INITIATIVE

Open Access (OA) aims to ensure free and open access to scientific output. One way to achieve this goal is the creation of institutional repositories where that scientific output is deposited to make it accessible without restrictions and to preserve it digitally as a common good for society today and in the future. The open access movement rests on two fundamental strategies: open access journals on the one hand, and institutional repositories on the other. The year 1966 saw the launch of the Educational Resources Information Center (ERIC), a digital library specializing in education, and of Medline, a bibliographic biomedicine database produced by the National Library of Medicine (NLM) of the United States.
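The interoperability mentioned above typically rests on the OAI-PMH protocol: repositories expose their metadata through a simple HTTP interface so that directories and aggregators can harvest it. A minimal sketch of building such a harvesting request follows; the endpoint URL is hypothetical, since every repository publishes its own OAI base URL:

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **kwargs):
    """Build an OAI-PMH request URL. OAI-PMH is the harvesting protocol
    that repositories expose so that registries can collect their
    metadata records."""
    params = {"verb": verb, **kwargs}
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint; ListRecords with Dublin Core metadata is the
# standard way to harvest every record a repository exposes.
url = oai_request("http://example.org/oai", "ListRecords", metadataPrefix="oai_dc")
print(url)  # http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

Issuing an HTTP GET on such a URL returns an XML stream of records, which is how registries like the University of Illinois OAI-PMH Data Provider Registry populate their counts.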
One of the leading voices is Peter Suber (13), who in 2005 pointed out a great divide in scientific publishing: publications that are freely available on the internet versus those that readers must pay to access. He also enumerated a series of benefits that OA has generated, highlighting that open access articles have been cited 50-300% more than non-OA articles in the same journal, and underscoring the importance of self-archiving as the movement's flagship practice.

DATA REPOSITORIES

Final research data are understood as the recorded factual material, accepted by the scientific community, that is necessary to validate research results, according to the National Institutes of Health (3). Therefore, the following are not final research data: laboratory notes, partial data sets, preliminary analyses, drafts of scientific papers, plans for future research, peer-reviewed reports, communications with colleagues, or physical objects such as laboratory specimens. Torres-Salinas et al. (2012) (3) present different taxonomies of research data that help determine more clearly what can be shared and in what context. All the repositories related to source data for research compiled in the surveyed literature indicate that this current is in full expansion. This makes it clear that both researchers in the information sciences and the administrations of the institutions providing related services must prepare for the rise of data repositories, which grow hand in hand with technology, a situation many authors call the information age (14).
It is estimated that in the coming years the various disciplines, subdisciplines, and interdisciplines of engineering will begin to generate their own metadata standards, and data repositories will likewise emerge. Engineers, and the various units involved, must therefore prepare for the consolidation of repositories in engineering and for services that will stimulate and change education in the field. Repositories are systems that must be built on some software platform, and Figure 4 accordingly shows the distribution of registered repositories by software type. The leader is DSpace (developed under a free software license) with 872 repositories (39.9%), although 16.4% (358 repositories) report unknown software, indicating that many of them use in-house developments or simply did not register their software type. Third and fourth place also go to free software: EPrints and Digital Commons. The top three free-software repository platforms thus account for 59.14%, i.e., 1291 repositories.

REPOSITORIES IN ENGINEERING

GUIDELINES FOR IMPLEMENTING REPOSITORIES

According to the LEADIRS II manual by Barton and Waters (16), the following steps can be taken into account when implementing a repository:
• Learn about the process by reading about and examining other institutional repositories.
• Develop a service definition and plan:
○ Assess your university's needs.
○ Develop a cost model based on this plan.
○ Create a schedule and timetable.
• Develop policies governing content collection, distribution, and maintenance.
• Form the team.
• Technology (choose and install the software).
• Marketing.
• Publicize the service.
• Put the service into operation.

These guidelines are basic and may of course vary by institution, but they help mark out a logical line of implementation.

EDUCATIONAL IMPLICATIONS OF REPOSITORIES

Dave Turek, head of supercomputer development at IBM, stated in May 2012 that from the beginning of human history until 2003, according to IBM's calculations, five exabytes (five billion gigabytes) of information had been generated. Last year, that same amount of data was generated every two days, and by next year, Turek forecasts, the same amount will be generated every 10 minutes (17). This statement is clear evidence that we live in the information age, which has been consolidated by the development of the internet and of information technologies (14). This change of era can be observed culturally, socially, economically, and technologically; today, children and young people grow up with a computer or electronic device and are known as digital natives (18). The situation described by Turek poses a great challenge to society, companies, and academic institutions concerning the management of large data sets and the generation of information. Education, at all levels, must begin to be rethought on the basis of the era we live in, allowing, among other things, free and open access to all data through repositories, which guarantee the collection, dissemination, and preservation of information for society today and in the future. A large part of that data comes from research, which must meet quality standards such as peer review. In general, new works build on previous works and depend mainly on scientists' ability to consult and share scientific publications and research data.
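Turek's figures above imply a concrete jump in the data production rate, which a short calculation makes explicit:

```python
# Five exabytes every two days (the year before the 2012 statement)
# versus five exabytes every ten minutes (Turek's forecast).
EXABYTE = 10**18                       # bytes
data = 5 * EXABYTE

rate_before = data / (2 * 24 * 60)     # bytes per minute over a two-day window
rate_forecast = data / 10              # bytes per minute over a ten-minute window

# 2880 minutes / 10 minutes: the forecast rate is 288 times higher.
print(rate_forecast / rate_before)
```

The same five exabytes compressed from a 2880-minute window into a 10-minute window means a roughly 288-fold increase in the rate at which information is produced.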
The dissemination of information must therefore be rapid so that it can contribute to innovation and avoid the repetition of research that adds nothing to science. In 2011, Castro et al. (19) showed in their research how repositories are transforming primary and secondary education in Portugal, gradually changing the structures and ways of thinking of the actors at those levels and favoring academic quality. Similarly, Xia and Opperman (20) in 2009 highlighted the importance of repositories in the training of future master's and intermediate-degree students, reporting that 49.50% of the deposited resources are student works, which makes students main actors in the new works available in repositories. Both works conclude that digital content from diverse sources is increasing, that such content is exchanged and published in repositories, that information is reused all the time, and that the software behind these repositories is mostly released under free software licenses. Some strengths and limitations (19) that ground the relationship between repositories and education are listed below:

Strengths:
• Facilitate changes in pedagogical practices.
• Encourage more interactive and constructive teaching practices.
• Induce and facilitate the production and use of tools, content, resources, and information in digital format.
• Facilitate collaborative approaches to teaching.
• Minimize the digital divide by allowing remote access and low-cost content, modules, and courses.
• Encourage the inclusion of citizens with special needs in teaching and learning.
• Develop and strengthen a culture of lifelong learning.
• Maintain information over time and guarantee access for future generations.
Limitations:
• Technical: lack of internet availability in some sectors.
• Economic: lack of resources to invest in hardware and software, limiting the development of computing tools and the maintenance of long-term projects.
• Social: lack of the skills needed to use technical inventions.
• Cultural: resistance to distributing or using resources produced by other teachers or institutions.
• State policies and legal frameworks.

Finally, sharing the primary or raw data of research (data sharing) will increase research efficiency and the quality of the education students receive, since those datasets can be used to explore new or related hypotheses, besides being indispensable for developing and validating study methods, analysis techniques, and software implementations. Naturally, if those data are free and open to students, they will generate more innovation, among other benefits, helping to identify errors in the early stages of work and avoiding duplicate data collection. Part of the success of education will therefore lie in data that can be collected, catalogued, accessed, managed, disseminated, and preserved; that is, repositories for engineering are, and will continue to be, needed.

FINAL CONSIDERATIONS

• More studies are required on the behavior of data repositories in engineering, since access to scientific information, its dissemination, and its preservation are major challenges of the digital age.
• This work shows the existing weakness in the formalization of repositories in engineering, but this should simply be seen as a call to continue helping to build standards and tools that allow easy adaptation to the various engineering disciplines.
• The Data Curation movement is gaining great importance in the world of repositories, since it will enrich the understanding of how to disseminate and preserve data. There are projects such as those coordinated by the University of Illinois (21) and Purdue University (22), as well as graduate courses (23 and 24), to promote quality materials and guarantee long-term access.
• Increase the visibility of the academic and scientific production of the Engineering disciplines by consolidating and increasing repositories in universities, taking into account the small number of repositories in those institutions as well as the lack of websites for the different Engineering disciplines, which would allow those disciplines to be known in more detail.
• Starting from the principle that there are no clear policies among researchers about how to preserve their data, since data are often scattered across different storage media (CDs, DVDs, external hard drives, PCs, e-mail, the cloud), irrecoverable losses will occur for diverse causes; therefore, policies must be established that help the community preserve their data over time.
• Journals must continue requiring that publications deposit the data they use in a repository; repositories must therefore provide a suitable platform to catalog all the existing diversity of data. Examples: Nature, Science, PLoS ONE, etc.
• Repositories and Learning Management Systems (LMS) have completely different objectives: repositories are designed for the access, dissemination, and preservation of documents and data, whereas e-learning platforms integrate a set of tools for online teaching and learning, in non-face-to-face or blended form, generating student-teacher interaction.
• A recommendation for the file formats of the data to be preserved is that the formats should not be proprietary, compressed, or encrypted.
FUTURE WORK
• Establish policies for the constitution and cataloging of data repositories.
• Generate a proposal for a custom ranking that takes into account characteristics of the discipline in question, for example the statistics of the economics dataset collection called RePEc (25).
• Analyze the models of the reference or most consulted repositories in order to propose a general repository model, allowing other platforms to be generated or components to be developed on top of existing ones, using methodologies such as MDA or software product lines.

Fig. 1. Behavior of the OpenDOAR Database
Fig. 4. Repository Software according to OpenDOAR

1. By format
• Texts • Numbers • Images • Others.
2. By obtaining process
• Experimental ○ Genetic sequences ○ Chromatographies
• Simulations ○ Climate models ○ Economic models
• Observational ○ Surveys ○ Unrepeatable experiments.
3. By collection objective
• Specific ○ Of interest only to a single research project or work.
• Medium scope ○ Of interest to a specific discipline.
• General interest ○ Of interest to science as a whole and even of social interest.
4. By research phase
• Preliminary data ○ Freshly extracted data without any processing, known in English as raw data.
• Final data ○ Data that have already been processed and combined with other data (in English, final research data).

This classification serves to categorize repositories initially, and in this way it can help the constitution of repositories under a standard. Some projects that confirm the emergence of data repositories are listed below:

Project | Characteristics | Reference
LOD2 European List of Catalogues | 51 Open Data catalogues from 27 countries are registered | http://lod2.okfn.org/eu-data-catalogues
Work by Peter Kirlew | Life science data repositories | http://www.istl.org/11-spring/refereed1.html
Columbia University | Data repositories in astronomy, biological sciences, chemistry, earth and climate sciences, marine data, and social sciences | http://scholcomm.columbia.edu/data-management/data-repositories/
Narcis | Portal of repositories of the Netherlands; 24,207 datasets at the time of consultation | http://www.narcis.nl/
British University - Dryad | Data from peer-reviewed research in bioscience; 1605 packages and 4018 data files | http://www.datadryad.org/
Cornell University - DataStar | Repository of experimental data | http://datastar.mannlib.cornell.edu/
PDB/Protein Data Bank | Bioinformatics | http://www.rcsb.org/pdb/home/home.do
GSA Data Repository | Geology | http://www.geosociety.org/pubs/drpint.htm
Cornell University Library | Physics | http://arxiv.org/
ChemxSeer | Chemistry | http://chemxseer.ist.psu.edu/
Organic Eprints | Agriculture | http://www.orgprints.org/
GIS Data Resources | Geospatial | http://www.gis2gps.com/GIS/gisdata/gisdata.html
American Mathematical Society | Mathematics | http://www.ams.org/global-preprints/
SeaDataNet / NOAA | Marine data | http://www.seadatanet.org/ or http://www.nodc.noaa.gov/
Repositories of Archaeology Data | Archaeology | http://openarchaeologydata.metajnl.com/repositories/
DataCite | British institute for access to research data | http://www.datacite.org/repolist
Australian Partnership for Sustainable Repositories | Centre for the management of academic data in digital format | http://www.apsr.edu.au/
Worldwide Open Access Directory | List of data repositories | http://oad.simmons.edu/oadwiki/Data_repositories

Table 1. List of Data Projects and Repositories

In 2012, Thompson classified Engineering into four main or basic disciplines (15): Chemical Engineering, Civil Engineering, Electrical Engineering, and Mechanical Engineering. The author then explains how the other existing Engineering fields are subdisciplines and interdisciplines (Computer Engineering, Materials Engineering, Electronic Engineering, Industrial Engineering, Agronomic Engineering, etc.). Starting from this statement, a search for the main disciplines of Engineering was carried out in the OpenDOAR repository directory, analyzing the repositories classified under the basic disciplines and those that have datasets as content.

Table 2 presents the relationship of the repositories between the four basic disciplines and datasets. This table was built from a review of how many repositories were working or active. It shows a weakness in the real number of valid repositories, the scarce existence of clear policies for each of these disciplines, and the absence of metadata standards, characteristics proper to the entities (papers, datasets, books, etc.) that allow them to be identified for searching, storage, and retrieval.

Figure 2 shows the different areas represented in the directory. Notably, the four basic disciplines according to Thompson have 153 repositories, distributed as 63 in Chemical Engineering, 39 in Mechanical Engineering, 29 in Electrical Engineering, and 22 in Civil Engineering. In Figure 3, by content type, Journal Articles have 1467 repositories (67.20%) and Datasets have 79 repositories (3.62%). These concrete figures confirm the small presence of those disciplines in the world's repositories, and likewise of datasets.

Fig. 2. Areas in OpenDOAR
Fig. 3. Content Types in OpenDOAR

In contrast, in the other areas mentioned below, the community of users and related institutions have developed their own metadata standards, helping to consolidate data repositories:
• Darwin Core (Biology)
• DDI (Data Documentation Initiative for Social and Behavioral Sciences Data)
• DIF (Directory Interchange Format for Scientific Data)
• EML (Ecological Metadata Language)
• FGDC/CSDGM (Geographic Data)
• NBII (National Biological Information Infrastructure)
• MIAME (Minimum Information About a Microarray Experiment)
• MINSEQE (Minimum Information about a high-throughput SeQuencing Experiment)

Classification | Total Repositories | Repositories with Dataset Content | Percentage with Datasets | Not Working | Percentage Not Working
Civil Eng. | 22 | 2 | 9.09% | 4 | 18.18%
Chemical Eng. | 63 | 7 | 11.11% | 21 | 33.33%
Electrical Eng. | 29 | 0 | 0.00% | 7 | 24.14%
Mechanical Eng. | 39 | 4 | 10.26% | 9 | 23.08%
Totals | 79 | 15 | 18.99% | 5 | 6.33%

Table 2. Relationship between the Basic Engineering Disciplines and Datasets

REFERENCES
(1) Piwowar, H.A. Who Shares? Who Doesn't? Factors Associated with Openly Archiving Raw Research Data. PLoS ONE, vol. 6, no. 7, p. e18657, Jul. 2011.
(2) OpenDOAR. OpenDOAR - Home Page - Directory of Open Access Repositories, 2012. [Online].
(3) Torres-Salinas, D., Robinson-García, N., Cabezas-Clavijo, A. Compartir los datos de investigación en ciencia: introducción al data sharing. Profesional de la Información, vol. 21, no. 2, pp. 173-184, 2012.
(4) Hrynaszkiewicz, I., Norton, M.L., Vickers, A.J., Altman, D.G. Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers. BMJ, vol. 340, p. c181, Jan. 2010.
(5) Crawford, T., Marr, J., Partridge, B., Strauss, M.A. VLA Observations of Ultraluminous IRAS Galaxies: Active Nuclei or Starbursts? The Astrophysical Journal, vol. 460, p. 225, Mar. 1996.
(6) Piwowar, H.A., Day, R.S., Fridsma, D.B. Sharing Detailed Research Data Is Associated with Increased Citation Rate. PLoS ONE, vol. 2, no. 3, p. e308, Mar. 2007.
(7) Data's shameful neglect. Nature, vol. 461, no. 7261, pp. 145-145, Sep. 2009.
(8) Hanson, B., Sugden, A., Alberts, B. Making Data Maximally Available. Science, vol. 331, no. 6018, pp. 649-649, Feb. 2011.
(9) comScore, Inc. - Measuring the Digital World. [Online]. Available: http://www.comscore.com/. [Accessed: 21-May-2012].
(10) OpenDOAR - Home Page - Directory of Open Access Repositories. [Online].
(11) ROAR. Registry of Open Access Repositories (ROAR), 2012. [Online]. Available: http://roar.eprints.org/. [Accessed: 21-Jun-2012].
(12) OAI Registry at UIUC (searchform.asp). [Online]. Available: http://gita.grainger.uiuc.edu/registry/. [Accessed: 21-May-2012].
(13) Suber, P. Open access, impact, and demand. BMJ, vol. 330, no. 7500, pp. 1097-1098, May 2005.
(14) Castells, M. The Power of Identity: The Information Age: Economy, Society, and Culture. John Wiley & Sons, 2011.
(15) Frodeman, R., Klein, J.T., Mitcham, C. The Oxford Handbook of Interdisciplinarity, Reprint. Oxford University Press, USA, 2012.
(16) Barton, M., Waters, M. Cómo crear un repositorio institucional. Manual LEADIRS II. MIT Libraries, 2004.
(17) Rieland, R. Big Data or Too Much Information? Smithsonian magazine. [Online]. Available: http://blogs.smithsonianmag.com/ideas/2012/05/big-data-or-too-much-information/. [Accessed: 12-May-2012].
(18) Prensky, M. Students as designers and creators of educational computer games: Who else? British Journal of Educational Technology, vol. 39, no. 6, pp. 1004-1019, 2008.
(19) Castro, C., Ferreira, S.A., Andrade, A. Repositories of Digital Educational Resources in Portugal in the elementary and secondary education. 2011, pp. 1-7.
(20) Xia, J., Opperman, D.B. Current Trends in Institutional Repositories for Institutions Offering Master's and Baccalaureate Degrees. Serials Review, vol. 36, no. 1, pp. 10-18, Mar. 2010.
(21) Data Curation | www.lis.illinois.edu. [Online]. Available: http://www.lis.illinois.edu/academics/programs/ms/data_curation. [Accessed: 21-May-2012].
(22) D2C2 - Distributed Data Curation Center. [Online]. Available: http://d2c2.lib.purdue.edu/. [Accessed: 21-May-2012].
(23) About | Data Curation Profiles. [Online]. Available: http://datacurationprofiles.org/about. [Accessed: 21-May-2012].
(24) DigCCurr Carolina Digital Curation Curriculum Project. [Online]. Available: http://www.ils.unc.edu/digccurr/aboutII.html. [Accessed: 21-May-2012].
(25) LogEc Toplisting. [Online]. Available: http://logec.repec.org/scripts/itemstat.pf?type=redif-paper;sortby=3d. [Accessed: 21-May-2012].
[]
[ "Time-and memory-efficient representation of complex mesoscale potentials", "Time-and memory-efficient representation of complex mesoscale potentials" ]
[ "Grigory Drozdov ", "Igor Ostanin ", "Ivan Oseledets " ]
[]
[]
We apply the modern technique of approximation of multivariate functions, tensor train cross approximation, to the problem of the description of physical interactions between complex-shaped bodies in a context of computational nanomechanics. In this note we showcase one particular example, van der Waals interactions between two cylindrical bodies, relevant to modeling of carbon nanotube systems. The potential is viewed as a tensor (multidimensional table) which is represented in compact form with the help of tensor train decomposition. The described approach offers a universal solution for the description of van der Waals interactions between complex-shaped nanostructures and can be used within the framework of such systems of mesoscale modeling as the recently emerged mesoscopic distinct element method (MDEM).
10.1016/j.jcp.2017.04.056
[ "https://arxiv.org/pdf/1611.00605v3.pdf" ]
15,884,605
1611.00605
fd5750aa3c4a09aebe6ab6a8c23be1f69bbdde3a
Time- and memory-efficient representation of complex mesoscale potentials

Grigory Drozdov, Igor Ostanin, Ivan Oseledets

May 2017

We apply the modern technique of approximation of multivariate functions, tensor train cross approximation, to the problem of the description of physical interactions between complex-shaped bodies in the context of computational nanomechanics. In this note we showcase one particular example, van der Waals interactions between two cylindrical bodies, relevant to the modeling of carbon nanotube systems. The potential is viewed as a tensor (multidimensional table) which is represented in compact form with the help of the tensor train decomposition. The described approach offers a universal solution for the description of van der Waals interactions between complex-shaped nanostructures and can be used within the framework of such systems of mesoscale modeling as the recently emerged mesoscopic distinct element method (MDEM).

Introduction

Mesoscale coarse-grained mechanical modeling [1-4] is an important tool for in silico characterization of nanostructures. The typical coarse-grained modeling approach is based on the idea of representing a nanostructure (e.g. a nanotube, a nanoparticle, etc.) as a set of elements that are treated either as point masses or as solid bodies (rigid or deformable), interacting via bonding or non-bonding potentials. Compared to regular molecular dynamics (MD) approaches, such models are as scalable as MD while offering much higher computational efficiency. For example, they can be used for accurate mechanical modeling of representative volume elements of complex nanostructured materials. However, such models require an efficient and reliable description of van der Waals (vdW) interaction potentials between complex-shaped elements. Such potentials depend on multiple parameters.
For example, the interaction between two rigid bodies of fixed size requires tabulation of a six-dimensional array with good precision. The description of interactions between bodies of variable size, which is required for adaptive models, or interactions between deformable elements, requires even more degrees of freedom and leads to larger multidimensional arrays of data. This makes simple tabulation of such potentials impractical, since the time to construct such a table and the memory to store it grow exponentially with the number of dimensions (the problem known as the "curse of dimensionality"). The approach often used in the field is to design relatively simple analytical approximations for such interaction potentials, representing the sought potentials as separable functions [2,3]. Such an approach may be efficient in some situations, but in many cases it leads to over-simplifications and incorrect mechanics on the larger scale. In this work we suggest a reliable and universal approach for storing arbitrarily complex mesoscale potentials as compressed multidimensional tables (following the terminology accepted in the data science and computational linear algebra communities, such tables will be referred to as tensors below). The compression is achieved via tensor-train cross approximation [6,7], which allows one to construct an approximation of a tensor with the desired precision using a relatively small number of explicitly computed potential values. Such approaches were applied earlier in the field of computational chemistry [8]; however, they are still widely unknown to the community of multiscale mechanical modeling. In order to demonstrate the pipeline of tensor approximation in application to mesoscale models, we consider the relatively simple yet practically important problem of the vdW interaction between two equal-sized cylinders. This problem is important in the context of mesoscale mechanical modeling of large assemblies of carbon nanotubes (CNTs).
As has been shown earlier, the use of simple pair potentials for the description of intertube interactions between cylindrical CNT segments leads to significant artifacts in the model's behavior. In previous works [2,3] the problem was addressed with sophisticated analytical approximations. However, these approximations cannot be transparently generalized to more complex situations: CNTs of different diameters, curved CNT segments, etc. The approach presented here does not suffer from such a lack of generality and can be used in a number of similar problems of describing interactions between complex-shaped nanostructures.

Method

In this section we describe our technique for constructing the compressed table of the interaction potential between complex-shaped bodies with the example of vdW interactions between two equal-sized cylindrical segments of CNTs. Following the coarse-graining approach developed in [3,4], we idealize CNT segments as interacting cylindrical surfaces of uniform density. The total vdW interaction between two segments is found via integration of the standard Lennard-Jones (LJ) potential:

u_LJ(r) = 4ε [ (σ/r)^12 − (σ/r)^6 ]    (1)

where r is the distance between particles, σ·2^{1/6} is the equilibrium distance, and −ε is the energy at the equilibrium distance. For the carbon-carbon interaction we accept σ = 3.851 Å, ε = 0.004 eV. In order to avoid integrating the artificial singular part of the LJ potential, we replace the potential in the near-singular region (0, r_0), r_0 = 3.8 Å, with a cubic spline u(r) satisfying u(0) = 3 eV, u'(0) = 0, u(r_0) = u_LJ(r_0), u'(r_0) = u'_LJ(r_0). We assume that carbon atoms are uniformly distributed over the surfaces of the cylinders with the surface density ρ = 4/(3√3 a_{C−C}^2), where a_{C−C} = 1.42 Å is the equilibrium carbon-carbon bond length.
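The regularized potential just described can be sketched in a few lines of Python; the spline coefficients follow directly from the four boundary conditions listed above (the function and variable names are ours, and the snippet assumes array inputs):

```python
import numpy as np

# Constants from the text: sigma = 3.851 A, eps = 0.004 eV, r0 = 3.8 A,
# spline boundary values u(0) = 3 eV and u'(0) = 0.
SIGMA, EPS, R0, U0 = 3.851, 0.004, 3.8, 3.0

def u_lj(r):
    """Standard 12-6 Lennard-Jones potential, Eq. (1)."""
    s6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (s6 ** 2 - s6)

def du_lj(r):
    """Derivative du_LJ/dr."""
    s6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (-12.0 * s6 ** 2 + 6.0 * s6) / r

# Cubic spline u(r) = U0 + c r^2 + d r^3 on (0, r0): u(0) = U0 and
# u'(0) = 0 kill the constant and linear terms; c and d follow from
# matching the value and slope of the LJ potential at r0.
c, d = np.linalg.solve(
    np.array([[R0 ** 2, R0 ** 3], [2.0 * R0, 3.0 * R0 ** 2]]),
    np.array([u_lj(R0) - U0, du_lj(R0)]),
)

def u_regularized(r):
    """LJ potential with the near-singular region (0, r0) splined out."""
    r = np.asarray(r, dtype=float)
    out = np.empty_like(r)
    inner = r < R0
    out[inner] = U0 + c * r[inner] ** 2 + d * r[inner] ** 3
    out[~inner] = u_lj(r[~inner])
    return out
```

By construction the piecewise potential matches u_LJ in value and slope at r_0, so the integrand used in the next section stays smooth.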
The potential between cylindrical segments of nanotubes is then represented as an integral over the surface of each cylinder:

U_t = ∫_{S_1} ∫_{S_2} ρ^2 u_LJ(r) dS_1 dS_2    (2)

where S_1 and S_2 are the side surfaces of the cylinders, dS_1 and dS_2 are the surface elements, and r is the distance between dS_1 and dS_2. The shapes of the function r for the different parametrizations are given in the Appendix. In order to describe the mutual position and orientation of two cylinders one needs four independent variables: the six variables for general rigid bodies are reduced by two due to the axial symmetries of the cylinders. Since the choice and order of these four variables is important for the approximation technique we intend to use, we compare two different parameterizations. The first one includes one distance and three angles (R, α_1, α_2, α_12; Fig. 1(a)), whereas the second utilizes three distances and one angle (t_1, t_2, H, γ; Fig. 1(b)). For both choices of independent variables, we specify a regular grid in four-dimensional space with the appropriate tabulation limits. We sample n points along each independent variable, which results in n^4 tabulated potential values. Clearly, it is impossible to perform such expensive calculations "on the fly" for millions of interacting bodies. Therefore, within the direct approach we have to tabulate this potential function on a dense multidimensional grid. This is possible for our rather simple example, but becomes computationally prohibitive in terms of required time and memory for more complex structures and corresponding potentials. Below we describe a general approach that is capable of computing such multidimensional tables fast and storing them in a compact form. Our approach is based on the tensor train (TT) decomposition and tensor train cross approximation [6,7]. In the TT format each element of a d-dimensional tensor U(i_1, i_2, ..., i_d) is represented as a product of d matrices:

U(i_1, i_2, ..., i_d) = Σ_{α_0, α_1, ..., α_d} G_1(α_0, i_1, α_1) G_2(α_1, i_2, α_2) ... G_d(α_{d−1}, i_d, α_d)    (3)

Here the indices α_k range from 1 to r_k, the values r_k being called TT ranks. Each matrix G_k(α_{k−1}, i_k, α_k) has size r_{k−1} × r_k and depends on only one index of the original tensor; these matrices form the three-dimensional tensors called the cores of the TT decomposition. r_0 and r_d are set to 1. The intermediate TT ranks r_k are determined by the TT approximation procedure to provide a given approximation precision ε_tt and are conditioned by the structure of the approximated tensor. Since tensors generated by the discretization of physically meaningful functions are typically either low-rank or well approximated by low-rank tensors, the decomposition (3) provides a powerful tool to represent such tensors in a compact form. The technique for constructing the best approximation in the TT format with a given precision is described in [7]. In this work we use the standard package TT-toolbox for the construction of the low-rank approximation of our potential. In application to mesoscale modeling, the TT decomposition can be used in two different ways: the first is to accelerate the construction of a full potential table, and the second is to store it in a compact form. In the latter case the calculation of a single value of the potential at a nodal point requires the computation of d matrix-vector products with matrix dimensions r_{k−1} × r_k.

Results and discussion

We constructed TT decompositions of the tensor of cylinder-cylinder interactions for a few different values of mesh refinement and accuracy. The potential of interaction between two cylindrical segments of (10,10) CNTs with radius R_c = 6.78 Å and height 2R_c = 13.56 Å was considered. The four-dimensional integral (2) was computed using regular rectangle quadratures over the azimuthal angles of the cylinders, and Gauss quadratures over the cylinders' axial directions.
The number of integration points was chosen adaptively to provide the desired integration precision ε_i = 10^{-3}. In order to keep our computations fast, we restricted ourselves to the best integration precision ε_i = 10^{-3}. In our case of a four-dimensional tensor, the TT decomposition contains two matrices and two three-dimensional tensors as TT cores. The tensor indices correspond to the sampling of the continuous potential function on a regular grid: U(i_1, i_2, ..., i_d) = U_t(x_1, x_2, ..., x_d), where x_d = x_d^{min} + i/n · (x_d^{max} − x_d^{min}). The tabulation limits (x_d^{min}, x_d^{max}) for both parametrizations are given in Table 1. We define the compression of the tensor as the ratio of the number of elements in the initial tensor to the number of elements in all the cores of the resulting TT decomposition.

Parametrization 1: R ∈ (2R_c, 5R_c); α_1 ∈ (0, π); α_2 ∈ (0, π); α_12 ∈ (0, π)
Parametrization 2: H ∈ (2R_c, 5R_c); t_1 ∈ (−5R_c, 5R_c); t_2 ∈ (−5R_c, 5R_c); γ ∈ (0, π)

The TT compression was performed for the two different parameterizations of the cylinders' mutual position and for different orders of the variables (the TT decomposition is not invariant with respect to permutation of the tensor indices; our results are given for TT decompositions with the permutation of variables providing the lowest ranks of the TT cores and the best compression). Tables 2 and 3 give the compression obtained for the first and second parametrizations, respectively, as a function of the grid refinement (number of points per dimension) and the requested approximation precision ε_tt. It appears that even a black-box application of the TT algorithm gives a significant compression of the original tensor (which also roughly corresponds to the time needed to construct this tensor approximation). Figure 2 illustrates the quality of the approximation of a complex potential relief at a few cross-sections that correspond to in-plane (a) and out-of-plane (b) rotation of one cylinder with respect to another, for different intercenter distances.
As we can see, the compressed potential representation fully reproduce original tensor. Conclusion In this note we have demonstrated the successful application of the technique that have recently emerged in the community of computational multilinear algebra -tensor-train cross approximation -to the problem of representation of the interaction potentials in modern mesoscale models. Within this approach the multidimensional potential table can be constructed based on just few direct calculations of the potential values, which dramatically accelerates construction of such a table. In the case when memory efficient representation is required, this table can be stored in a compact form and then each required value of the potential can be recovered at a moderate computational cost. In case when performance of on-the-fly computations is the highest priority, our approach can be used for fast construction of the full multidimensional table. The instruments for such approximaiton are freely available as components of the open source software TT-toolbox (https://github.com/oseledets/ttpy), and our code, performing the approximation of cylinder-cylinder vdW interaction potential, can be found at https://bitbucket.org/iostanin/cnt_potential_tt_compression/. The resulting approximations can be easily stored in either full or compact form, and further used in the framework of such existing mesoscale models as MDEM [3][4][5]. Our approach appears to be particularly useful for the problems involving interactions between complex deformable shapese.g. CNT segments with variable length, radius, curvature and flatness. 
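As a concrete illustration of the TT format of Eq. (3) and of the compression numbers reported in Tables 2 and 3, the sketch below builds cores by plain TT-SVD in NumPy. This is a didactic stand-in, not the cross approximation of [6,7] (which never forms the full tensor); all names here are ours, and the rank profile in the last line is illustrative rather than taken from the paper:

```python
import numpy as np

def tt_svd(tensor, eps=1e-12):
    """TT-SVD: sequential truncated SVDs producing cores G_k of shape
    (r_{k-1}, n_k, r_k) with r_0 = r_d = 1, as in Eq. (3)."""
    dims, d = tensor.shape, tensor.ndim
    cores, r = [], 1
    mat = tensor.reshape(r * dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int((s > eps * s[0]).sum()))  # simple relative cutoff
        cores.append(u[:, :rank].reshape(r, dims[k], rank))
        mat = (s[:rank, None] * vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

def tt_element(cores, idx):
    """One tensor entry as a chain of small matrix products (Eq. (3))."""
    v = cores[0][:, idx[0], :]
    for core, i in zip(cores[1:], idx[1:]):
        v = v @ core[:, i, :]
    return float(v[0, 0])

def tt_compression(n, ranks):
    """TT-core storage as a percentage of the full n^d table, for TT
    ranks [r_0, ..., r_d]; this is the metric of Tables 2 and 3."""
    core_elems = sum(n * ranks[k] * ranks[k + 1] for k in range(len(ranks) - 1))
    return 100.0 * core_elems / n ** (len(ranks) - 1)

# Illustrative rank profile (not from the paper):
print(tt_compression(10, [1, 2, 2, 2, 1]))  # -> 1.2
```

For the actual potential one would evaluate U_t only at the entries requested by the cross-approximation algorithm, which is exactly what makes the approach cheap on fine grids.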
Figure 1: Two possible parameterizations of the mutual position of two cylinders.

Figure 2: The potential in the first parametrization (U(R, α_1, α_2, α_12), solid red) and its TT approximation (dotted blue, n = 50, ε_tt = 10^{-2}), as a function of one of the parametrization angles and a few different values of R: (a) α_1 ∈ (0, π), α_2 = 0, α_12 = 0, R = 19.8, 20.6, 21.4, 22.2; (b) α_12 ∈ (0, π), α_1 = 0, α_2 = 0, R = 14.4, 14.8, 15.2, 15.6.

Table 1: Tabulation limits for the two parametrizations used (Parametrization 1: R, α_1, α_2, α_12; Parametrization 2: H, t_1, t_2, γ).

Table 2: The compression for the first parameterization.
                 TT approximation precision ε_tt
                 10^-3                      10^-2
n    Max rank    Compression, %   Max rank  Compression, %
10   20          29.4             13        18.2
20   41          13.6             16        3.4
30   47          5.3              16        1
40   48          2.4              16        0.4

Table 3: The compression for the second parameterization.
                 TT approximation precision ε_tt
                 10^-3                      10^-2
n    Max rank    Compression, %   Max rank  Compression, %
10   27          44.8             19        30
20   65          24.7             33        10.2
30   85          12.1             38        3.4
40   96          6.3              39        1.5

Acknowledgements

The authors gratefully acknowledge partial financial support from the Russian Foundation for Basic Research under grants RFBR 16-31-00429, RFBR 16-31-60100. This work was partially supported by an Early Stage Innovations grant from

Appendix

Here we give the explicit shapes of the function r for the two parameterizations used. In both cases the distance depends on the four variables describing the mutual position of the two cylinders, and on four more variables, the axial coordinates z_1, z_2 and the azimuthal angles ϕ_1, ϕ_2, describing the positions of the surface elements dS_1 and dS_2 on the cylinders' side surfaces.
First parameterization:

r(dS_1, dS_2) = r(R, α_1, α_2, α_12, z_1, z_2, ϕ_1, ϕ_2) =
  { [ R sin α_1 + z_2 (cos α_2 sin α_1 − cos α_1 cos α_12 sin α_2)
      + R_c (cos ϕ_2 sin α_1 sin α_2 + cos α_1 cos α_12 cos α_2 cos ϕ_2 − cos α_1 sin α_12 sin ϕ_2 − cos ϕ_1) ]^2
  + [ R cos α_1 + z_2 (cos α_1 cos α_2 + cos α_12 + cos α_12 sin α_1 sin α_2)
      + R_c (cos α_1 cos ϕ_2 sin α_2 − cos α_12 cos α_2 cos ϕ_2 sin α_1 + sin α_1 sin α_12 sin ϕ_2) − z_1 ]^2 }^{1/2}

Second parameterization:

r(dS_1, dS_2) = r(t_1, t_2, H, γ, z_1, z_2, ϕ_1, ϕ_2) =
  { [ (t_2 + z_2) sin γ + R_c (sin ϕ_1 − cos γ sin ϕ_2) ]^2
  + [ H + R_c (cos ϕ_2 − cos ϕ_1) ]^2
  + [ (t_2 + z_2) cos γ − R_c sin γ sin ϕ_2 − (t_1 + z_1) ]^2 }^{1/2}

References

[1] Buehler, M.J., 2006. Mesoscale modeling of mechanics of carbon nanotubes: self-assembly, self-folding, and fracture. J. Mater. Res. 21, 2855-2869.
[2] Volkov, A.N., Zhigilei, L.V., 2010. Mesoscopic interaction potential of carbon nanotubes of arbitrary length and orientation. J. Phys. Chem. C 114, 5513-5531.
[3] Ostanin, I., Ballarini, R., Potyondy, D., Dumitrica, T., 2013. A distinct element method for large scale simulations of carbon nanotube assemblies. Journal of the Mechanics and Physics of Solids 61(3), 762-782.
[4] Ostanin, I., Ballarini, R., Dumitrica, T., 2014. Distinct element method modeling of carbon nanotube bundles with intertube sliding and dissipation. Journal of Applied Mechanics 81(6), 061004.
[ "https://github.com/oseledets/ttpy" ]
[ "RADIO TRANSIENT FOLLOWING FRB 150418: AFTERGLOW OR COINCIDENT AGN FLARE?", "RADIO TRANSIENT FOLLOWING FRB 150418: AFTERGLOW OR COINCIDENT AGN FLARE?" ]
[ "Ye Li \nDepartment of Physics and Astronomy\nUniversity of Nevada\nLas Vegas89154NVUSA\n", "Bing Zhang \nDepartment of Physics and Astronomy\nUniversity of Nevada\nLas Vegas89154NVUSA\n" ]
[ "Department of Physics and Astronomy\nUniversity of Nevada\nLas Vegas89154NVUSA", "Department of Physics and Astronomy\nUniversity of Nevada\nLas Vegas89154NVUSA" ]
[]
Recently, Keane et al. reported the discovery of a fading radio transient following FRB 150418, and interpreted it as the afterglow of the FRB. Williams & Berger, on the other hand, suggested that the radio transient is analogous to a group of variable radio sources, so that it could be a coincident AGN flare in the observational beam of the FRB. A new observation with the VLA showed a re-brightening, which is consistent with the AGN picture. Here, using the radio survey data of Ofek et al., we statistically examine the chance coincidence probability to produce an event like the FRB 150418 transient. We find that the probabilities to produce a variable radio transient with at least the same variability amplitude and signal-to-noise ratio as the FRB 150418 transient, without and with the VLA point, are P1 ∼ 6 × 10^-4 and P1 ∼ 2 × 10^-3, respectively. In addition, the chance probability to have a fading transient detected following a random time (FRB time) is less than P2 ∼ 10^-2.9±1.3. Putting these together and assuming that the number of radio sources within one Parkes beam is 16, the final chance coincidence probability of having an FRB 150418-like radio transient that is unrelated to the FRB is < 10^-4.9±1.3 and < 10^-4.4±1.3, respectively, without and with the VLA point. We conclude that the radio transient following FRB 150418 has a low probability of being an unrelated AGN flare, and the possibility of being the afterglow of FRB 150418 is not ruled out.
null
[ "https://arxiv.org/pdf/1603.04825v1.pdf" ]
118,421,483
1603.04825
d4b5f13bfdd9f0f7d940529bba362acdf2300878
RADIO TRANSIENT FOLLOWING FRB 150418: AFTERGLOW OR COINCIDENT AGN FLARE?

Ye Li, Department of Physics and Astronomy, University of Nevada, Las Vegas, NV 89154, USA
Bing Zhang, Department of Physics and Astronomy, University of Nevada, Las Vegas, NV 89154, USA

arXiv:1603.04825v1 [astro-ph.HE], 15 Mar 2016
Draft version February 20, 2018
Preprint typeset using LaTeX style emulateapj v. 01/23/15
We conclude that the radio transient following FRB 150418 has a low probability of being an unrelated AGN flare, and the possibility of being the afterglow of FRB 150418 is not ruled out.

1. INTRODUCTION

Fast Radio Bursts (FRBs) are bright millisecond radio transients with dispersion measures (DM) much larger than the values expected for the Milky Way galaxy, so that they are expected to have an extragalactic origin (Lorimer et al. 2007; Keane et al. 2011; Thornton et al. 2013; Spitler et al. 2014; Burke-Spolaor & Bannister 2014; Champion et al. 2015; Masui et al. 2015; Ravi et al. 2015; Petroff et al. 2015; Keane et al. 2016). However, due to the large positional uncertainty (∼ 14 arcminutes), no FRB has previously been well located to allow a distance measurement. Recently, Keane et al. (2016) reported the discovery of a radio transient following FRB 150418 starting from 2 hours after the FRB, which faded away in 6-8 days. They claimed that the radio transient is the afterglow of the FRB, and used it to identify an elliptical galaxy at z = 0.492 ± 0.002. The measured baryon number density Ω_b based on this redshift is consistent with the value inferred from the CMB data, lending indirect support to the association. If such an association is established, the brightness of the radio afterglow is consistent with that of a double compact star merger system (Zhang (2016) and references therein). However, shortly after the publication of the discovery, Williams & Berger (2016) suggested that the chance probability of having a variable radio source within one Parkes beam is of the order of unity, and argued that the so-called afterglow of FRB 150418 discovered by Keane et al. (2016) could be simply an unrelated AGN radio flare in spatial coincidence with the FRB. Williams et al. (2016) further observed the source and claimed a re-brightening of the host, which strengthened the AGN interpretation.
Here we statistically examine the chance probability of having a transient source similar to the putative afterglow of FRB 150418. We notice the following important facts: 1. The first two data points at 5.5 GHz of Keane et al. (2016) are 0.27(5) mJy per beam and 0.23(2) mJy per beam, respectively, which are a factor of 2.5-3 times the claimed "quiescent" flux of 0.09 mJy per beam at later times, as shown in the left panel of Figure 1. The standard deviation is too large for a flux fluctuation due to scintillation, suggesting that these two points have to be related to a flaring object. 2. The flux is fading during the first 6-8 days after the FRB. Our simulations intend to address the probability of having a variable source with such a large flux variability in the FRB observation beam (Sect. 2), and the probability to have a bright flare that is fading from 2 hours to 8 days after the FRB (Sect. 3). We find a very small overall probability. We also compare some short GRB radio afterglows with the FRB 150418 transient, and find that the probabilities for some short GRB afterglows to be confused with AGN flares due to chance coincidence could be comparable to that of the putative afterglow of FRB 150418 (Sect. 4). Putting it together, we argue that the possibility that the radio transient is the intrinsic afterglow of the FRB is not ruled out.

2. SPATIAL AND FLUX COINCIDENCE PROBABILITY

Williams & Berger (2016) argued that the probability of finding a radio source in each Parkes beam is Nr = 16, and that of finding a variable radio source is ∼ 0.6. However, the true question is: what is the probability of finding a variable radio source with the amplitude and S/N at least those of the transient following FRB 150418? To address this question, similar to Williams & Berger (2016), we make use of the 5 GHz survey data with the Very Large Array (VLA) and the expanded VLA by Ofek et al. (2011). There are 464 sources in the main catalog of Ofek et al. (2011).
For each source, there are 14-16 observations (see Table 6 of Ofek et al. (2011), which lists all the observation results). In order to compare with the observations of the FRB 150418 afterglow candidate (Keane et al. 2016), we randomly select the same number of observational epochs as FRB 150418 for each source, and calculate the relative standard deviation STD/⟨f⟩ as well as the median signal-to-noise ratio S/N. We repeat this 1000 times for each source, so that we have 464,000 simulated mock observations. Similar to Fig. 2 of Williams & Berger (2016), we present the S/N - STD/⟨f⟩ two-dimensional distribution of these mock events in the right panel of Figure 1. For the sake of clear presentation, only a random set of 4,640 mock events is shown. For comparison, the FRB afterglow candidate, which has STD/⟨f⟩ = 0.54 and median S/N = 5.4 with the observational data of Keane et al. (2016), is also marked as the red star in the right panel of Fig. 1. Williams et al. (2016) reported an observation of the FRB host on 2016 Feb 27 and 28, which has a flux of 0.157 ± 0.006 mJy/beam. By including this point, the FRB afterglow candidate has STD/⟨f⟩ = 0.48 and median S/N = 5.5. It is shown as the orange star in the right panel of Fig. 1. An immediate observation from Fig. 1 is that a large STD/⟨f⟩ tends to appear for small S/N values. At the S/N of FRB 150418, the observed STD/⟨f⟩ in general is much smaller than that of the FRB 150418 transient. Williams & Berger (2016) argued that the transient source is consistent with the distribution of the Ofek et al. (2011) sources. However, we argue that it is more important to check the chance probability of having a variable source with both STD/⟨f⟩ and S/N at least the values inferred from the FRB 150418 transient. From our mock sample, the fraction of events that have STD/⟨f⟩ larger than STD/⟨f⟩_FRB and median S/N larger than S/N_FRB for the Keane et al. (2016) data only (without the late VLA point of Williams et al. (2016)) turns out to be P1 ∼ 6 × 10^-4.
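The resampling procedure described above can be sketched as follows. This is an illustrative, runnable sketch only: the flux array is synthetic stand-in data rather than the actual Ofek et al. (2011) catalog, and the cadence of five epochs is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def variability_stats(fluxes, errors, n_epochs=5, n_trials=1000):
    """For one catalog source, draw random subsets of n_epochs observations
    and compute (STD/<f>, median S/N) for each draw, mimicking the FRB
    150418 observing cadence."""
    fluxes, errors = np.asarray(fluxes), np.asarray(errors)
    out = np.empty((n_trials, 2))
    for i in range(n_trials):
        idx = rng.choice(fluxes.size, size=n_epochs, replace=False)
        f, e = fluxes[idx], errors[idx]
        out[i] = f.std(ddof=1) / f.mean(), np.median(f / e)
    return out

# Synthetic stand-in for one source with 15 observing epochs:
f = rng.normal(1.0, 0.2, size=15)
e = np.full(15, 0.05)
stats = variability_stats(f, e)

# P1-style fraction: draws at least as variable AND as significant as the
# FRB 150418 transient (STD/<f> >= 0.54, median S/N >= 5.4):
p1 = np.mean((stats[:, 0] >= 0.54) & (stats[:, 1] >= 5.4))
```

Run over all 464 catalog sources, this is exactly the 464,000-draw mock sample from which P1 is read off.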
It indicates that the average number of events which could be as bright as the FRB transient in the Parkes beam by chance is Nv = Nr P1 ∼ 0.009. Adding the latest VLA observational point (Williams et al. 2016), this fraction is increased to P1 ∼ 2 × 10^-3, and the variable number becomes Nv = Nr P1 ∼ 0.03¹. Therefore, even if variable radio sources may be common, the chance probability of having a high-variability radio transient similar to the putative FRB 150418 afterglow within the FRB 150418 Parkes beam is small. We also try to compare the FRB afterglow candidate with Mooley et al. (2016), who monitored a larger sky area and have more radio sources in their catalog. Since there are only two observational points on a week timescale for each source in Mooley et al. (2016), we choose the first observational point of Keane et al. (2016), i.e. 0.27 ± 0.05 mJy, and the quiescent flux, 0.09 ± 0.02 mJy, to compare with those in Mooley et al. (2016). Using the distribution of m = Δf/⟨f⟩, analogous to STD/⟨f⟩_FRB, and Vs = Δf/σ, analogous to the median S/N, as shown in Figure 10 of Mooley et al. (2016), we find that there is no source in Mooley et al. (2016) that is as significant as this FRB afterglow candidate. If we change the second point to 0.11 ± 0.02, the fraction of sources as significant as the FRB afterglow candidate is 0.001. This is consistent with our previous result.

¹ Since the last data point was detected by the VLA while the other data points were detected with ATCA, there might be some systematic errors introduced by cross-calibration between different telescopes.

3. TEMPORAL COINCIDENCE PROBABILITY

The existence of an event with a similar STD/⟨f⟩ and median S/N to the FRB 150418 transient does not necessarily explain the observation. One important, intriguing fact is that the radio source was fading during the span of the 5 ATCA observations (left panel of Fig. 1), which have observational epochs at t0 + [0.09, 5.9, 7.9, 78.7, 193.4] days after the FRB time t0, respectively, for the Keane et al. (2016) observation. Assuming that during 190 days there was only one bright transient (i.e. there is no variability during the last three observational epochs), the duty cycle of flares may be estimated as ∼ 8 days / 190 days ∼ 0.04. The chance of having the first observation be the brightest point is P2 ∼ 0.09/190 ∼ 5 × 10^-4. One may argue that the source may be variable all the time, and that the first observation may have missed even brighter phases at earlier times. In order to examine these probabilities, we perform a Monte Carlo simulation using the most conservative approach by assuming that the source has a variability duty cycle of 100%². We assume that the source varies sinusoidally with time, i.e. f = f0 + A sin(2πt/P), but with the amplitude A varying in different periods. Here f0 is the flux of the quiescent state. Since the period P is unknown, we vary it from 2 days to 100 days. By fixing a particular P, we simulate a mock light curve which lasts for 500 periods. We allow the amplitude A to be variable. For each period, it is randomly drawn based on the STD/⟨f⟩ distribution in the Ofek et al. (2011) catalog. For a sinusoidal variation, one has A = √2 STD, and ⟨f⟩ = f0. In order to be consistent with the observed FRB afterglow, the STD/⟨f⟩ distribution with median S/N = (5-6) is used. An example of a small passage of the simulated light curve is shown in the left panel of Figure 2. From each simulated light curve, we randomly pick a time t0 as the epoch of the FRB, and then pick up the fluxes at the time series t0 + [0.09, 5.9, 7.9, 78.7, 193.4] days as a simulated detection series. We require that the resulting light curve should statistically decrease with time with respect to the first point. We simulate 10^6 FRBs in each light curve and estimate the fraction of simulated detection series that satisfy the above criterion.
The fraction as a function of the assumed period is shown in the right panel of Figure 2. It can be seen that the fraction of the simulated detection series that satisfy the monotonic criterion, P2, is period dependent. In general, the larger the period is, the less likely it is to produce a detection series similar to the FRB 150418 transient. By accounting for the range of P2 introduced by the period dependence, we finally get P2 = 10^-2.9±1.3, with the largest probability P2,max ∼ 0.14 (corresponding to a period P ∼ 23 days).

[Fig. 1 caption, continued:] (Keane et al. 2016). The radio afterglow light curves of the short GRBs 050724 (green) and 130603B (blue) are also presented for comparison. Right: The relative standard deviation STD/⟨f⟩ vs. median S/N for our simulated mock observations (gray dots) compared with the FRB 150418 afterglow candidate (stars) and two short GRBs (filled circles). The red star denotes the original ATCA data (Keane et al. 2016), and the orange star also includes the latest VLA data (Williams et al. 2016). The short GRBs 050724 (green) and 130603B (blue) are also shown. For each short GRB, the small circles treat upper limits as detections assuming that the fluxes are half of the upper limits and the errors are also half of the upper limits. The large circles, on the other hand, exclude the upper limits from the calculations. The true value is likely between the two circles, as shown by the green and blue segments.

In reality the variable source is not strictly periodic. We also tried to simulate light curves with a distribution of periods within each light curve. Both a uniform distribution and a Gaussian distribution of P are tested. The mean value of P2 is slightly larger, but the scatter becomes smaller, with the largest probability P2,max ∼ 0.01.
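A minimal version of this Monte Carlo might look like the sketch below. Two simplifying assumptions are ours: the per-period amplitudes are drawn from a uniform stand-in pool rather than from the actual Ofek et al. (2011) STD/⟨f⟩ distribution, and "statistically decrease" is implemented as the strict requirement that no later epoch exceeds the first.

```python
import numpy as np

rng = np.random.default_rng(1)
EPOCHS = np.array([0.09, 5.9, 7.9, 78.7, 193.4])  # observation times, days after t0

def flux_at(t, P, f0, amp_pool):
    """Sinusoidal light curve f = f0 + A sin(2*pi*t/P), where the amplitude
    A is fixed within each period and taken from a pre-generated pool."""
    t = np.asarray(t, dtype=float)
    A = amp_pool[(t // P).astype(int) % amp_pool.size]
    return f0 + A * np.sin(2 * np.pi * t / P)

def p2(P, f0=0.1, n_frb=10_000, n_periods=500):
    """Fraction of random FRB epochs t0 for which all follow-up fluxes stay
    at or below the first epoch's flux (a strict 'fading' criterion)."""
    # Stand-in for the STD/<f> distribution: uniform relative STD per period.
    rel_std = rng.uniform(0.05, 0.5, size=n_periods)
    amp_pool = np.sqrt(2) * rel_std * f0          # A = sqrt(2) * STD
    t0 = rng.uniform(0, n_periods * P - EPOCHS[-1], size=n_frb)
    series = flux_at(t0[:, None] + EPOCHS[None, :], P, f0, amp_pool)
    return np.mean(np.all(series[:, 1:] <= series[:, :1], axis=1))
```

Scanning `p2(P)` over P from 2 to 100 days reproduces the qualitative period dependence described in the text.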
Combining all the constraints (spatial, flux, and temporal coincidence), the final chance probability for an unrelated AGN flare to mimic the putative afterglow of FRB 150418 is P = Nr P1 P2, which is ∼ 10^-4.9±1.3 for the original Keane et al. (2016) data, and is ∼ 10^-4.4±1.3 with the inclusion of the latest VLA data (Williams et al. 2016) (see Table 1). Both are very small numbers.

4. COMPARISON WITH SHORT GRB RADIO AFTERGLOWS

Since the putative afterglow of FRB 150418 has a flux comparable to that of short GRBs (Keane et al. 2016; Zhang 2016), it is worth comparing the variability properties of short GRB afterglows with the FRB 150418 transient. There are two SGRBs with at least two radio afterglow detections. One is GRB 050724 from the radio afterglow catalog of Chandra & Frail (2012). The other one is GRB 130603B, which has two detections and two upper limits (Fong et al. 2014). Following the same procedure, we present their STD/⟨f⟩ and median S/N in Table 1. Assuming that they fall into one of the Parkes beams, we calculate the chance probability of confusing them with underlying flaring AGN radio sources in the Ofek et al. (2011) catalog. The afterglow light curves of the two short GRBs are presented along with that of FRB 150418 in the left panel of Fig. 1 (in green and blue). Upside-down triangles indicate upper limits. In the process of estimating STD/⟨f⟩ and median S/N for the two short GRBs, we treat the upper limits in two different ways: one is to include them by assuming that both the detection values and the errors are half of the upper limit values; the other is to exclude the upper limits completely. These two methods define a range of STD/⟨f⟩ and median S/N, which are marked in the right panel of Fig. 1 as segments connected with two filled circles. The results with the first method are marked with small circles, and those with the second method are marked with large circles.
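The combined probabilities quoted above follow from straightforward multiplication of the three factors, which can be checked directly:

```python
import math

Nr = 16                 # estimated number of radio sources per Parkes beam
P2 = 10 ** -2.9         # central value of the temporal-coincidence probability

log_P_atca = math.log10(Nr * 6e-4 * P2)   # without the late VLA point
log_P_vla = math.log10(Nr * 2e-3 * P2)    # with the VLA point included
print(round(log_P_atca, 1), round(log_P_vla, 1))  # -> -4.9 -4.4
```

The ±1.3 spread on P2 propagates unchanged into the exponent of the final probability.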
One can see that GRB 050724 has a more significant variability and a smaller chance probability than FRB 150418. On the other hand, GRB 130603B is less variable and has a similar chance probability of being confused as a flaring source to that of FRB 150418. In general, FRB 150418 sits near the two short GRBs in the STD/⟨f⟩ - S/N space, suggesting that the data are not inconsistent with being an FRB / short GRB afterglow, as suggested by Keane et al. (2016).

5. CONCLUSIONS AND DISCUSSION

We have statistically examined the probability of having a random variable source, such as an AGN, following FRB 150418 by comparing the event with the radio variable sources presented in Ofek et al. (2011). By requiring that the coincident transient should have at least the same STD/⟨f⟩ and median S/N values as the FRB 150418 transient and that it should decay starting from the first observation, the combined spatial, flux, and temporal chance coincidence probability is < 10^-4.9±1.3 for the Keane et al. (2016) observational data only, and < 10^-4.4±1.3 with the VLA point included (Williams et al. 2016). We also show that the event is not inconsistent with a short GRB radio afterglow (with host contamination).

Note (Table 1). Column 1: FRB/GRB names; Column 2: relative standard deviation, where STD is the standard deviation and ⟨f⟩ is the average flux; Column 3: median signal-to-noise ratio; Column 4: estimated number of radio sources within one Parkes beam; Column 5: probability to reproduce one event as significant as the FRB afterglow candidate (with both STD/⟨f⟩ and median S/N larger than those of the FRB afterglow candidate) with the observational data of Ofek et al. (2011); Column 6: probability upper limit to have a fading event by chance, with a 100% duty cycle assumed; Column 7: overall probability to have the FRB afterglow candidate originating from chance coincidence.
We therefore argue that the event has a low probability of being an unrelated AGN flare, and that the possibility that the source is the intrinsic afterglow of FRB 150418 is not ruled out by the current data. Further monitoring of the source is needed to place more constraints on the afterglow and AGN possibilities. If the source later re-brightens to the level of the first two data points of Keane et al. (2016), then the afterglow scenario would be ruled out. A smaller variability, as seen by Williams et al. (2016), would not significantly alter the conclusion of this paper, since scintillation (Hughes et al. 1992; Qian et al. 1995; Goodman 1997) would explain such fluctuations as long as the emitting region has a small enough angular size. Suppose that the first two data points following FRB 150418 are indeed the afterglow of the FRB; the existence of a bright radio host is then still puzzling. One may consider the following possibilities. (1) First, the quiescent radio emission may be related to star formation in the host. Assuming that the quiescent radio emission of the host originates from star formation, the star formation rate (SFR) estimated with the radio emission is 10^1.7 M⊙ yr^-1 (Kennicutt & Evans 2012), two orders of magnitude larger than the upper limit estimated from the Hα emission line (Keane et al. 2016). It indicates an inconsistency between the emission-line-estimated SFR and the continuum-estimated SFR. Such a discrepancy is also occasionally seen in SGRB hosts. For example, the host of GRB 050509B, which is an elliptical galaxy, also shows no SFR when emission lines are used as an indicator, < 0.1 M⊙ yr^-1 (Berger 2009), while the SFR estimated from UV emission is 16.9 M⊙ yr^-1 (Savaglio et al. 2009). Although such a possibility is not ruled out, it is not favored. Within this picture, the host flux is not expected to fluctuate significantly. If the VLA rebrightening reported by Williams et al. (2016) is not due to a mis-calibration between the VLA and ATCA, it strongly disfavors this possibility.
(2) Second, the quiescent radio flux is from an underlying low-activity AGN, while the FRB is related to a compact-star-merger event (Zhang 2016) within the host galaxy of the AGN. This is not impossible, since the host galaxy appears to be an elliptical galaxy with little star formation, which is consistent with being a host of compact star mergers (Gehrels et al. 2005; Barthelmy et al. 2005; Berger 2014). The probability of having such a weak AGN host may be low, and is worth investigating. As an elliptical galaxy with a stellar mass of 10^11 M⊙ (Keane et al. 2016), the central black hole mass of the putative FRB host may be estimated using the M_BH - M* relation (Häring & Rix 2004), which gives M_BH = 10^8.2 M⊙. If the quiescent radio emission is truly from the central black hole, it indicates a radio luminosity of 4.2 × 10^39 erg/s. The X-ray luminosity anticipated from the black hole activity fundamental plane is 1.4 × 10^43 erg/s, which is smaller than the X-ray upper limit from Swift (Keane et al. 2016). Although the optical spectrum of the host disfavors an AGN at the center, the AGN possibility is supported by the consistency between its radio spectrum and those of AGNs (Vedantham et al. 2016). It may be a low-luminosity AGN or a radio analog of the X-ray bright optically normal galaxies (XBONGs) (Yuan & Narayan 2004; Maiolino et al. 2003). Deep X-ray monitoring might be the best way to investigate the AGN possibility, and the origin of the quiescent radio emission of the host. In any case, the existence of an AGN does not rule out the possibility that the first two data points are due to the FRB 150418 afterglow. (3) The third possibility is that both the afterglow and the AGN are real (similar to the second possibility), but the FRB may be related to the AGN itself. However, no viable model has been proposed for producing an FRB in an AGN environment. One difficulty would be the time scale.
Whereas the shortest time scale for a supermassive BH may be hours, the typical FRB duration is at most milliseconds. More work is needed to explore this possibility.

Fig. 1. Left: The putative radio afterglow light curve of FRB 150418 (red). The last point is the latest VLA point, while the first five points are the original ATCA points.

Fig. 2. Left: A passage of one simulated light curve. Right: The fraction of simulated detection series similar to the FRB afterglow (monotonically decreasing) as a function of the assumed period.

TABLE 1. Probability to reproduce FRB/SGRB radio afterglows by a chance coincidence

  name                                  STD/⟨f⟩  median S/N   Nr   P1            P2            P = Nr P1 P2
  FRB 150418 (Keane et al. 2016)        0.54     5.4          16   6 × 10^-4     10^-2.9±1.3   10^-4.9±1.3
  FRB 150418 (Keane et al. 2016
    + Williams et al. 2016)             0.48     5.5          16   2 × 10^-3     10^-2.9±1.3   10^-4.4±1.3
  GRB 050724 (with upper limits)        0.85     5.2               4.3 × 10^-5
  GRB 050724 (without upper limits)     0.63     5.5               2 × 10^-4
  GRB 130603B (with upper limits)       0.95     4.3               1 × 10^-4
  GRB 130603B (without upper limits)    0.45     6.5               7.8 × 10^-4

² By introducing a smaller duty cycle, our simulations suggest that the chance probabilities for the temporal coincidence are indeed even lower.

We thank helpful communications and comments from Edo Berger, Evan Keane and Weimin Yuan. This work is partially supported by NASA NNX15AK85G and NNX14AF85G.

REFERENCES

Barthelmy, S. D., Chincarini, G., Burrows, D. N., et al. 2005, Nature, 438, 994
Berger, E. 2009, ApJ, 690, 231
Berger, E. 2014, ARA&A, 52, 43
Burke-Spolaor, S., & Bannister, K. W. 2014, ApJ, 792, 19
Champion, D. J., Petroff, E., Kramer, M., et al. 2015, ArXiv e-prints:1511.07746
Fong, W., Berger, E., Metzger, B. D., et al. 2014, ApJ, 780, 118
Gehrels, N., Sarazin, C. L., O'Brien, P. T., et al. 2005, Nature, 437, 851
Goodman, J. 1997, New A, 2, 449
Häring, N., & Rix, H.-W. 2004, ApJ, 604, L89
Hughes, P. A., Aller, H. D., & Aller, M. F. 1992, ApJ, 396, 469
Keane, E. F., Johnston, S., Bhandari, S., et al. 2016, Nature, 530, 453
Keane, E. F., Kramer, M., Lyne, A. G., Stappers, B. W., & McLaughlin, M. A. 2011, MNRAS, 415, 3065
Kennicutt, R. C., & Evans, N. J. 2012, ARA&A, 50, 531
Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J., & Crawford, F. 2007, Science, 318, 777
Maiolino, R., Comastri, A., Gilli, R., et al. 2003, MNRAS, 344, L59
Masui, K., Lin, H.-H., Sievers, J., et al. 2015, Nature, 528, 523
Mooley, K. P., Hallinan, G., Bourke, S., et al. 2016, ApJ, 818, 105
Ofek, E. O., Frail, D. A., Breslauer, B., et al. 2011, ApJ, 740, 65
Petroff, E., Bailes, M., Barr, E. D., et al. 2015, MNRAS, 447, 246
Qian, S. J., Britzen, S., Witzel, A., et al. 1995, A&A, 295, 47
Ravi, V., Shannon, R. M., & Jameson, A. 2015, ApJ, 799, L5
Savaglio, S., Glazebrook, K., & Le Borgne, D. 2009, ApJ, 691, 182
Spitler, L. G., Cordes, J. M., Hessels, J. W. T., et al. 2014, ApJ, 790, 101
Thornton, D., Stappers, B., Bailes, M., et al. 2013, Science, 341, 53
Vedantham, H. K., Ravi, V., Mooley, K., Frail, D., Hallinan, G., & Kulkarni, S. R. 2016, ArXiv e-prints:1603.04421
Williams, P. K. G., & Berger, E. 2016, ArXiv e-prints:1602.08434
Williams, P. K. G., Berger, E., & Chornock, R. 2016, The Astronomer's Telegram, 8752
Yuan, F., & Narayan, R. 2004, ApJ, 612, 724
Zhang, B. 2016, ArXiv e-prints:1602.08086
[]
[ "Reinforcement learning in market games", "Reinforcement learning in market games" ]
[ "Edward W Piotrowski \nInstitute of Mathematics\nUniversity of Białystok\nLipowa 41, Pl 15424 BiałystokPoland", "Jan Sładkowski \nInstitute of Physics\nUniversity of Silesia\nUniwersytecka 4, Pl40007KatowicePoland", "Anna Szczypińska \nInstitute of Physics\nUniversity of Silesia\nUniwersytecka 4, Pl40007KatowicePoland" ]
[ "Institute of Mathematics\nUniversity of Białystok\nLipowa 41, Pl 15424 BiałystokPoland", "Institute of Physics\nUniversity of Silesia\nUniwersytecka 4, Pl40007KatowicePoland", "Institute of Physics\nUniversity of Silesia\nUniwersytecka 4, Pl40007KatowicePoland" ]
[]
Financial markets investors are involved in many games: they must interact with other agents to achieve their goals. Among them are those directly connected with their activity on markets, but one cannot neglect other aspects that influence human decisions and their performance as investors. Distinguishing all subgames is usually beyond hope and resource consuming. In this paper we study how investors facing many different games gather information and form their decisions despite being unaware of the complete structure of the game. To this end we apply reinforcement learning methods to the Information Theory Model of Markets (ITMM). Following Mengel, we can try to distinguish a class Γ of games and possible actions (strategies) a^i_{m_i} for the i-th agent. Each agent divides the whole class of games into subclasses of games she/he thinks are analogous, and therefore adopts the same strategy for a given subclass. The criteria for partitioning are based on profit and cost analysis. The analogy classes and strategies are updated at various stages through the process of learning. We will study the asymptotic behaviour of the process and attempt to identify its crucial stages, e.g. the existence of possible fixed points or optimal strategies. Although we focus more on the instrumental aspects of agents' behaviours, various algorithms can be put forward and used for automatic investment. This line of research can be continued in various directions.
null
[ "https://arxiv.org/pdf/0710.0114v1.pdf" ]
6,415,699
0710.0114
0ef68d2fb3da95364be92f11e5396f3143f5df94
Reinforcement learning in market games

Edward W. Piotrowski, Institute of Mathematics, University of Białystok, Lipowa 41, PL-15424 Białystok, Poland
Jan Sładkowski, Institute of Physics, University of Silesia, Uniwersytecka 4, PL-40007 Katowice, Poland
Anna Szczypińska, Institute of Physics, University of Silesia, Uniwersytecka 4, PL-40007 Katowice, Poland

arXiv:0710.0114v1 [q-fin.TR], 30 Sep 2007
Preprint submitted to Elsevier 4 December 2008
Keywords: econophysics, market games, learning
PACS: 01.75.+m, 02.50.Ga, 02.70.-c, 42.30.Sy
This line of research can be continued in various directions.

Motto: "The central problem for gamblers is to find positive expectation bets. But the gambler also needs to know how to manage his money, i.e. how much to bet. In the stock market (more inclusively, the securities markets) the problem is similar but more complex. The gambler, who is now an investor, looks for excess risk adjusted return." (Edward O. Thorp)

Introduction. Noise or structure? We face this question almost always while analyzing large data sets. Pattern discovery is one of the primary concerns in various fields of research, commerce and industry. Models of optimal behaviour often belong to that class of problems. The goal of an agent in a dynamic environment is to make optimal decisions over time. One usually has to discard a vast amount of data (information) to obtain a concise model or algorithm. Therefore prediction of individual agents' behaviours is often burdened with large errors. The prediction game algorithm can be described as follows:

FOR n = 1, 2, . . .
  Reality announces x_n ∈ X
  Predictor announces γ_n ∈ Γ
  Reality announces y_n ∈ Y
END FOR

where x_n ∈ X is the data upon which the prediction γ_n ∈ Γ is made at each round n. The prediction quality is measured by some utility function υ : Γ × Y → R. One can view such a process as a communication channel that transmits information from the past to the future [1]. The gathering of information, often indirect and incomplete, is referred to as measurements. Learning theory deals with the abilities and limitations of algorithms that learn or estimate functions from data. Learning helps with optimal behaviour decisions by adjusting the agent's strategies to information gathered over time. Agents can base their action choices on prediction of the state of the environment or on rewards received during the process.
For example, a Markov decision process can be formulated as the problem of finding a strategy π that maximizes the expected sum of discounted rewards:

υ(s, π) = r(s, a_π) + β Σ_{s'} p(s'|s, a_π) υ(s', π),

where s is the initial state, a_π is the action induced by the strategy π, r is the reward at stage t and β is the discount factor; υ is called the value function. p(s'|s, a_π) denotes the (conditional) probability¹ of reaching the state s' from the state s as a result of action a_π. It can be shown that, in the case of an infinite horizon, there exists an optimal strategy π* satisfying the Bellman optimality equation

υ(s, π*) = max_a { r(s, a) + β Σ_{s'} p(s'|s, a) υ(s', π*) }.

In reinforcement learning, the agent receives rewards from the environment and uses them as feedback for its actions. Reinforcement learning has its roots in statistics, cybernetics, psychology, neuroscience, computer science, and related fields. In its standard formulation, the agent must improve his/her performance in a game through trial-and-error interaction with a dynamic environment. There are two ways of finding the optimal strategy: strategy iteration, where one directly manipulates the strategy, and value iteration, where one approximates the optimal value function. Therefore two classes of algorithms are considered: strategy (policy) iteration algorithms and value iteration algorithms. In the following section we discuss the adequacy of reinforcement learning in market games.

Reinforcement learning in market games. Can reinforcement learning help with market game analysis? Could it be used for finding optimal strategies? It is not easy to answer this question because it involves the problem of real-time decision making: one often has to (re-)act as quickly as possible.
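As an illustration of the value-iteration route, here is a minimal sketch that iterates the Bellman optimality equation on a made-up two-state, two-action MDP (all transition probabilities and rewards below are invented for the example and are not from the text):

```python
import numpy as np

# Value iteration: repeatedly apply the Bellman optimality operator
# v(s) <- max_a { r(s,a) + beta * sum_s' p(s'|s,a) v(s') }.
def value_iteration(P, r, beta=0.9, tol=1e-10):
    """P[a, s, s2]: probability of s -> s2 under action a; r[s, a]: reward."""
    v = np.zeros(r.shape[0])
    while True:
        # q[s, a] = r(s, a) + beta * E[v(s') | s, a]
        q = r + beta * np.einsum('aij,j->ia', P, v)
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

# Toy MDP: action 0 stays in place, action 1 swaps the two states;
# staying in state 0 pays reward 1, every other move pays 0.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
r = np.array([[1.0, 0.0],
              [0.0, 0.0]])
v = value_iteration(P, r)   # approximately [10, 9] for beta = 0.9
```

With β = 0.9, staying in state 0 forever yields 1/(1 − β) = 10, and the best state 1 can do is move to state 0, giving 0.9 · 10 = 9.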
Consider model-free reinforcement learning, Q-learning². In this approach one defines the value Q(s, a) of an action a as the discounted return obtained if action a is applied and the strategy π is followed thereafter:

Q*(s, a) = r(s, a) + β Σ_{s'} p(s'|s, a) υ(s', π*),  so that  υ(s, π*) = max_a Q*(s, a).

In Q-learning, the agent starts with an arbitrary Q(s, a) and at each stage t observes the reward r_t and then updates the value of Q according to the rule:

Q_{t+1}(s, a) = (1 − α_t) Q_t(s, a) + α_t (r_t + β max_b Q_t(s, b)),

where α_t ∈ [0, 1) is the learning rate, which needs to decay over time for the learning algorithm to converge. This approach is frequently used in the stochastic games setting. Watkins and Dayan proved that this sequence converges provided all states and actions have been visited/performed infinitely often [5]. Therefore we anticipate weak convergence ratios. Indeed, various theoretical and experimental analyses [6,7,8] showed that convergence even in very simple games might require ∼10^8 steps! If a well-shaped stock trend is formed, one can expect that there are sorts of adversarial equilibria (no agent is hurt by any change of the others' strategies),

R_i(π_1, …, π_n) ≤ R_i(π'_1, …, π'_{i−1}, π_i, π'_{i+1}, …, π'_n),

or coordination equilibria (all agents achieve their highest possible return),

R_i(π_1, …, π_n) = max_{a_1,…,a_n} R_i(a_1, …, a_n).

Here the R's denote the pay-off functions and the π's the one-stage strategies. The problem is that such equilibria can easily be identified with technical analysis³ tools, and then there is no need to resort to learning algorithms. In the most interesting classes of games neither adversarial equilibria nor coordination equilibria exist. This type of learning is much more subtle and, up to now, there is no satisfactory analysis in the field of reinforcement learning. Therefore a compromise is needed; for example, we must be willing to accept returns that might not be optimal.
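A minimal tabular sketch of the update rule above, run on a hypothetical deterministic two-state environment (the environment, the uniform exploration scheme, and the learning-rate schedule are all illustrative choices, not taken from the text):

```python
import random
import numpy as np

def q_learning(step, n_states, n_actions, beta=0.9, n_steps=20000, seed=0):
    """Tabular Q-learning with the update
    Q(s,a) <- (1 - alpha_t) Q(s,a) + alpha_t (r_t + beta * max_b Q(s',b))."""
    rng = random.Random(seed)
    Q = np.zeros((n_states, n_actions))
    s = 0
    for t in range(1, n_steps + 1):
        a = rng.randrange(n_actions)            # explore uniformly at random
        s_next, r_t = step(s, a)
        alpha = 1.0 / (1.0 + 0.01 * t)          # decaying learning rate
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r_t + beta * Q[s_next].max())
        s = s_next
    return Q

# Toy environment: action 0 stays, action 1 swaps states;
# staying in state 0 yields reward 1, all other moves yield 0.
def step(s, a):
    s_next = s if a == 0 else 1 - s
    return s_next, 1.0 if (s == 0 and a == 0) else 0.0

Q = q_learning(step, n_states=2, n_actions=2)
```

Since the toy environment is deterministic, Q converges to the fixed point Q*(0,0) = 10, Q*(1,1) = 9 of the Bellman equation, matching the value-iteration example.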
The models discussed in the following subsections belong to that class and seem to be tractable by learning algorithms.

Kelly's criterion. Kelly's criterion [9] can be successfully applied in horse betting or blackjack when one can discern biases [10], even though its optimality and convergence can be proven only in asymptotic cases. The simplest form of Kelly's formula is Θ = W − (1 − W)/R, where:
• Θ = percentage of capital to be put into a single trade;
• W = historical winning percentage of the trading system;
• R = historical average win/loss ratio.
Originally, Kelly's formula involves finding the "bias ratio" in a biased game. If the game is repeated infinitely often, then one should put at stake the percentage of one's capital equal to the bias ratio. Therefore one can easily construct various learning algorithms that perform the task of finding an environment such that Kelly's approach can be effectively applied (bias search + horizon of the investment) [11,12].

MMM model. Piotrowski and Sładkowski have analysed a model where the trader fixes a maximal price he is willing to pay for the asset and then, if the asset is bought, after some time sells it at random [13]. One can easily reverse the buying and selling strategies. The expected value of the profit after the whole cycle is

ρ_η(a) = −∫_{−∞}^{−a} p η(p) dp / (1 + ∫_{−∞}^{−a} η(p) dp),

where a is the withdrawal price. The maximal value of the function ρ, a_max, lies at a fixed point of ρ, that is, it fulfills the condition ρ(a_max) = a_max. The simplest version of the strategy is as follows: there is an optimal strategy that fixes the withdrawal price at the level of the historical average profit⁴. Task: find an implementation of a reinforcement learning algorithm that can be used effectively on markets. We should control both the probability distribution η and the profit "quality".

Learning across games. An interesting approach was put forward by Mengel [14].
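The simple Kelly formula quoted above is a one-liner; the sketch below also clips negative stakes, since Θ < 0 simply means the bet has negative expectation and should be skipped (the clipping is our own convention, not part of the formula):

```python
def kelly_fraction(win_rate, win_loss_ratio):
    """Theta = W - (1 - W) / R: fraction of capital to put into a single trade."""
    theta = win_rate - (1.0 - win_rate) / win_loss_ratio
    return max(theta, 0.0)   # a negative Theta means: do not bet

# A system winning 60% of the time, with average wins twice the average loss:
stake = kelly_fraction(0.6, 2.0)   # -> 0.4, i.e. stake 40% of capital
```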
One can easily give examples of situations where agents cannot be sure which game they are taking part in (e.g. the games may have the same set of actions). Distinguishing all possible games and learning separately for all of them requires a large amount of alertness, time and resources (costs). Therefore the agent should try to identify some classes of situations she/he perceives as analogous and take the same actions in each of them. The learning algorithm should update both the partition of the set of games and the actions to be taken:
• Agents repeatedly play a game drawn (randomly) from a set Γ.
• Agents partition the set of all games into subsets (classes) of games they learn not to discriminate (see them as analogous).
• Agents update both their propensities to use partitions {G} and their attractions towards using their possible strategies/actions.
The asymptotic behaviour and computational complexity of such a process are discussed in Ref. [14]. Stochastic approximation works in this case (approximation through a system of deterministic differential equations is possible). It would be interesting to analyse the following problems. Problem 1: identify possible "classes of market games". Problem 2: identify a "universal" set of strategies. For example, on the stock exchange one can try the brute force approach: admit as strategies buying/selling at all possible price levels and identify classes of games with trends. Unfortunately, the number of approximations generates huge transaction costs. This can be reduced on derivative markets since, due to the leverage, the ratio of transaction costs to price movements is much lower. We envisage that an agent may try to optimize among various classes of technical analysis tools.

Conclusion. As conclusions we would like to stress the following points. Algorithms are simple but the computation is complex, time and resource consuming. Learning across games could be used to "fit" technical analysis models.
Dynamic proportional investing (Kelly) should be the easiest to implement. But here we envisage problems analogous to those related to heat (entropy) in thermodynamics, and the exploration of knowledge might, in cases of high effectiveness, involve paradoxes [11] analogous to those arising when speed approaches the speed of light [12]. One can envisage learning (information) models of markets and portfolio theory. Implementations should be carefully tested: transaction costs can "kill" even crafty algorithms [15]. Quantum algorithms/computers, if they ever come true, might change the situation in a dramatic way: we would have powerful algorithms at our disposal and the learning limits would certainly broaden [16,17,18].

Footnotes:
1. In a more formal setting it would be a transition kernel for the process of consecutive actions and observations.
2. This is obviously a value iteration, but in market games there is a natural value function: the profit.
3. We understand the term technical analysis as simplified hypothesis testing methods that can be applied in real time.
4. Or else: do not try to outperform yourself.

References:
[1] Crutchfield, J. P., Shalizi, C. R., Physical Review E 59 (2003) 275.
[2] Benveniste, A., Metevier, M., Priouret, P., Adaptive Algorithms and Stochastic Approximation, Berlin: Springer Verlag (1990).
[3] Fudenberg, D., Levine, D. K., The Theory of Learning in Games, Cambridge: MIT Press (1998).
[4] Kuschner, H. J., Lin, G. G., Stochastic Approximation and Recursive Algorithms and Applications, New York: Springer (2003).
[5] Watkins, C. J. C. H., Dayan, P., Q-learning, Machine Learning 8 (1992) 279.
[6] Farias, V. F., Moallemi, C. C., Weissman, T., Van Roy, B., Universal Reinforcement Learning, arXiv:0707.3087v1 [cs.IT].
[7] Shneyerov, A., Chi Leung Wong, A., The rate of convergence to a perfect competition of a simple matching and bargaining mechanism, Univ. of British Columbia preprint (2007).
[8] Littman, M. L., Markov games as a framework for multi-agent reinforcement learning, Brown Univ. preprint.
[9] Kelly, J. L., Jr., A New Interpretation of Information Rate, The Bell System Technical Journal 35 (1956) 917.
[10] Thorpe, E. O., Optimal gambling systems for favorable games, Review of the International Statistical Institute 37 (1969) 273.
[11] Piotrowski, E. W., Kelly Criterion revisited: optimal bets, Physica A (2007), in press.
[12] Piotrowski, E. W., Łuczka, J., The relativistic velocity addition law optimizes a forecast gambler's profit, submitted to Physica A; arXiv:0709.4137v1 [physics.data-an].
[13] Piotrowski, E. W., Sładkowski, J., The Merchandising Mathematician Model, Physica A 318 (2003) 496.
[14] Mengel, F., Learning across games, Univ. of Alicante report WP-AD 2007-05.
[15] Piotrowski, E. W., Sładkowski, J., Arbitrage risk induced by transaction costs, Physica A 331 (2004) 233.
[16] Piotrowski, E. W., Fixed point theorem for simple quantum strategies in quantum market games, Physica A 324 (2003) 196.
[17] Piotrowski, E. W., Sładkowski, J., Quantum computer: an appliance for playing market games, International Journal of Quantum Information 2 (2004) 495.
[18] Miakisz, K., Piotrowski, E. W., Sładkowski, J., Quantization of Games: Towards Quantum Artificial Intelligence, Theoretical Computer Science 358 (2006) 15.
Title: Stability of Linear Systems under Extended Weakly-Hard Constraints
Authors: Nils Vreman, Paolo Pazzaglia, Victor Magron, Jie Wang, Martina Maggio (Senior Member, IEEE)
Control systems can show robustness to many events, like disturbances and model inaccuracies. It is natural to speculate that they are also robust to sporadic deadline misses when implemented as digital tasks on an embedded platform. This paper proposes a comprehensive stability analysis for control systems subject to deadline misses, leveraging a new formulation to describe the patterns experienced by the control task under different handling strategies. Such analysis brings the assessment of control systems robustness to computational problems one step closer to the controller implementation.
DOI: 10.1109/lcsys.2022.3179960
PDF: https://export.arxiv.org/pdf/2101.11312v2.pdf
Corpus ID: 249333770 · arXiv: 2101.11312 · pdf SHA: 152535d342a46291e451554081dc27a74fc500cb
Stability of Linear Systems under Extended Weakly-Hard Constraints, 30 Aug 2022. Index Terms: fault tolerant systems, linear systems, networked control systems.

There is, however, a mismatch between the guarantees that can be obtained for real-time tasks and platforms [28], [4], and the analysis available for control tasks under the weakly-hard model. Fewer works deal with the stability analysis of weakly-hard real-time control tasks, often targeting specific use-cases. For instance, the analysis in [16] is limited to constraints specifying a maximum number of consecutive deadline misses. The results in [14], [15], obtained for networked linear control systems having packet dropouts bounded using the weakly-hard model, cannot be generalised to late completions or to sets of weakly-hard constraints. The authors of [13], [12] studied safety guarantees of weakly-hard controllers, considering a miss as a discarded computation with a known periodic pattern. In [8], [9], an over-approximation-based approach is proposed to check the safety of nonlinear weakly-hard systems, where misses are treated as discarded computations and the actuator holds its previous value.
Convergence rates (providing sufficient stability guarantees) are analysed in [6]. A Lyapunov-based stability analysis of nonlinear weakly-hard systems is studied in [7], with deadline misses treated as packet dropouts. However, the state-of-the-art approaches listed above lack generalisability to more expressive real-time implementations, such as different deadline miss models or handling strategies. This paper aims at filling the gap by providing a stability analysis that can be applied to a class of generic weakly-hard models and deadline miss handling strategies. First, we formally extend the weakly-hard model to explicitly consider the strategy used to handle the miss events. By leveraging an automaton representation of the sequences allowed by (a set of) extended weakly-hard constraints, we use Kronecker lifting and the joint spectral radius to properly express its stability conditions. Using the concept of constraint dominance, we prove analytic bounds on the stability of a weakly-hard system with respect to less dominant constraints. Finally, we analyse the stability of the resulting closed-loop systems using SparseJSR [26], which exploits the sparsity pattern that naturally arises in the Kronecker lifted representation. The proposed analysis calls for modularity and separation of concerns, and can be a useful tool to decouple the constraint specification and the control verification.

II. BACKGROUND AND NOTATION

We consider a controllable and fully observable discrete-time sampled linear time invariant system, expressed as

P : x_{t+1} = A_p x_t + B_p u_t,  y_t = C_p x_t + D_p u_t,  (1)

where x_t ∈ R^n, u_t ∈ R^r and y_t ∈ R^q are the plant state, the control signal and the plant output, sampled at time t · T, where T is the sampling period and t ∈ N.
The plant is controlled by a stabilising, LTI, one-step-delay discrete-time controller

C : z_{t+1} = A_c z_t + B_c (r_t − y_t),  u_{t+1} = C_c z_t + D_c (r_t − y_t),  (2)

where z_t ∈ R^s is the controller's internal state and r_t ∈ R^q is the setpoint. Without loss of generality, we consider r_t = 0.

A. Real-time tasks that may miss deadlines

The controller in (2) is implemented as a real-time task τ, designed to be executed periodically with period T on a real-time embedded platform. Under nominal conditions the task releases an instance (called a job) in each period, which should be completed before the release of the next instance. We denote the sequence of activation instants for τ with (a_i)_{i∈N}, such that, in nominal conditions, a_{i+1} = a_i + T; the sequence of completion instants with (f_i)_{i∈N}; and the sequence of job deadlines with (d_i)_{i∈N}, such that d_i = a_i + T (also called an implicit deadline). This requirement can be either satisfied or not, leading respectively to deadline hits and misses.

Definition 1 (Deadline hit and miss). The i-th job of a periodic task τ with period T hits its deadline when f_i ≤ d_i and misses its deadline when f_i > d_i.

We refer to both deadline hits and misses using the term outcome of a job. Intuitively, each job's outcome depends on the characteristics of the remaining tasks executing in the real-time system and on the chosen scheduling algorithm. Given a taskset and a (worst-case) schedule, it is possible to bound the worst-case behaviour of the job outcomes [2], [28]. This bound is generally expressed using the weakly-hard model [2]. Following this model, a task τ may satisfy any combination of the weakly-hard constraints defined below.
(i) in any window of k consecutive jobs, at most m deadlines are missed; (ii) in any window of k consecutive jobs, at least h deadlines are hit; (iii) in any window of k consecutive jobs, at most m consecutive deadlines are missed; and (iv) in any window of k consecutive jobs, at least h consecutive deadlines are hit. In all such cases, m, h ∈ N, k ∈ N \ {0}, m ≤ k, and h ≤ k. A generic weakly-hard constraint is hereafter denoted with the symbol λ, while a set of L constraints is referred to as Λ = {λ_1, λ_2, …, λ_L}. We define a string ω = α_1, α_2, …, α_N as a sequence of N consecutive outcomes, where each outcome α_i is a character in the alphabet Σ = {M, H}. We use ω ⊢ λ to denote that ω satisfies the constraint λ. Stating that τ ⊢ λ means that all the possible sequences of outcomes that τ can experience satisfy the corresponding constraint λ. The set of such sequences results naturally from the definition of λ, and is formally defined as the satisfaction set [2].

Definition 2 (Satisfaction set S_N(λ)). We denote with S_N(λ) the set of strings of length N ≥ 1 that satisfy a constraint λ. Formally, S_N(λ) = {ω ∈ Σ^N | ω ⊢ λ}. Taking the limit to infinity, the set S(λ) contains all the strings of infinite length that satisfy λ.

The notion of domination between constraints [2] then follows.

Definition 3 (Constraint domination). Constraint λ_i dominates λ_j (formally, λ_i ⪰ λ_j) if S(λ_i) ⊆ S(λ_j).

B. Control tasks that may miss deadlines

When a control task τ is implemented on an embedded platform with limited computational power, alongside other applications, it is not uncommon for it to experience deadline misses, even in the case of simple control designs (PID, LQG, etc.) [1], [18]. Computational overruns may be caused by, e.g., bursts of interrupts, cache misses, variable execution times of ancillary functions, or other complex interactions.
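To make the four weakly-hard constraint types of Section II-A concrete, the following sketch checks a finite outcome string ('M' = miss, 'H' = hit) against each of them over every window of k consecutive jobs (a helper of our own, not code from the paper):

```python
def satisfies(word, kind, n, k):
    """Check a hit/miss string against one weakly-hard constraint type.
    kind: 'max_misses' (at most n misses in any k jobs), 'min_hits' (at least
    n hits in any k), 'max_consec_misses', or 'min_consec_hits'."""
    for i in range(len(word) - k + 1):
        w = word[i:i + k]
        if kind == 'max_misses' and w.count('M') > n:
            return False
        if kind == 'min_hits' and w.count('H') < n:
            return False
        if kind == 'max_consec_misses' and 'M' * (n + 1) in w:
            return False
        if kind == 'min_consec_hits' and 'H' * n not in w:
            return False
    return True

ok = satisfies("HMHHMH", 'max_misses', 1, 3)        # at most 1 miss per 3 jobs
bad = satisfies("HMMH", 'max_consec_misses', 1, 4)  # "MM" violates it
```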
If such events are rare or temporary, choosing a longer period for the controller to avoid them may result in worse performance and stability margins for nominal conditions [19]. Characterising the stability and performance of such controllers requires knowing what happens when a control deadline is missed [19], [16], [24]. In particular, we need a deadline miss handling strategy to decide the fate of the job that missed the deadline (and possibly the next ones), and an actuator mode to deal with the loss of a new control signal, for example by holding the previous value constant or zeroing it [22]. A few handling strategies for periodic controllers have been proposed in literature, the most interesting being Kill and Skip-Next [3], [19], [16]. Definition 4 (Kill strategy). Under the Kill strategy, a job that misses its deadline is terminated immediately. Formally, for the i-th job of τ either f i ≤ d i or f i = ∞. Definition 5 (Skip-Next strategy). Under the Skip-Next strategy, a job that misses its deadline is allowed to continue during the following period. Formally, if the i-th job of τ misses its deadline d i , a new deadline d + i = d i + T is set for the job, and a i+1 = d + i . C. Stability analysis techniques based on JSR In [16], the authors identify a set of subsequences of hit and missed deadlines, which can be arbitrarily combined to obtain all possible sequences in S ( m k ). The stability analysis of the resulting arbitrary switching system is then obtained by leveraging the Joint Spectral Radius (JSR) [21]. Given ℓ ∈ N \ {0} and a set of matrices A = {A 1 , . . . , A ℓ } ⊆ R n×n , under the hypothesis of arbitrary switching over any sequence s = a 1 , a 2 , . . . 
of indices of matrices in A, the JSR of A is defined by:

ρ(A) = lim_{k→∞} max_{s ∈ {1,…,ℓ}^k} ‖A_{a_k} ⋯ A_{a_2} A_{a_1}‖^{1/k}.  (3)

The number ρ(A) characterises the maximal asymptotic growth rate of matrix products from A (thus ρ(A) < 1 means that the system is asymptotically stable), and is independent of the norm ‖·‖ used in (3). Existing practical tools such as the JSR Matlab toolbox [23] include multiple algorithms to compute both upper and lower bounds on ρ(A). When the switching sequences between the dynamics of A are not arbitrary, but constrained by a graph G, the so-called constrained joint spectral radius (CJSR) [5] can be applied. Introducing S_k(G) as the set of all possible switching sequences s of length k that satisfy the constraints of the graph G, the CJSR of A is defined by

ρ(A, G) = lim_{k→∞} max_{s ∈ S_k(G)} ‖A_{a_k} ⋯ A_{a_2} A_{a_1}‖^{1/k}.  (4)

In general, computing or approximating the CJSR is harder than computing the JSR. In [20], the authors propose a multinorm-based method to approximate the CJSR with arbitrary accuracy. Other works [10], [29] propose the creation of an arbitrary switching system such that its JSR is equal to the CJSR of the original system, based on a Kronecker lifting method. This will also be our approach, as detailed later. In [17], the authors propose an efficient approach to compute upper bounds of the JSR based on positive polynomials which can be decomposed as sums of squares (SOS). Finding the coefficients of a polynomial being SOS reduces to solving an SDP [11]. To reduce time and space complexity, a sparse variant has been proposed in [26], exploiting the sparsity of the input matrices, based on the term sparsity SOS (TSSOS) framework [27]. By contrast, the procedure in [17] will hereafter be denoted as dense. While providing a more conservative result, the sparse upper bound can be obtained significantly faster if the matrices in A are sparse [26], e.g., the matrices we analyse in Section IV.

III.
EXTENDED WEAKLY-HARD TASK MODEL To provide a comprehensive analysis framework, we need to examine what occurs in each time interval (π i ) i∈N , with π i = [a 0 + i · T, a 0 + (i + 1) · T ). In this context, an extension of the weakly-hard model is required to account for the given deadline miss handling strategy, denoted with the symbol H. The definition above differs from the original weakly-hard model of [2], since (i) it explicitly introduces the handling strategy H; and (ii) it focuses on the presence of a new control command at the end of each time interval π i , instead of checking the deadline miss events, which guarantees its applicability also for strategies different than Kill. We now require an expressive alphabet Σ (H) to characterize the behaviour of task τ in each possible time interval. For both Kill and Skip-Next strategies, each interval π i contains at most one activated and one completed job. This restricts the possible behaviours to three cases: (i) a time interval in which the same job is both released and completed is denoted by H (hit); (ii) a time interval in which no job is completed is denoted by M (miss); (iii) a time interval in which no job is released, but a job (released in a previous interval) is completed, is denoted by R (recovery). By checking all unique combinations of job activations and completions in each interval, we obtain the alphabets for Kill and Skip-Next as Σ (Kill) = {M, H} and Σ (Skip-Next) = {M, H, R}, respectively. The recovery character R is used in the Skip-Next alphabet to identify the late completion of a job. As a consequence, R is treated equivalently to H when verifying the extended weakly hard constraints (EWHC). The algebra presented in Section II-A is extended to the new alphabet. We assign a character of the alphabet Σ (H) to each interval π i . A string ω = α 1 , α 2 , . . . 
, α_N is used to represent a sequence of N outcomes for task τ, with α_i ∈ Σ(H) representing the outcome associated with the interval π_i. To enforce only feasible sequences, we introduce an order constraint for the R character (Rule 1: an R may only directly follow an M, since a recovery completes a job released in a preceding miss interval). Domination carries over to the extended constraints: λ^H_i ⪰ λ^H_j if S(λ^H_i) ⊆ S(λ^H_j).

A. Automaton representation of EWHC

Any EWHC, as presented in Definition 6, can be systematically represented using an automaton. In this paper we build upon the WeaklyHard.jl automaton model presented in [25]. Here, a (minimal) automaton G_{λ^H} = (V_{λ^H}, E_{λ^H}) associated with λ^H consists of a set of vertices V_{λ^H} and a set of directed labeled edges E_{λ^H}. Each vertex v_i ∈ V_{λ^H} corresponds to a string of outcomes of the extended weakly-hard task executions. Trivially, there exist no vertices for strings that do not satisfy the EWHC. A directed labeled edge e_{i,j} = (v_i, v_j, α) ∈ E_{λ^H} (also denoted a transition) connects two vertices iff appending the outcome α ∈ Σ(H), the edge's label, to the tail vertex's string representation (v_i) results in a string equivalent to that of the head vertex (v_j). Thus, a random walk in the automaton corresponds to a random string satisfying the EWHC. In particular, the walks in the automaton correspond to all strings in S(λ^H). Since the WeaklyHard.jl automaton model only uses the binary alphabet Σ = {M, H}, we require the additional character R to handle the Skip-Next strategy properly. Recall that both a hit (H) and a recovery (R) are considered job completions. Thus, for the Skip-Next strategy, we post-process the automaton by enforcing that Rule 1 is honoured and that the corresponding transitions are correct, i.e., switching the labels on some edges from H to R. We emphasise that although the extended automaton models appear similar for the Kill and Skip-Next strategies, the differing transitions of the two automata significantly affect the corresponding closed-loop systems, as will be clear in Section IV.
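The per-label bookkeeping that this automaton enables can be sketched on a made-up two-vertex example for the constraint "no two consecutive misses" (vertex 0 = last outcome was H, vertex 1 = last outcome was M): for each label α, collect a matrix whose (i, j) entry is 1 iff an α-labeled edge leads from v_j to v_i:

```python
import numpy as np

def transition_matrix(edges, n_vertices, alpha):
    """edges: list of (tail, head, label); entry (head, tail) is set to 1
    for every edge carrying the label alpha."""
    F = np.zeros((n_vertices, n_vertices))
    for tail, head, label in edges:
        if label == alpha:
            F[head, tail] = 1.0
    return F

# Toy automaton for "no two consecutive misses": from vertex 0 (after a hit)
# both H and M are allowed; from vertex 1 (after a miss) only H is allowed.
edges = [(0, 0, 'H'), (0, 1, 'M'), (1, 0, 'H')]
F_H = transition_matrix(edges, 2, 'H')   # [[1, 1], [0, 0]]
F_M = transition_matrix(edges, 2, 'M')   # [[0, 0], [1, 0]]
```

Consistently with Lemma 1 below, the product F_M F_M is the zero matrix, since the string MM is infeasible for this automaton.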
The WeaklyHard.jl automaton model also allows for the case where the task τ is subject to a set of multiple constraints. Since the stability analysis presented in this paper is invariant to the type (and number) of constraints acting on the control task τ, we henceforth say that τ is subject to a set of EWHC Λ^H (unless stated otherwise). Extracting all transitions in E_{Λ^H} corresponding to a character α ∈ Σ(H) yields what is generally known as a directed adjacency matrix [30], denoted here as a transition matrix.

Definition 7 (Transition matrix). Given an automaton G_{Λ^H}, the transition matrix F_α(G_{Λ^H}) ∈ R^{n_V × n_V}, with n_V = |V_{Λ^H}| and α ∈ Σ(H), is computed as F_α(G_{Λ^H}) = {f_{i,j}(α)} with f_{i,j}(α) = 1 if there exists e = (v_j, v_i, α) ∈ E_{Λ^H}, and f_{i,j}(α) = 0 otherwise.

Since at most one successor exists from each vertex with a transition labeled α ∈ Σ(H), the matrix F_α has column sums equal to either 1 or 0. We now introduce a vector q_t ∈ R^{n_V}, called the G-state, with n_V = |V_{Λ^H}|, representing the state of the given automaton G_{Λ^H} at interval π_t.

Definition 8 (G-state q_t). Given an automaton G_{Λ^H} and a string ω ∈ Σ(H)^N, ω = α_1, α_2, …, α_N, for k = |v_i|, v_i ∈ V_{Λ^H}, we define q_t ∈ R^{n_V}, where the i-th element q_{t,i} is 1 if α_{t−k}, …, α_{t−1} ≡ v_i ∈ V_{Λ^H}, and 0 otherwise.

The G-state q_t is the vector representation of the vertex left at step t; here, q_t = 0 means that the transition at step t − 1 was infeasible for the automaton. Given an arbitrary string ω = α_1, …, α_t, …, the G-state dynamics is defined as q_{t+1} = F_α(G_{Λ^H}) · q_t, and the following property holds [30].

Lemma 1 (Infeasible sequence). If ω ∉ S_N(Λ^H), then F_ω(G_{Λ^H}) = F_{α_N}(G_{Λ^H}) ⋯ F_{α_2}(G_{Λ^H}) · F_{α_1}(G_{Λ^H}) = 0. Thus, if q_t = 0 for an arbitrary t, then q_{t'} = 0 for all t' ≥ t.

IV.
STABILITY ANALYSIS

Using the alphabet Σ (H) and the chosen actuator mode (i.e., zeroing, or holding the previous value), we compute the closed-loop behaviour of the controlled system. We identify one matrix for each dynamics corresponding to an interval π t associated by α ∈ Σ (H), building the set A H .

Kill: Defining x̃ K t = [x T t , z T t , u T t ] T as the closed-loop state vector, we compute the discrete-time closed-loop system dynamics A K H , corresponding to the character H:

x̃ K t+1 = A K H x̃ K t , A K H = [ A p , 0, B p ; −B c C p , A c , −B c D p ; −D c C p , C c , −D c D p ].

For the case of M, the controller execution terminates prematurely and its states are not updated (z t+1 = z t ). Therefore, depending on the actuation mode, the controller output is either zeroed (u t+1 = 0) or held (u t+1 = u t ). The resulting closed-loop system in state-space form is denoted with A K M :

x̃ K t+1 = A K M x̃ K t , A K M = [ A p , 0, B p ; 0, I, 0 ; 0, 0, ∆ ].

Here, ∆ = I (identity matrix) if the control signal is held and ∆ = 0 if zeroed. The set of dynamic matrices under the Kill strategy is then A Kill = { A K H , A K M }.

Skip-Next: For the Skip-Next strategy, we introduce two additional states x̂ t and û t storing the old values of x t and u t while the controller awaits an update. The resulting state vector then becomes x̃ S t = [x T t , z T t , u T t , x̂ T t , û T t ] T . When π t is associated to H, the two additional states mirror the behaviour of the states of which they are storing data. The resulting closed-loop system is described using A S H :

x̃ S t+1 = A S H x̃ S t , A S H = [ A p , 0, B p , 0, 0 ; −B c C p , A c , −B c D p , 0, 0 ; −D c C p , C c , −D c D p , 0, 0 ; A p , 0, B p , 0, 0 ; −D c C p , C c , −D c D p , 0, 0 ].

For the case of M in π t , x̂ t and û t maintain their previous values. The resulting closed-loop is described by A S M :

x̃ S t+1 = A S M x̃ S t , A S M = [ A p , 0, B p , 0, 0 ; 0, I, 0, 0, 0 ; 0, 0, ∆, 0, 0 ; 0, 0, 0, I, 0 ; 0, 0, 0, 0, I ].
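The Kill-strategy block matrices A K H and A K M map directly to numpy blocks. The sketch below is our own helper (the name and the scalar test values are illustrative, not from the paper); it assembles both matrices for either actuator mode.

```python
import numpy as np

def kill_matrices(Ap, Bp, Cp, Dp, Ac, Bc, Cc, Dc, hold=True):
    """Assemble A_H^K and A_M^K for the Kill strategy, state [x; z; u].
    Delta = I holds the last control value on a miss; Delta = 0 zeroes it."""
    n, m, nz = Ap.shape[0], Bp.shape[1], Ac.shape[0]
    Z = np.zeros
    A_H = np.block([[Ap,       Z((n, nz)), Bp       ],
                    [-Bc @ Cp, Ac,         -Bc @ Dp ],
                    [-Dc @ Cp, Cc,         -Dc @ Dp ]])
    Delta = np.eye(m) if hold else Z((m, m))
    A_M = np.block([[Ap,         Z((n, nz)), Bp        ],
                    [Z((nz, n)), np.eye(nz), Z((nz, m))],
                    [Z((m, n)),  Z((m, nz)), Delta     ]])
    return A_H, A_M
```

With hold=True, a miss leaves z and u unchanged while the plant keeps integrating the stale input, exactly as described for A K M above.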
Finally, for the case of R, the new control command is calculated using the values stored in x̂ t and û t . The resulting closed-loop system is described by A S R :

x̃ S t+1 = A S R x̃ S t , A S R = [ A p , 0, B p , 0, 0 ; 0, A c , 0, −B c C p , −B c D p ; 0, C c , 0, −D c C p , −D c D p ; A p , 0, B p , 0, 0 ; 0, C c , 0, −D c C p , −D c D p ].

The resulting set of matrices under the Skip-Next strategy is then defined as A Skip-Next = { A S H , A S M , A S R }.

A. Kronecker lifted switching system

Combining the set of system dynamics A H with the associated automaton G Λ H , we seek to obtain an equivalent system model based on Kronecker lifting, characterized by a set of matrices denoted by L Λ H and behaving as an arbitrary switching system, such that ρ (L Λ H ) = ρ (A H , G Λ H ). In this way, powerful algorithms applicable to arbitrary switching systems [23], [26] can be used to find tight stability bounds. We build upon the Kronecker lifting approach of [29]. Leveraging the vector q t of Definition 8, we introduce the lifted discrete-time state ξ t ∈ R^{n·n V}, defined as ξ t = q t ⊗ x t , where n V = |V Λ H | and ⊗ is the Kronecker product. By construction, ξ t is a vector composed of n V blocks of size n, where at most one block is equal to x t and all other blocks are equal to the 0 vector. Then, we build a set of lifted matrices P α (G Λ H ) ∈ R^{n·n V × n·n V}, which incorporates both the system dynamics and the possible transitions given a certain outcome α ∈ Σ (H):

P α (G Λ H ) = F α (G Λ H ) ⊗ A H α , α ∈ Σ (H). (5)

The lifted dynamics of the closed-loop system then become ξ t+1 = P α (G Λ H ) · ξ t . Formally, we obtain a system composed of a set of switching dynamic matrices, L Λ H .

Definition 9 (Lifted switching set L Λ H ). Given a set of dynamic matrices A H and an automaton G Λ H , the switching set L Λ H is defined as: L Λ H = {P α (G Λ H ) | α ∈ Σ (H)}.
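Definitions 7 to 9 and Lemma 1 can be exercised numerically on a toy case. The sketch below hardcodes the two-vertex automaton for "at most 1 miss in any 2 jobs" and hypothetical 1×1 closed-loop "dynamics", so all numbers are illustrative only.

```python
import numpy as np

# Definition 7: transition matrices of the two-vertex automaton for
# <at most 1 miss in any 2 jobs>; v0 = "last outcome H", v1 = "last outcome M".
F = {"H": np.array([[1., 1.],    # H always leads back to v0
                    [0., 0.]]),
     "M": np.array([[0., 0.],
                    [1., 0.]])}  # M is only allowed after an H (v0 -> v1)
assert all(s in (0., 1.) for a in "HM" for s in F[a].sum(axis=0))

# Lemma 1: "MM" is infeasible, so the product of its transition matrices is 0.
assert not np.any(F["M"] @ F["M"])

# Hypothetical closed-loop dynamics: contracting on a hit, expanding on a miss.
A = {"H": np.array([[0.5]]), "M": np.array([[1.2]])}

# Eq. (5): lifted matrices P_alpha = F_alpha (Kronecker product) A_alpha.
P = {a: np.kron(F[a], A[a]) for a in "HM"}

q = np.array([1., 0.])   # Definition 8: G-state, starting in v0
x = np.array([2.0])      # system state
xi = np.kron(q, x)       # lifted state xi_t = q_t (x) x_t
for a in "HMH":          # a feasible string
    xi = P[a] @ xi
    q, x = F[a] @ q, A[a] @ x
    # mixed-product property: (F (x) A)(q (x) x) = (F q) (x) (A x)
    assert np.allclose(xi, np.kron(q, x))
```

The final assertion is the identity underlying ρ (L Λ H ) = ρ (A H , G Λ H ): products of lifted matrices reproduce the constrained products of the underlying dynamics.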
Leveraging the mixed-product property of ⊗ and introducing a proper submultiplicative norm, it is possible to prove that ρ (L Λ H ) = ρ (A H , G Λ H ). For more details and a formal proof we refer the interested reader to [29].

B. Extended weakly-hard and JSR properties

We now provide a general relation between all EWHCs in terms of their joint spectral radii.

Theorem 1 (JSR dominance). Given λ H 1 and λ H 2 as arbitrary EWHCs, if λ H 2 ⪯ λ H 1 then ρ (L λ H 2 ) ≤ ρ (L λ H 1 ).

Proof. From Equation (3), for a generic EWHC λ H , ρ (L λ H ) = lim ℓ→∞ ρ ℓ (L λ H ), with ρ ℓ (L λ H ) = max a∈S ℓ (λ H ) ‖A a ‖^{1/ℓ}. Definition 3 gave us that λ H 2 ⪯ λ H 1 iff S(λ H 2 ) ⊆ S(λ H 1 ). Thus, if for a string b it holds that b ∈ S ℓ (λ H 2 ), then it also holds that b ∈ S ℓ (λ H 1 ). The set of all possible A b is thus included in the set of all possible A a , a ∈ S ℓ (λ H 1 ), thus: max b∈S ℓ (λ H 2 ) ‖A b ‖^{1/ℓ} ≤ max a∈S ℓ (λ H 1 ) ‖A a ‖^{1/ℓ}, ∀ℓ ∈ N \ {0}. The theorem follows immediately when ℓ → ∞.

Theorem 1 is the first result that provides an analytic connection between the control-theoretical analysis and the real-time implementation. Primarily, it implies that the constraint dominance from Definition 3 also carries over to the JSR, giving us a notion of JSR dominance. The results of Theorem 1 are strategy-independent, further reducing the coupling between the control analysis and the real-time implementation, and are also independent of the controlled system's dynamics.

Corollary 1. Given λ H 1 = m k H and λ H 2 = m k′ H with k′ ≥ k, then ρ (L λ H 2 ) ≤ ρ (L λ H 1 ).

The conclusions drawn from Theorem 1 are theoretical, but its practical applicability lies in the algorithm used to find ρ LB and ρ UB , i.e., lower and upper bounds for the JSR value. Using these bounds we can determine the stability of the corresponding switching systems, as follows: ρ LB (L λ H 2 ) ≤ ρ (L λ H 2 ) ≤ ρ (L λ H 1 ) ≤ ρ UB (L λ H 1 ). Regardless of the algorithm used to find the bounds, if λ H 2 ⪯ λ H 1 and ρ UB (L λ H 1 ) < 1, the system under λ H 2 is switching stable.
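The quantity ρ ℓ in the proof can be evaluated by brute force for small ℓ, which makes the JSR-dominance inequality visible on a toy example. The scalar dynamics below are hypothetical, chosen only so the inequality is easy to see; the helper names are ours.

```python
import numpy as np
from itertools import product

def satisfies(s, m, k):
    """True iff s has at most m M-characters in every window of k jobs."""
    return all(s[i:i + k].count("M") <= m for i in range(len(s) - k + 1))

def rho_ell(A, m, k, ell):
    """rho_ell = max over feasible length-ell strings a of ||A_a||^(1/ell),
    a lower bound on the constrained JSR (brute force, tiny ell only)."""
    best = 0.0
    for s in product("HM", repeat=ell):
        if satisfies(s, m, k):
            M = np.eye(A["H"].shape[0])
            for c in s:
                M = A[c] @ M
            best = max(best, np.linalg.norm(M, 2) ** (1.0 / ell))
    return best

A = {"H": np.array([[0.5]]), "M": np.array([[1.2]])}  # hypothetical dynamics
# <1,3> dominates <1,2> (smaller satisfaction set), so its rho_ell,
# and hence its JSR, cannot be larger (Theorem 1).
assert rho_ell(A, 1, 3, 6) <= rho_ell(A, 1, 2, 6)
```

For these scalars the worst feasible length-6 string under the ⟨1, 2⟩ constraint is the alternating MHMHMH, giving (1.2³ · 0.5³)^{1/6}.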
A similar relation holds for the lower bound. Theorem 1 can be further extended by relating the joint spectral radius of a single constraint to sets of constraints.

Theorem 2. Given an arbitrary EWHC λ H , it holds that ρ (L Λ H ) ≤ ρ (L λ H ), ∀Λ H ∋ λ H .

Proof. For an arbitrary EWHC set Λ H = {λ H 1 , . . . , λ H N }, its satisfaction set is S ℓ (Λ H ) = ⋂ i∈{1,...,N } S ℓ (λ H i ). Thus, for any λ H i ∈ Λ H it holds that S ℓ (Λ H ) ⊆ S ℓ (λ H i ). If a string b is in S ℓ (Λ H ), it also belongs to S ℓ (λ H ). The set of all possible A b is thus included in the set of all possible A a , a ∈ S ℓ (λ H ). As a consequence it holds that max b∈S ℓ (Λ H ) ‖A b ‖^{1/ℓ} ≤ max a∈S ℓ (λ H ) ‖A a ‖^{1/ℓ}, ∀ℓ ∈ N >0 . The theorem follows immediately when ℓ → ∞.

As in Theorem 1, the more we restrict the execution pattern of the control task with sets of constraints, the lower its JSR will be. Theorem 2 delivers the practical insight that enforcing tighter EWHC on a stable system will never destabilise it, as formally stated in the following corollary.

Corollary 3. Given an arbitrary EWHC λ H , if ρ (L λ H ) < 1 then ρ (L Λ H ) < 1, ∀Λ H ∋ λ H .

V. EVALUATION

We apply the lifted dynamics model presented in Section IV to a representative plant for the process industry, controlled using a PI-controller, sampled with T = 0.5 s:

x t+1 = A p x t + B p u t , y t = [1, 0, 0] x t , z t+1 = z t + 0.359 y t , u t+1 = 0.454 z t + 0.633 y t .

We analyse the stability of the control system subject to different m k H constraints. We consider all combinations of strategy (Kill or Skip-Next) and actuator mode (zeroing or holding). For each combination, we generate the lifted set L λ H . Its JSR ρ (L λ H ) is then approximated using three different algorithms. First, a lower and an upper bound of ρ (L λ H ) are computed using the JSR toolbox [23]. Then, an upper bound of the JSR is obtained via SOS relaxations, using both the dense and the sparse algorithm from SparseJSR [26].
Table I displays our results, acquired on an Intel Core [email protected] CPU with 8GB RAM. Lower and upper bounds are denoted "LB" and "UB". All upper bounds obtained with the JSR toolbox were found to be greater than the ones obtained with SOS, and are thus omitted from the Table. The symbol "−" means that the SDP solver ran out of memory. The SDP solver in SparseJSR uses a second-order method; a different solver (utilising a first-order method) could reduce memory usage at the cost of potential accuracy loss. Bold values represent stable systems under their corresponding EWHC, strategy, and actuator mode. Starred values represent stable systems inferred from Corollary 1. The JSR toolbox provides an accurate lower bound but a coarse upper bound. In contrast, the dense SOS method finds a better upper bound but takes more time. We compare the time needed to run both SOS methods, indicating with "×" the speedup factor of obtaining the sparse bound w.r.t. the dense one. All the upper bounds computed by the JSR toolbox are greater than 1, while all lower bounds are below 1, thus we cannot draw any conclusion using the JSR toolbox alone. For all EWHC m k H where m = 1 and 2 < k ≤ 6, the SOS upper bounds allow us to infer that the system is stable for all combinations of strategy and actuator mode, and also for k = 2 under the Skip-Next strategy. From Theorem 1, stability also holds for all constraints that are harder to satisfy; in particular, Corollary 1 implies stability for all m k H with m = 1 and k > 6. The speedup ratio grows as k increases, yielding a particularly high benefit of exploiting sparsity for the Skip-Next strategy with actuation zeroing.

VI. CONCLUSION

This paper proposes a switching stability analysis framework for LTI systems with arbitrary weakly-hard constraints, extending the weakly-hard model and providing an analytic stability bound.
The analysis allows us to assess whether computational errors (present in industrial controllers) affect the stability of the controlled systems. Future work will focus on the performance loss due to the presence of deadline misses following the extended weakly-hard model.

Definition 6 (Extended weakly-hard model τ ⊢ λ H ). A task τ may satisfy any combination of the four extended weakly-hard constraints (EWHC) λ H : (i) τ ⊢ m k H : in any window of k consecutive jobs, at most m intervals lack a job completion; (ii) τ ⊢ h k H : in any window of k consecutive jobs, at least h intervals have a job completion; (iii) τ ⊢ m k H : in any window of k consecutive jobs, at most m consecutive intervals lack a job completion; (iv) τ ⊢ h k H : in any window of k consecutive jobs, at least h consecutive intervals have a job completion; with m, h ∈ N, k ∈ N \ {0}, m ≤ k, and h ≤ k, while using strategy H to handle potential deadline misses.

Rule 1. For any string ω ∈ Σ (Skip-Next)^N , R may only directly follow M, or be the initial element of the string.

The extended weakly-hard model also inherits all the properties of the original weakly-hard model.
In particular, the satisfaction set of λ H can be defined for N ≥ 1 as S N (λ H ) = {ω ∈ Σ (H)^N | ω ⊢ λ H }, and the constraint domination still holds as λ H i ⪯ λ H j if S (λ H i ) ⊆ S (λ H j ).

TABLE I: RESULTS OBTAINED WITH OUR CONTROLLED EXAMPLE.

m, k | JSR[23] LB | Dense UB | Sparse UB | × || JSR[23] LB | Dense UB | Sparse UB | ×

Kill and zeroing || Kill and holding
1, 2 | 0.960 | 1.070 | 1.070 | 0.86 || 0.926 | 1.029 | 1.029 | 0.83
1, 3 | 0.920 | 0.995 | 0.995 | 0.83 || 0.894 | 0.971 | 0.971 | 0.77
1, 4 | 0.890 | 0.945 | 0.996 | 1.06 || 0.894 | 0.957 | 1.025* | 1.25
1, 5 | 0.890 | 0.922 | 0.983 | 1.96 || 0.894 | 0.948 | 1.008* | 2.25
1, 6 | 0.890 | 0.920 | 0.975 | 4.36 || 0.894 | 0.942 | 0.995 | 3.68
2, 3 | 0.983 | 1.124 | 1.124 | 0.67 || 0.956 | 1.085 | 1.085 | 0.80
2, 4 | 0.960 | 1.079 | 1.079 | 0.74 || 0.927 | 1.039 | 1.039 | 0.86
2, 5 | 0.939 | 1.039 | 1.142 | 2.09 || 0.905 | 1.002 | 1.105 | 1.58
2, 6 | 0.920 | 1.007 | 1.096 | 12.3 || 0.903 | 0.974 | 1.080 | 19.2

Skip-Next and zeroing || Skip-Next and holding
1, 2 | 0.922 | 0.924 | 0.924 | 5.40 || 0.958 | 0.958 | 0.958 | 4.43
1, 3 | 0.898 | 0.974 | 0.974 | 10.5 || 0.917 | 0.988 | 0.988 | 10.4
1, 4 | 0.898 | 0.963 | 0.963 | 18.2 || 0.890 | 0.940 | 0.940 | 15.9
1, 5 | 0.898 | 0.954 | 0.954 | 17.6 || 0.890 | 0.929 | 0.929 | 20.8
1, 6 | 0.898 | 0.946 | 0.947 | 20.9 || 0.890 | 0.927 | 0.927 | 25.8
2, 3 | 0.953 | 1.034 | 1.039 | 4.45 || 0.982 | 1.070 | 1.076 | 5.91
2, 4 | 0.922 | 1.033 | 1.040 | 23.9 || 0.958 | 1.079 | 1.086 | 24.2
2, 5 | 0.898 | 0.999 | 1.005 | 77.8 || 0.937 | 1.038 | 1.043 | 58.1
2, 6 | 0.907 | − | 1.007* | − || 0.917 | − | 0.991 | −

REFERENCES

[1] B. Akesson, M. Nasri, G. Nelissen, S. Altmeyer, and R. Davis. An empirical survey-based study into industry practice in real-time systems. In Real-Time Systems Symposium, 2020.
[2] G. Bernat, A. Burns, and A. Liamosi. Weakly hard real-time systems. IEEE Transactions on Computers, 50, 2001.
[3] A. Cervin. Analysis of overrun strategies in periodic control tasks.
IFAC Proceedings Volumes, 38(1), 2005.
[4] H. Choi, H. Kim, and Q. Zhu. Job-class-level fixed priority scheduling of weakly-hard real-time systems. In Real-Time and Embedded Technology and Applications Symposium, 2019.
[5] X. Dai. A Gel'fand-type spectral radius formula and stability of linear constrained switching systems. Linear Algebra and its Applications, 2012.
[6] M. Gaukler, T. Rheinfels, P. Ulbrich, and G. Roppenecker. Convergence rate abstractions for weakly-hard real-time control. arXiv preprint arXiv:1912.09871, 2019.
[7] M. Hertneck, S. Linsenmayer, and F. Allgöwer. Efficient stability analysis approaches for nonlinear weakly-hard real-time control systems. Automatica, 2021.
[8] C. Huang, K.-C. Chang, C.-W. Lin, and Q. Zhu. SAW: A tool for safety analysis of weakly-hard systems. In International Conference on Computer Aided Verification. Springer, 2020.
[9] C. Huang, W. Li, and Q. Zhu. Formal verification of weakly-hard systems.
In Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, 2019.
[10] V. Kozyakin. The Berger-Wang formula for the Markovian joint spectral radius. Linear Algebra and its Applications, 448, 2014.
[11] J. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 2001.
[12] H. Liang, Z. Wang, R. Jiao, and Q. Zhu. Leveraging weakly-hard constraints for improving system fault tolerance with functional and timing guarantees. In Conference On Computer Aided Design, 2020.
[13] H. Liang, Z. Wang, D. Roy, S. Dey, S. Chakraborty, and Q. Zhu. Security-driven codesign with weakly-hard constraints for real-time embedded systems. In International Conference on Computer Design, 2019.
[14] S. Linsenmayer and F. Allgower. Stabilization of networked control systems with weakly hard real-time dropout description. In Conference on Decision and Control, 2017.
[15] S. Linsenmayer, M. Hertneck, and F. Allgower. Linear weakly hard real-time control systems: Time- and event-triggered stabilization. IEEE Transactions on Automatic Control, 2020.
[16] M. Maggio, A. Hamann, E. Mayer-John, and D. Ziegenbein. Control-System Stability under Consecutive Deadline Misses Constraints. In Euromicro Conference on Real-Time Systems, 2020.
[17] P. Parrilo and A. Jadbabaie. Approximation of the joint spectral radius using sum of squares. Linear Algebra and its Applications, 2008.
[18] P. Pazzaglia, A. Hamann, D. Ziegenbein, and M. Maggio. Adaptive design of real-time control systems subject to sporadic overruns. In Design, Automation & Test in Europe Conference Exhibition, 2021.
[19] P. Pazzaglia, C. Mandrioli, M. Maggio, and A. Cervin. Deadline-Miss-Aware Control. In Euromicro Conference on Real-Time Systems, 2019.
[20] M. Philippe, R. Essick, G. Dullerud, and R. Jungers. Stability of discrete-time switching systems with constrained switching sequences. Automatica, 72, 2016.
[21] G. Rota and W. Strang. A note on the joint spectral radius. 1960.
[22] L. Schenato. To zero or to hold control inputs with lossy links? IEEE Transactions on Automatic Control, 54(5), 2009.
[23] JSR: A toolbox to compute the joint spectral radius.
G. Vankeerberghen, J. Hendrickx, and R. Jungers. In International Conference on Hybrid Systems Computation and Control, 2014.
[24] N. Vreman, A. Cervin, and M. Maggio. Stability and Performance Analysis of Control Systems Subject to Bursts of Deadline Misses. In Euromicro Conference on Real-Time Systems, 2021.
[25] N. Vreman, R. Pates, and M. Maggio. WeaklyHard.jl: Scalable analysis of weakly-hard constraints. In IEEE Real-Time and Embedded Technology and Applications Symposium, 2022.
[26] J. Wang, M. Maggio, and V. Magron. SparseJSR: A Fast Algorithm to Compute Joint Spectral Radius via Sparse SOS Decompositions. In American Control Conference, 2021.
[27] J. Wang, V. Magron, and J. Lasserre. TSSOS: A moment-SOS hierarchy that exploits term sparsity. SIAM Journal on Optimization, 2021.
[28] W. Xu, Z. Hammadeh, A. Kröller, R. Ernst, and S. Quinton. Improved deadline miss models for real-time systems using typical worst-case analysis. In Euromicro Conference on Real-Time Systems, 2015.
[29] Approximation of the constrained joint spectral radius via algebraic lifting.
X. Xu and B. Acikmese. IEEE Transactions on Automatic Control, 2020.
[30] X. Xu and Y. Hong. Matrix expression and reachability analysis of finite automata. Journal of Control Theory and Applications, 2012.
[]
[ "Radial Migration in Galactic Thick Discs", "Radial Migration in Galactic Thick Discs" ]
[ "Michael Solway \nDepartment of Physics & Astronomy\nRutgers University\n136 Frelinghuysen Road08854-8019PiscatawayNJUS\n", "J A Sellwood \nDepartment of Physics & Astronomy\nRutgers University\n136 Frelinghuysen Road08854-8019PiscatawayNJUS\n", "Ralph Schönrich \nMax-Planck-Institut für Astrophysik\nKarl-Schwarzschild-Str. 185741GarchingGermany\n" ]
[ "Department of Physics & Astronomy\nRutgers University\n136 Frelinghuysen Road08854-8019PiscatawayNJUS", "Department of Physics & Astronomy\nRutgers University\n136 Frelinghuysen Road08854-8019PiscatawayNJUS", "Max-Planck-Institut für Astrophysik\nKarl-Schwarzschild-Str. 185741GarchingGermany" ]
[ "Mon. Not. R. Astron. Soc" ]
We present a study of the extent to which the Sellwood & Binney radial migration of stars is affected by their vertical motion about the midplane. We use both controlled simulations in which only a single spiral mode is excited, as well as slightly more realistic cases with multiple spiral patterns and a bar. We find that rms angular momentum changes are reduced by vertical motion, but rather gradually, and the maximum changes are almost as large for thick disc stars as for those in a thin disc. We find that particles in simulations in which a bar forms suffer slightly larger angular momentum changes than in comparable cases with no bar, but the cumulative effect of multiple spiral events still dominates. We have determined that vertical action, and not vertical energy, is conserved on average during radial migration.
10.1111/j.1365-2966.2012.20712.x
[ "https://arxiv.org/pdf/1202.1418v1.pdf" ]
119,202,687
1202.1418
2816810f4d171e4de9ad9422d5b5f389eb1413c2
Radial Migration in Galactic Thick Discs

Michael Solway (Department of Physics & Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854-8019, US), J. A. Sellwood (Department of Physics & Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854-8019, US), Ralph Schönrich (Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching, Germany)

Mon. Not. R. Astron. Soc. 000 (2011). Printed 8 February 2012 (MN LaTeX style file v2.2). Keywords: galaxies: kinematics and dynamics - galaxies: evolution - galaxies: structure.

We present a study of the extent to which the Sellwood & Binney radial migration of stars is affected by their vertical motion about the midplane. We use both controlled simulations in which only a single spiral mode is excited, as well as slightly more realistic cases with multiple spiral patterns and a bar. We find that rms angular momentum changes are reduced by vertical motion, but rather gradually, and the maximum changes are almost as large for thick disc stars as for those in a thin disc. We find that particles in simulations in which a bar forms suffer slightly larger angular momentum changes than in comparable cases with no bar, but the cumulative effect of multiple spiral events still dominates. We have determined that vertical action, and not vertical energy, is conserved on average during radial migration.

INTRODUCTION

It has recently become clear that galaxy discs evolve significantly over time due to both internal and external influences (see van der Kruit & Freeman 2011; Sellwood 2010, for reviews). Scattering by giant molecular clouds (e.g. Spitzer & Schwarzschild 1953) is now thought to play a minor role (Hänninen & Flynn 2002), while spirals (Sellwood & Carlberg 1984; Binney & Lacey 1988; Sellwood & Binney 2002) and interactions (Helmi et al. 2006; Kazantzidis et al. 2008) are believed to cause more substantial changes.
The Milky Way is believed to have both a thin and a thick disc (Gilmore & Reid 1983; Munn et al. 2004; Jurić et al. 2008; Ivezić et al. 2008), as do perhaps many disc galaxies (Yoachim & Dalcanton 2006). Relative to the thin disc, the thick disc has a higher velocity dispersion, lags in its net rotational velocity (Chiba & Beers 2000), contains older stars with lower metallicities (Majewski 1993) and enhanced [α/Fe] ratios (Bensby et al. 2005; Reddy, Lambert & Allende Prieto 2006; Fuhrmann 2008). A number of different criteria have been used to divide stars into the two disc populations, and some of these trends may depend somewhat on whether a spatial, kinematic, or chemical abundance criterion is applied (Fuhrmann 2008; Schönrich & Binney 2009b; Lee et al. 2011). Recently, Bovy et al. (2011) suggest that there is no sharp distinction into two populations, but rather a continuous variation in these properties. Several models have been proposed for the formation of the thick disc: accretion of disrupted satellite galaxies (Abadi et al. 2003), thickening of an early thin disc by a minor merger (Quinn, Hernquist & Fullagar 1993; Villalobos & Helmi 2008), star formation triggered during galaxy assembly (Brook et al. 2005; Bournaud, Elmegreen & Martig 2009), and radial migration (Schönrich & Binney 2009a; Loebman et al. 2011). While it is likely that no single mechanism plays a unique role in its formation, our focus in this paper is on radial migration. As first shown by Sellwood & Binney (2002), spiral activity in a disc causes stars to diffuse radially over time. Stars near corotation of a spiral pattern experience large angular momentum changes of either sign that move them to new radii without adding random motion. As spirals are recurring transient disturbances that have corotation radii scattered over a wide swath of the disc, the net effect is that stars execute a random walk in radius with a step size ranging up to ∼ 2 kpc, which results in strong radial mixing.
Note that this mechanism is distinct from the "blurring" caused by epicycle oscillations of stars about their home radii, R h , where L z ^2 = R h ^3 |∂Φ/∂R| at R h in the midplane and Φ is the gravitational potential. Radial mixing is dubbed "churning" and causes the home radii themselves to change. Note that radial mixing caused by recurrent transient spirals has two related properties that distinguish it from other processes that have sometimes also been said to cause radial migration. The two distinctive properties are the absence of disc heating, already noted, and the fact that stars mostly change places, causing little change to the distribution of angular momentum within the disc and leaving the surface density profile unchanged. These are both features of scattering at corotation; spirals also cause smaller angular momentum changes at Lindblad resonances which do heat and spread the disc. Bar formation causes greater angular momentum changes than those of an individual spiral pattern (Friedli, Benz & Kennicutt 1994; Raboud et al. 1998; Grenon 1999; Debattista et al. 2006; Minchev et al. 2011; Bird, Kazantzidis & Weinberg 2011; Brunetti et al. 2011). However, because the bar persists after its formation, the associated angular momentum changes have a different character from those caused by a spiral that grows and decays quickly. Bar formation changes the radial distribution of angular momentum, and therefore also the surface density profile of the disc, as well as adding a substantial amount of radial motion (Hohl 1971). Although the bar may subsequently settle, grow, and/or slow down over time, the main changes associated with its formation probably occur only once in a galaxy's lifetime (for a dissenting view see Bournaud & Combes 2002) whereas spirals seem to recur indefinitely as long as gas and star formation sustain the responsiveness of the disc.
Thus angular momentum changes in the outer disc beyond the bar should ultimately be dominated by the effects of radial mixing by dynamically-unrelated transient spirals, as we find in this paper. Note that Minchev & Famaey (2010) studied the separate physical process of resonance overlap using steadily rotating potentials to represent a bar and spiral, and therefore did not address possible mixing by transient spirals in a barred galaxy model. Both Quillen et al. (2009) and Bird, Kazantzidis & Weinberg (2011) find that external bombardment of the disc can enhance radial motion through the increase in the central attraction and also through possible angular momentum changes. However, the importance of bombardment is unclear since satellites may be dissolved by tidal shocking (D'Onghia et al. 2010) before they settle, and furthermore satellite accretion to the inner Milky Way over the past ∼ 10 10 years is strongly constrained by the dominant old age of thick disc stars (e.g. Wyse 2009). Sellwood & Binney (2002) discovered spiral churning in 2D simulations, and it has been shown to occur in fully 3D simulations (Roskar et al. 2008a,b;Loebman et al. 2011). Schönrich & Binney (2009a) present a model for Galactic chemical evolution that includes radial mixing, and point out (Schönrich & Binney 2009a,b;Scannapieco et al. 2011) that it naturally gives rise to both a thin and a thick disc, under the assumption that thick disc stars experience a similar radial churning. Loebman et al. (2011) show that extensive spiral activity in their simulations causes a thick disc to develop, and present a detailed comparison with data (Ivezić et al. 2008) from SDSS (York et al. 2000). Sánchez-Blázquez et al. (2009) and Martínez-Serrano et al. (2009) also generated galaxies with realistic break radii in the exponential surface brightness profiles using cosmological N -body hydrodynamic simulations that included radial migration. 
However, Bird, Kazantzidis & Weinberg (2011) report that no significant thick discs developed in their simulations. Lee et al. (2011) find evidence for radial mixing in the thin disc and for a downtrend of rotational velocity with metallicity. The presence of such a trend does not rule out migration in the thick disc, but shows that mixing for the oldest stars cannot be complete. Bovy et al. (2011) find a continuous change of abundance-dependent disc structure with increasing scale height and decreasing scale length which they note is the "almost inevitable consequence of radial migration". The decreasing metallicity gradient with age of Milky Way disc stars can also be attributed to radial mixing (Yu et al. 2012). While Loebman et al. (2011) present circumstantial evidence for radial migration in the thick disc, they do not show explicitly that it is occurring, neither do they attempt to quantify the extent to which it may be reduced by the weakened responses of thick disc stars to spiral waves in the thin disc. Bird, Kazantzidis & Weinberg (2011) find that mixing is more extensive when spiral activity is invigorated by star formation, although the level of spiral activity is strongly dependent on the "gastrophysical" prescription adopted. They show that mixing persists even for particles with large oscillations about the mid-plane, and they determine migration probabilities from their simulations. In this paper, we set ourselves the limited goal of determining the extent of radial migration in isolated, collisionless discs with various thicknesses and radial velocity dispersions that are subject to transient spiral perturbations. Following Sellwood & Binney (2002), we first present controlled simulations of two-component discs constructed so as to support an isolated, large-amplitude spiral wave, in order to study the detailed mechanism of migration in the separate thin and thick discs. 
We also report the responses of two-component discs to multiple spiral patterns, both with and without a bar.

2 DESCRIPTION OF THE SIMULATIONS

All our models use the constant velocity disc known as the Mestel disc (Binney & Tremaine 2008, hereafter BT08). Linear stability analysis by Toomre (1981, see also Zang 1976 and Evans & Read 1998) revealed that this disc with moderate random motion lacks any global instabilities whatsoever when half its mass is held rigid and the centre of the remainder is cut out with a sufficiently gentle taper.¹ We therefore adopt this stable model for the thin disc, and superimpose a thick disc of active particles that has 10% of the mass of the thin disc. The remaining mass is in the form of a rigid halo, set up to ensure that the total central attraction in the midplane is that of the razor-thin, full-mass, untapered Mestel disc.

2.1 Disc set up

Ideally, we would like to select particles from a distribution function (DF) for a thickened Mestel disc. Toomre (1982) found a family of flattened models that are generalizations of the razor-thin Zang discs that have the isothermal sech² z vertical density profile (Spitzer 1942; Camm 1950). Unfortunately, these two-integral models have equal velocity dispersions in the radial direction and normal to the disc plane (see BT08), whereas we would like to set up models with flattened velocity ellipsoids. Since no three-integral DF for a realistic disc galaxy model is known, as far as we are aware, we start from the two-integral DF for the razor-thin disc (Zang 1976; Toomre 1977):

fZang(E, Lz) ∝ Lz^q e^(−E/σR²),   (1)

where E is a particle's specific energy and Lz its specific z-angular momentum. The free parameter q is related to the nominal radial component of the velocity dispersion through

σR = V0 (1 + q)^(−1/2),   (2)

with V0 being the circular orbital speed at all radii.
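Equation (2) is easy to sanity-check numerically; the sketch below (function names are ours, not the authors') evaluates the nominal radial dispersion for a given DF index q and inverts the relation:

```python
def sigma_R(q, V0=1.0):
    """Nominal radial velocity dispersion of the Zang DF, eq. (2):
    sigma_R = V0 * (1 + q)**(-1/2)."""
    return V0 * (1.0 + q) ** -0.5

def q_from_sigma(sigma, V0=1.0):
    """Invert eq. (2): q = (V0/sigma)**2 - 1."""
    return (V0 / sigma) ** 2 - 1.0
```

For example, a nominal dispersion of 0.25 V0 corresponds to q = 15; larger q means a dynamically cooler disc.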
The value of Toomre's local stability parameter for a single component, razor-thin Mestel disc would be

Q ≡ σR/σR,min = 2^(3/2) π / [3.36 f (1 + q)^(1/2)],   (3)

where f is the active fraction of mass in the component; note that these expressions for both σR and Q are independent of radius. A composite model having two thickened discs, such as we employ here, will have some effective Q that is not so easily expressed. We choose different values of σR for the thin and thick discs, given in Table 1 below, in order that the thick disc has a greater radial velocity dispersion, as observed for the Milky Way.

We limit the radial extent of the discs by inner and outer tapers

f0(E, Lz) = fZang / {[1 + (Li/Lz)⁴][1 + (Lz/Lo)⁶]},   (4)

where Li and Lo are the central angular momentum values of the inner and outer tapers respectively. The exponents are chosen so as not to provoke instabilities (Toomre, private communication). We employ the same taper function for both discs and choose Lo = 15Li. We further restrict the extent of the disc by eliminating all particles whose orbits would take them beyond 25Ri, where Ri = Li/V0 is the central radius of the inner taper. This additional truncation is sufficiently far out that the active mass density is already substantially reduced by the outer taper.

We thicken these disc models by giving each the scaled vertical density profile

ρ(z̃) ∝ 1 / (e^(|z̃|/2) + 0.2 e^(−5|z̃|/2))²,   (5)

where z̃ = z/z0 with z0 independent of R. While both this function and the usual sech² z̃ function have harmonic cores, we prefer eq. (5) because it approaches ρ ∝ e^(−|z̃|) more rapidly, as suggested by data (e.g. Kregel, van der Kruit & de Grijs 2002); z0 is therefore the exponential scale height when z ≫ z0. We set the vertical scale height of the thick disc to be three times that of the thin disc, as suggested for the Milky Way (Jurić et al. 2008). We estimate the equilibrium vertical velocity dispersion at each z-height by integrating the 1D Jeans equation (BT08, eq.
4.271) in our numerically-determined potential. This procedure is adequate when the radial velocity dispersion is low, but the vertical balance degrades in populations having larger initial radial motions, which flare outwards as the model relaxes from initial conditions, as shown below.

The tapered thin Mestel disc has a surface density that declines with radius as shown in Fig. 1. While there is no radial range that is closely exponential, we estimate an approximately equivalent exponential radial scale length Rd = 4.0Ri as shown by the blue line. Accordingly, we choose z0 = 0.4Ri for the thin disc so that the ratio Rd/z0 is similar to that of the Milky Way and the average volume-corrected ratio of 7.3 ± 2.2 (1σ) found by Kregel, van der Kruit & de Grijs (2002) for 34 nearby galaxies. The values we adopt for each model are given in Table 1 below.

[Figure 1. Initial surface density profiles of the tapered thin (top black curves) and thick (bottom green curves) Mestel discs in simulation M2 (solid curves) measured from the particles. The dashed red curves, which are almost perfectly overlaid by the solid curves, show the expected surface density of the tapered discs from integrating the DF over all velocities, while the dotted curves indicate the density profiles of the corresponding non-tapered discs. The groove in the thin disc is centred on R = 6.5Ri and is broadened by epicycle excursions. The blue line of slope −0.25 indicates the adopted exponential profile of the thin disc. Axes: ln[Σ/(V0²/GRi)] versus R/Ri.]

The radial gravitational potential gradient in the midplane of our model is less than that of the full-mass, razor-thin, infinite Mestel disc as a result of the reduced disc surface density, the angular momentum tapers, the finite thickness of the discs, and gravitational force softening.
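Equations (3) and (4) are simple closed forms, and a short sketch (helper names ours; the q and f values are illustrative) confirms that Q is radius-independent in form and that the taper only suppresses the DF well inside Li and well outside Lo = 15Li:

```python
import math

def toomre_Q(f, q):
    """Local stability parameter of eq. (3) for a razor-thin Mestel disc
    with active mass fraction f and DF index q."""
    return 2.0 ** 1.5 * math.pi / (3.36 * f * math.sqrt(1.0 + q))

def taper(Lz, Li=1.0, Lo=15.0):
    """Multiplicative taper of eq. (4) applied to the Zang DF,
    with Lz in units of Li."""
    return 1.0 / ((1.0 + (Li / Lz) ** 4) * (1.0 + (Lz / Lo) ** 6))
```

The taper is essentially unity over the body of the disc (e.g. taper(4.0) > 0.99) and falls steeply outside the two angular momentum cut-offs.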
In order to create an approximate equilibrium, we therefore add a rigid central attraction to the self-consistent forces from the particles in the discs at each step. We tabulate the supplementary central attraction needed at the initial moment to yield a radial force per unit mass of −V0²/R in the midplane, and apply this unchanging extra term as a spherically symmetric, rigid central attraction. We find that our model is close to equilibrium, with an initial virial ratio of the particles of ≈ 0.52. This value adjusts quickly to reach a steady virial ratio of 0.50 within the first 32 dynamical times (defined below). As noted above, the initial imbalance seems to arise mostly from the vertical velocity structure, for which we adopted the 1D Jeans equation. The adjustment of the model to equilibrium is illustrated in Fig. 2; the bottom left panel shows that thicknesses of both discs change as the system relaxes, increasing in the outer parts and decreasing in the inner parts; notice that the change in thickness is least over the radius range 2Ri ≲ R ≲ 10Ri, where the surface mass density is closest to its untapered value (dotted curves in Fig. 1).

[Table 1. Parameters of all the simulations described in this paper. The first column gives the simulation designation used in the text. The next five columns give the properties of each disc, one for each line: f is the mass fraction, z0 its vertical scale height, σR is the nominal radial velocity dispersion, and N is the number of particles in the disc. The final four columns give m, the active sectoral harmonic(s), ∆z the vertical spacing of the grid planes, ε the softening length, and the grid size. The "massless" discs of simulations M4b, M2b, and M2c describe massless thick discs composed of test particles. z0, ∆z, and ε are in terms of Ri, and σR is in terms of the circular velocity V0. Columns: Simulation, Disc, f, z0/Ri, σR/V0, N, m, ∆z/Ri, ε/Ri, Rings×Spokes×Planes.]
The changes, which are larger for the radially-hotter thick disc, result from the radial excursions of the particles, which may take them far from their initial radii for which the vertical velocity was set. However, neither the radial balance (top left panel) nor the vertical velocity dispersion (right panel) of the thick disc change significantly from their initial values. After this initial relaxation from initial conditions, we do not observe any significant flaring. A negligibly small fraction (< 1%) of particles escape from our grid by the end of the simulation, and most particle loss takes place as the model settles.

Following Sellwood & Binney (2002), we seed a vigorously unstable spiral mode by adding a Lorentzian groove in angular momentum to the DF of the thin disc only:

f(E, Lz) = f0(E, Lz) [1 + β wL² / ((Lz − L*)² + wL²)].   (6)

Here β, a negative quantity, is the depth of the groove, wL is its width, and L* is the angular momentum of the groove centre. This change to the DF seeds a predictable spiral instability (Sellwood & Kahn 1991). The groove in the thin disc has the parameters β = −0.9, wL = 0.3RiV0, and L* = 6.5RiV0. We find that a deeper and wider groove is needed than that used by Sellwood & Binney (2002) in order to excite a strong spiral in our thickened disc. Since a groove of this kind will provoke instabilities at many sectoral harmonics, m, we restrict disturbance forces from the particles to a single non-axisymmetric sectoral harmonic to prevent other modes from growing.

Corotation for a global spiral mode excited by this groove is at a radius somewhat greater than L*/V0, where local theory would predict it. A large-scale mode is not symmetric about the groove centre due to the geometric variation of surface area with radius, an effect that is neglected in local theory. However, the radius of corotation approaches the local theory prediction for modes of smaller spatial scale, i.e. as m → ∞.
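With the quoted groove parameters (β = −0.9, wL = 0.3RiV0, L* = 6.5RiV0), the Lorentzian factor in eq. (6) depresses the DF to 1 + β = 10% of its ungrooved value at the groove centre and to 55% one groove width away; a minimal sketch (function name ours):

```python
def groove_factor(Lz, beta=-0.9, wL=0.3, Lstar=6.5):
    """Lorentzian groove of eq. (6): the factor multiplying f0(E, Lz).
    Lz, wL, and Lstar are in units of Ri*V0."""
    return 1.0 + beta * wL**2 / ((Lz - Lstar) ** 2 + wL**2)
```

Far from L* the factor tends to unity, so the groove is a strictly local modification of the DF.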
As in Sellwood & Binney (2002), we choose the circular velocity V0 to be our unit of velocity, Ri our unit of length, the time unit or dynamical time is τ0 = Ri/V0, our mass unit is M0 = V0²Ri/G, and Li is our unit of angular momentum. One possible scaling to physical units is to choose Ri = 0.75 kpc, with Rd = 4.0Ri being the equivalent scale length of the thin disc, and τ0 = 3.0 Myr, leading to V0 = 244 km s⁻¹ and M0 = 1.04 × 10¹⁰ M⊙.

[Figure 2 caption (fragment): the left panel shows the initial ⟨z²⟩^1/2 of the thin (bottom curves) and thick (top curves) discs (solid black) compared to that at t = 32Ri/V0 (dashed red). Axes: ⟨z²⟩^1/2/Ri versus R/Ri.]

2.2 Numerical procedure

We use the 3D polar grid described in Sellwood & Valluri (1997) to determine the gravitational field of the particles. This "old-fashioned" method is not only well suited to the problem, but also has the advantage of being tens of times faster than the "modern" methods recently reviewed by Dehnen & Read (2011). We employ a grid having 120 rings, 128 spokes, and 243 vertical planes, and adopt the cubic spline softening rule recommended by Monaghan (1992), which yields the full attraction of a point mass at distances greater than two softening lengths (2ε). The Plummer rule used in Sellwood & Binney (2002) is suited for razor-thin discs where it mimics the effect of disc thickness but, because it weakens inter-particle forces on all scales, it is unsuited for 3D simulations where disc thickness is already included in the particle distribution. The value of ε, given in Table 1, exceeds the vertical grid spacing in order to minimize the grid dependence of the inter-particle forces.

In the simulations described in §§3 & 4, we use quiet starts (Sellwood & Athanassoula 1986) for both discs to reduce the initial amplitude of the seeded unstable spiral mode far below that expected from shot noise.
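The physical scaling quoted above follows directly from V0 = Ri/τ0 and M0 = V0²Ri/G; the quick check below uses standard values of the unit conversions and of G (our own constants, not taken from the paper):

```python
# Verify the adopted scaling: Ri = 0.75 kpc and tau0 = 3.0 Myr
# imply V0 ~ 244 km/s and M0 ~ 1.04e10 Msun.
KPC_KM = 3.0857e16   # km per kpc
MYR_S = 3.1557e13    # seconds per Myr
G = 4.3009e-6        # gravitational constant in kpc (km/s)^2 / Msun

Ri_kpc, tau0_Myr = 0.75, 3.0
V0_kms = Ri_kpc * KPC_KM / (tau0_Myr * MYR_S)   # unit of velocity
M0_Msun = V0_kms**2 * Ri_kpc / G                # unit of mass
```

Both numbers reproduce the values quoted in the text to within rounding.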
In 3D, this requires many image particles for each fundamental particle: we place two at each (R, φ, z) position with oppositely directed z-velocities and two more reflected about the midplane at the point (R, φ, −z). We then place images of these four particles at equal intervals in φ, each set having identical velocity components in cylindrical polar coordinates. For these simulations, the thin disc has 1 200 000 particles and the thick disc 1 800 000. All the particles in a single population have equal mass, but particle masses differ between populations in order to create the desired ratio of disc surface densities.

We evaluate forces from the particles at intervals of 0.02τ0, and step forward some of the particles at each evaluation. Since the orbital periods of particles span a wide range, we integrate their motion when R > 2Ri using longer time steps in a series of five zones with the step doubling in length for each factor 2 in radius (Sellwood 1985). We also subdivide the time step for particles within R = 0.5Ri without updating forces, with further decreases by a factor of 2 for every factor of 2 decrease in radius (Shen & Sellwood 2004). Note that very few particles have R < 0.5Ri, which is well within the inner taper where the rigid component of the force dominates. Tests with shorter time steps yielded similar results.

2.3 List of simulations

For convenience, we summarize the parameters of all the simulations presented in this paper in Table 1. Simulation T is constrained to remain axisymmetric, while all those whose identifier begins with 'M' are highly controlled experiments designed to support a single spiral instability. We first present, in §3, a detailed description of M2, which supports a bisymmetric spiral, and briefly compare it to simulation T. Variants of M2, with many populations of test particles are presented in §3.4, while simulation TK, which has a thick disc only, is described in §3.5.
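The quiet-start image construction described above can be sketched for one fundamental particle (a minimal illustration with our own function name, not the production code): four z/vz reflections, each replicated at equal azimuthal intervals, so that low-order azimuthal Fourier harmonics of the particle distribution cancel to machine precision while harmonics at multiples of the replication order survive:

```python
import cmath, math

def quiet_start_images(R, phi, z, vR, vphi, vz, n_phi):
    """Image set for one fundamental particle: two z-velocity signs, two
    midplane reflections, each copied at n_phi equal spacings in phi."""
    images = []
    for zz, vzz in [(z, vz), (z, -vz), (-z, vz), (-z, -vz)]:
        for k in range(n_phi):
            images.append((R, phi + 2.0 * math.pi * k / n_phi, zz, vR, vphi, vzz))
    return images
```

With n_phi = 4, for instance, the m = 2 moment of each image set sums to zero exactly, which is why the seeded m = 2 mode starts far below the shot-noise level.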
Simulations that support a single spiral of higher angular periodicity are motivated and described in §4. The last three simulations in the Table, with identifier beginning with 'U', are uncontrolled experiments presented in §5 that explore the consequences of multiple spirals, with and without a bar.

3 A SINGLE BISYMMETRIC SPIRAL

Our first objective is to study radial migration in both the thick and thin discs due to a single spiral disturbance. We therefore present simulation M2, which is designed to support an isolated m = 2 spiral instability. We measure

Am(t) = ∫_{R1}^{R2} ∫_0^{2π} Σ(R, φ, t) e^(imφ) dφ dR,   (7)

where Σ(R, φ, t) is the vertically-integrated mass surface density of the particles at time t, and we generally choose R1 = 1.5Ri and R2 = 19Ri. The top panel of Fig. 3 shows the time evolution of A2/A0, revealing that the mode grows exponentially, peaks, and then decays non-exponentially to a trough. The peak is 2.55 times higher than the minimum of the following trough. Continued evolution reveals that the amplitude rises again due to the growth of a secondary wave. In order to isolate the single initial spiral, we stop the simulation at the trough and compare quantities, such as the specific angular momenta of particles, at this final time with those at the initial time t = 0.0Ri/V0. The bottom panel plots the same quantity from simulation T in which disturbance forces from the particles were restricted to the axisymmetric (m = 0) term; the very slow rise is caused by the gradual degradation of the quiet start.

Fig. 4 shows snapshots of the thin and thick discs of M2 at six different times of the simulation that are marked by the red arrows in Fig. 3. We find the period of exponential growth can be very well fitted by a single mode, using the apparatus described in Sellwood & Athanassoula (1986). Table 2 reports the fitted pattern speed Ωp and growth rate γ, together with other parameters of the spiral in M2. Furthermore, the non-linear evolution visible in Fig.
4 shows no evidence of a bar. Thus, the selected time period of the simulation M2 presents the opportunity to study radial migration due to a single, well isolated spiral wave. Note that the spiral pattern makes a full rotation every 2π/Ωp ≈ 45.5Ri/V0.

[Figure 4. Snapshots of the thin (left column) and thick (right column) discs in simulation M2 at the six times marked in Fig. 3, t/(Ri/V0) = 0.0, 131.2, 216.0, 300.8, 342.4, and 387.2. One in every 24 particles is plotted for the thin disc, and one in every 36 for the thick. The side views are shown only at the initial time because they change little throughout the simulation.]

[Table 2. The second column gives the angular periodicity of the spiral or the greatest active sectoral harmonic m in the simulation. The next three columns give the pattern speed Ωp, growth rate γ, and the corotation radius Rc of the spiral obtained by fitting its exponential growth using the method in Sellwood & Athanassoula (1986). The next two columns provide the spiral's peak amplitude (Am/A0)peak and its ratio to the trough amplitude, (Am/A0)peak/(Am/A0)trough. And the last column gives the duration tp of the peak amplitude, which we measure between the moment the amplitude reaches the trough and the moment preceding the peak at which the amplitude is at the same level as the trough amplitude. The second number in the peak duration column is the time in physical units using the adopted scaling at the end of section 2.1.]

3.1 Angular momentum changes

Fig. 5 shows the change in the specific z-angular momenta ∆Lz of the particles in the thin disc (top) and the thick disc (middle) of simulation M2 against their initial Lz. The deficiency of particles at Lz(t = 0) = L* = 6.5RiV0 in the top panel is due to the groove in the thin disc. The vertical lines show the locations of the corotation (solid blue) and the Lindblad resonances (dashed green) for nearly circular orbits. The maximum changes in angular momentum occur near corotation, and lie close to the solid blue line of slope −2; these particles cross corotation, in both directions, to about the same radial distance away from it as they were initially. As for the razor-thin disc, we find that large angular momentum changes occur only around the time that the spiral saturates, for reasons explained by Sellwood & Binney (2002). While many particles near the inner Lindblad resonance lose angular momentum, only a tiny fraction end up on retrograde orbits. Note also that ⟨(∆Lz)²⟩^1/2 ≈ 9.0 × 10⁻⁶ RiV0 in simulation T, where non-axisymmetric forces were eliminated, showing that changes in Lz due to orbit integration and noise errors are tiny in comparison to those caused by the spiral.

For each particle, we determine the initial value of

Z = ½vz² + ½ν²z²,   (8)

where vz is the vertical velocity component at distance z from the midplane and ν is the vertical frequency measured in the midplane at the particle's initial radius R. For particles whose vertical and radial oscillations are small enough to satisfy the epicycle approximation, Z = Ez,epi, the energy of its vertical oscillation; the vertical potential is roughly harmonic for |z| ≲ 0.4Ri. Even though the epicycle approximation is not satisfied for the majority of particles, we compute an initial notional vertical amplitude

ζ = (2Z/ν²)^1/2,   (9)

for them all. The value of ζ defined in this way, i.e.
at the initial moment only, yields a convenient approximate ranking of the vertical oscillation amplitudes of the particles, although it is clear that in most cases ζ < zmax, the maximum height a particle may reach. Note that since each disc component in our models has an initial thickness that is independent of radius, the distribution of ζ is independent of Lz(t = 0). Notice from Fig. 5 that changes for the thick disc particles are only slightly smaller than those for thin disc particles. In order to emphasize this point, the bottom panel displays only those thick disc particles for which ζ > 1.2Ri, or one scale height of the thick disc, revealing that even for these particles changes can be almost as large as those in the thin disc.

Fig. 6 shows the evolution of the radial velocity dispersion for the thin and thick discs. As expected (Sellwood & Binney 2002), the large angular momentum changes near corotation cause little heating. Some heating occurs near the Lindblad resonances, and is greater near the inner resonance, again as expected. Table 3 lists the root mean square, maximum positive, and maximum negative changes in angular momentum for both all the particles and only those that satisfy ζ > z0 in each disc. Fig. 7 shows the 5th and 95th percentile values of the changes in angular momentum as a function of initial notional vertical amplitude, ζ (eq. 9), using bin widths of 0.3Ri. The effect of radial migration seems to decrease almost linearly with increasing ζ.

3.2 Distribution of angular momentum change

Radial migration should also be weaker for particles having larger radial oscillations or epicycles of large amplitude. As reasoned by Sellwood & Binney (2002), particles on more eccentric orbits cannot hold station with a steadily rotating spiral, because their angular velocities vary significantly as they oscillate radially. Fig.
8, which is for only those particles in the angular momentum range 2.5RiV0 ≤ Lz(t = 64) ≤ 10.0RiV0, confirms that the largest angular momentum changes occur among particles having the least eccentric orbits.² It shows angular momentum changes from, and eccentricities at, time t = 64Ri/V0, which is after the model has settled from its mild initial imbalance, but before any substantial angular momentum changes have occurred. We define eccentricity as ε = (Ra − Rp)/(Ra + Rp), in which Ra and Rp are respectively the initial apo-centre and peri-centre distance of the orbit of a particle having the same Lz, but whose motion is confined to the midplane of the axisymmetric potential.

[Figure 8 caption (fragment): ∆Lz at t = 64Ri/V0 of particles having 2.5RiV0 ≤ Lz(t = 64) ≤ 10.0RiV0 in simulation M2 as a function of their eccentricities at t = 64Ri/V0. The plot for the thin disc is quite similar. The red horizontal line shows zero angular momentum change.]

3.3 Effect of radial migration on vertical oscillations

In the previous subsection, we showed how the particles' angular momentum changes vary with initial amplitude of vertical motion. Here, we present the converse: how the vertical oscillations are affected by the radial excursions. Since the notional amplitude of vertical motion ζ (eq. 9) is accurate only in the epicycle approximation, we determine a particle's actual maximum vertical excursion, zmax, by integrating its motion in a frozen, azimuthally-averaged potential for many radial periods. We do this twice for each particle in simulation M2, at time t = 64Ri/V0 starting from the particle's phase space coordinates in the frozen potential at that moment and again at the final time t = 387.2Ri/V0.

Fig. 9 shows that, on average, the vertical excursions of particles increase for those that move radially outwards and decrease for those that move inwards. This is expected, because restoring forces to the mid-plane are weaker at larger radii.
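Both orbit diagnostics used in this section follow from closed-form definitions: eqs (8)-(9) combine to ζ = [z² + (vz/ν)²]^1/2, and the eccentricity needs the turning points of the planar orbit, which for the Mestel potential Φ = V0² ln R are the roots of E = Lz²/(2R²) + V0² ln R. A self-contained sketch (our own implementation, assuming the logarithmic midplane potential; not the authors' code):

```python
import math

def notional_vertical_amplitude(z, vz, nu):
    """Eqs (8)-(9): Z = vz^2/2 + nu^2 z^2/2 and zeta = (2Z/nu^2)^(1/2)."""
    return math.sqrt(z**2 + (vz / nu) ** 2)

def eccentricity(R, vR, vphi, V0=1.0):
    """eps = (Ra - Rp)/(Ra + Rp) for a planar orbit in Phi = V0^2 ln R."""
    Lz = R * vphi
    E = 0.5 * (vR**2 + vphi**2) + V0**2 * math.log(R)
    Rg = Lz / V0  # guiding radius: minimum of the effective potential

    def radial_ke(r):  # E - Phi_eff(r); zero at the turning points
        return E - (Lz**2 / (2.0 * r**2) + V0**2 * math.log(r))

    def bisect(a, b):  # radial_ke changes sign between a and b
        for _ in range(100):
            m = 0.5 * (a + b)
            if (radial_ke(a) > 0.0) == (radial_ke(m) > 0.0):
                a = m
            else:
                b = m
        return 0.5 * (a + b)

    Rp = bisect(1e-3 * Rg, Rg)  # peri-centre lies below the guiding radius
    Ra = bisect(Rg, 1e3 * Rg)   # apo-centre lies above it
    return (Ra - Rp) / (Ra + Rp)
```

A circular orbit has both turning points at Rg, so ε → 0, while increasing the radial velocity at fixed Lz widens the orbit and raises ε.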
For both discs, the mean (solid green) and median (dashed blue) curves show a roughly constant ∆zmax for a wide range of ∆Lz. For the thick disc, this constant ∆zmax is about twice as great as that for the thin. While the large majority of the particles lie in the contoured region, the outliers exhibit significant substructure. The dense group of points near a line of slope of −1 in the second quadrant are particles with initial home radii within and near the inner m = 2 vertical resonance³, which lies at the radius 2.23Ri, not far from the radial inner Lindblad resonance at 2.12Ri. Usually ν > κ, which causes vertical resonances to be found significantly farther from corotation than the radial resonances, but in our case the inner taper reduces the inner surface density so that ν ∼ κ in this part of the disc. Thus, these particles are scattered vertically at the inner vertical resonance at the same time as they lose angular momentum at the radial inner Lindblad resonance.

[Table 3. Changes in angular momentum resulting from the spirals and bar. For each disc of every simulation, the first numbers in the third, fourth, and fifth columns give the root mean square, maximum positive, and maximum negative changes in angular momentum respectively. We calculate these for all the particles (except those that escaped the grid) for which the initial angular momentum lies in an interval of the spiral's main influence around its corotation. This interval, in terms of RiV0, is [2.5, 10] for simulations M2, M2b, and M2c, [4.5, 9.5] for M3, [5.0, 8.5] for M4 and M4b, and [3.5, 9.5] for TK. We do not confine Lz(t = 0) to such an interval for simulations UC, UCB1, and UCB2 since the influence of their spirals and bar spans almost the entire range. The second numbers in the last three columns give the same results but only for particles having notional vertical amplitude ζ > z0. Columns: Disc, ⟨(∆Lz)²⟩^1/2/RiV0, Max ∆Lz/RiV0, Max −∆Lz/RiV0.]
Another outlying group can be seen in the first quadrant, with large ∆zmax for small ∆Lz; these particles have quite eccentric initial orbits that have particular radial phases just before the spiral saturates. Either they are at their pericentres and lie near corotation just trailing either spiral arm, or they are at their apocentres and surround the outer ends of the spiral arms. Being at these special locations when the spiral wave is strongest gives them instantaneous angular frequencies about the centre that cause them to experience more nearly steady, and not oscillatory, torques from the perturbation, leading to some angular momentum gain. Thus these particles move onto even more eccentric orbits and their increased zmax occurs at their new larger apocentres.

3.4 Effects of disc thickness and radial velocity dispersion

[Figure 10. The filled symbols show ⟨(∆Lz)²⟩^1/2 as a function of vertical thickness (bottom axis) of particle populations of simulation M2b. The black squares are for all the particles in the angular momentum range 2.5RiV0 ≤ Lz(t = 0) ≤ 10.0RiV0, while the red triangles are for only those particles having ζ > z0. The line is a least-squares fit to the black squares of the form ⟨(∆Lz)²⟩^1/2 ∝ e^(−z0/5.95Ri). The green plus symbols show the variation of ⟨(∆Lz)²⟩^1/2 with initial radial velocity dispersion (top axis) of all the particles in the same Lz(t = 0) range from populations in simulation M2c. The blue crosses are for only those with ζ > z0 = 1.2Ri.]

Our somewhat surprising finding from simulation M2 is that angular momentum changes in the thick disc are only
Despite the fact that thick disc particles both rise to greater z heights and have larger epicycles, on average, than do thin disc particles, we observe only a mild decrease in their response to spiral forcing. This finding suggests that the potential variations of a spiral having a large spatial scale, such as the m = 2 spiral mode in simulation M2, couple well to particles having large vertical motions and epicycle sizes. To provide more detailed information about how the extent of angular momentum changes vary with disc thickness, we added seven test particle populations to some simulations. In M2b, all test particle populations have the same initial radial velocity dispersion as the massive thick discs, but have scale heights in the range 0.5Ri z0 2.4Ri. Simulation, M2c, employs seven test particle populations having the same scale height (z0 = 1.2Ri) as the massive thick disc, but with differing σR. Being test particles, they do not affect the dynamics of the spiral instability, but merely respond to the potential variations that arise from the instability in the massive components. These simulations have identically the same physical properties as M2 but lower numerical resolution as summarized in Table 1; the reduced numerical resolution remains adequate since the fitted spiral mode is little changed from that in simulation M2 ( Table 2). Variations of (∆Lz) 2 1/2 with both disc thickness, at fixed radial velocity dispersion, and of radial dispersion at fixed thickness, are displayed in Fig. 10. The decrease is somewhat more rapid in subpopulations of particles that start with a notional vertical amplitude ζ > z0, as seems reasonable. The fitted line indicates that (∆Lz) 2 1/2 decays approximately exponentially with disc thickness with a scale that can be related to theory -see §4.3. 
Table 3 gives the plotted root mean square, as well as the maximum positive and maximum negative ∆Lz, for all the particles of each population and separately for those with ζ > z0 of each disc. We caution that the information in Table 3 and in Fig. 10 quantifies how the responsiveness of a test particle population scales when subject to a fixed perturbation. Self-consistent spiral perturbations may differ in strength, spatial scale, and/or time dependence, causing quite different angular momentum changes.

3.5 Thick disc only

For completeness, we also present simulation TK, which has a single, half-mass active disc with a substantial thickness. We inserted the same initial groove, and restricted forces to m = 2 only. It is interesting that a groove in the thick disc still creates a spiral instability, but one that grows less rapidly and saturates at a lower amplitude than we found in M2. As a consequence, the spread in ⟨(∆Lz)²⟩^1/2 is about half as large as in M2. Again we find that the radial migration is reduced by increasing disc thickness, but not inhibited entirely.

4 OTHER SIMULATIONS

The potential of a plane wave disturbance in a thin sheet having a sinusoidal variation of surface density Σa in the x-direction is

Φa(x, z) = −(2πGΣa/|k|) e^(ikx) e^(−|kz|),   (10)

where k is the wavenumber (BT08, eq. 5.161). The exponential decay away from the disc plane is steeper for waves of smaller spatial scale, i.e. larger |k|. While spirals are not simple plane waves in a razor-thin sheet, this formula suggests that we should expect radial migration to be weakened more by disc thickness for spirals of smaller spatial scale or higher angular periodicity; note that the azimuthal wavenumber kφ = m/R. The simulations in this section study the effect of changing this parameter.

4.1 Spirals of different angular periodicities

We wish to create spiral disturbances that are similar to that in M2, but have higher angular periodicities.
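The factor e^(−|kz|) in eq. (10), with azimuthal wavenumber kφ = m/R, makes this expectation quantitative: at fixed height, doubling m squares the attenuation of the spiral's potential. Illustrative numbers (the choices z = 1.2Ri and R = 6.5Ri are ours, corresponding to one thick-disc scale height near the groove centre):

```python
import math

def vertical_attenuation(m, R, z):
    """e^(-|k z|) of eq. (10) for azimuthal wavenumber k = m/R."""
    return math.exp(-abs(m * z / R))

# attenuation at one thick-disc scale height near corotation of the groove
a2 = vertical_attenuation(2, 6.5, 1.2)  # m = 2 spiral
a4 = vertical_attenuation(4, 6.5, 1.2)  # m = 4 spiral, noticeably weaker
```

This is only the plane-wave estimate; the radial wavenumber of a real spiral adds to |k| and strengthens the effect.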
Since we expect migration to depend not only on the spatial scale relative to the disc thickness, but also the peak amplitude, Σa, and perhaps also the time dependence, we try to keep as many factors unchanged as possible. The mechanics of the instability seeded by a groove depends strongly on the supporting response of the surrounding disc (Sellwood & Kahn 1991), which in turn, according to local theory, depends on the vigour of the swing amplifier (Toomre 1981). Thus to generate a similar spiral disturbance with m > 2, we need to hold the key parameters X and Q at similar values. For an m-armed disturbance in a thin, single component Mestel disc with active mass fraction f, the locally-defined parameter

X = 2/(f m)   (11)

is independent of radius. We therefore scale the active mass fractions in both components as f ∝ m⁻¹; recall that we used a half-mass disc for m = 2. In order to preserve the same Q value (eq. 3), the radial velocity dispersion of the particles also has to be reduced as σR ∝ f. Simulations M3 and M4 therefore have lower surface densities and smaller velocity dispersions in order to support similarly growing spiral modes of sectoral harmonic m = 3 and 4 respectively. A further simulation, M4b, is described below. These models are all seeded with a groove of the same form (eq. 6) and parameters as that in M2.

Note that the instability typically extends between the Lindblad resonances, which move closer to corotation as m is increased. In the Mestel disc, the radial extent of the mode varies as

ROLR/RILR = (m + √2)/(m − √2),   (12)

i.e. ROLR/RILR ≈ 5.8, 2.8 & 2.1, for m = 2, 3 & 4 respectively. Since we have also decreased the in-plane random motion in proportion to the surface density decrease, the decreased size of the in-plane epicycles somewhat compensates for the smaller scale of the mode, although the ratio is not exactly preserved.
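Equations (11) and (12) can be checked directly: scaling f ∝ 1/m holds X fixed at its m = 2 value, and the resonance ratio reproduces the quoted values 5.8, 2.8, and 2.1 (helper names are ours):

```python
import math

def swing_param_X(f, m):
    """Eq. (11): X = 2/(f m) for a Mestel disc with active fraction f."""
    return 2.0 / (f * m)

def lindblad_ratio(m):
    """Eq. (12): R_OLR / R_ILR = (m + sqrt(2)) / (m - sqrt(2))."""
    return (m + math.sqrt(2.0)) / (m - math.sqrt(2.0))
```

With the half-mass disc at m = 2 (f = 0.5), keeping X fixed requires f = 1/3 at m = 3 and f = 1/4 at m = 4, as adopted in M3 and M4.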
Results for m > 2

Table 2 gives our estimates of the spiral properties in each simulation; uncertainties in the measured frequencies are typically 2% (Sellwood & Athanassoula 1986). The pattern speeds of these instabilities do not change much with m, except that we find the radius of corotation lies closer to the groove centre, L*/V0 = 6.5Ri, as expected. The table also gives the time during which the amplitude of the wave is equal to or greater than that at the post-peak trough, which again does not vary much with angular periodicity. However, both the growth rates and the peak amplitudes of the modes decrease from M2 to M3 and M4. Table 3 includes the root mean square, maximum positive, and maximum negative angular momentum changes for M3, M4, and M4b, measured in each case to the moment at which Am passes through the first minimum after the mode has saturated. Comparison with M2 reveals that increasing m causes a roughly proportionate decrease in the angular momentum changes, in part because the saturation amplitude is lower, but perhaps also because the spatial scale is reduced. Note that again some thick disc particles in both M3 and M4 have ∆Lz values almost as large as the greatest in the thin disc, as was also the case for M2. Fig. 7 shows the 5th and 95th percentile values of ∆Lz versus initial notional vertical amplitude ζ for simulations M3, M4, and M4b. Compared to the curve of M2, the ∆Lz values are smaller for greater m, i.e. the extent of radial mixing is substantially lessened. Note that this difference could have a variety of causes, such as the lower limiting amplitude of the mode, or possibly the different growth rate of the mode, and/or the different disc thickness relative to the spatial scale of the mode. This last factor is one we are able to eliminate. The thickness of the discs of simulations M2, M3, and M4 was held fixed as we increased m and reduced the surface density.
In order to eliminate a change in the ratio of disc thickness to spatial scale of the mode, we ran a further simulation, M4b, with the same in-plane parameters as in M4, but with half the disc thickness. We also halved the gravity softening length and the vertical spacing of the grid planes. This change restores the growth rate of the mode in run M4b to a value quite comparable to that in simulation M2 (Table 2). The saturation amplitude, while larger than in simulation M4, is still about half that in M2. In addition, we added a test particle population to simulation M4b that has parameters identical to those of the massive thick disc, including σR or q, except that its vertical scale height z0 is that of the thick disc of M2, M3, and M4. Again as expected, Table 3 reveals that ⟨(∆Lz)²⟩^{1/2} is substantially lower in the massless disc than in the thinner, massive disc.

Comparison with theory

In order to make sense of these results, we here compare with the theoretical picture developed by Sellwood & Binney (2002). First, we eliminate the possibility that the spiral in the simulations with higher m is "on" for too long for optimal migration. Sellwood & Binney (2002) argue that efficient mixing by the spiral requires the duration of the peak amplitude to be less than half the period of a horseshoe orbit, so that each particle experiences only a single scattering. They show that the minimum period of a horseshoe orbit varies as |Ψ0|^{−1/2}, where the potential amplitude of the spiral perturbation at corotation varies with the spiral density amplitude, Σa, and sectoral harmonic as |Ψ0| ∝ Σa/m (eq. 10). Thus the weaker density amplitude that we find with higher m implies that the minimum periods of the horseshoe orbits are greater, and the condition for efficient mixing is more strongly fulfilled for m > 2.
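These scalings can be chained together in a small consistency check. A sketch (the amplitudes are illustrative; |Ψ0| ∝ Σa/m is from eq. (10) and the horseshoe-period scaling from Sellwood & Binney 2002):

```python
def psi0(sigma_a, m):
    """Spiral potential amplitude at corotation, |Psi_0| ∝ Sigma_a / m
    (eq. 10); the proportionality constant is set to 1."""
    return sigma_a / m

def min_horseshoe_period(sigma_a, m):
    """Minimum horseshoe-orbit period, ∝ |Psi_0|^{-1/2}
    (Sellwood & Binney 2002)."""
    return psi0(sigma_a, m) ** -0.5

# If the limiting density amplitude halves from m = 2 to m = 4
# (Sigma_a ∝ 1/m), |Psi_0| drops by a factor of 4 and the minimum
# horseshoe period doubles, strengthening the single-scattering condition.
ratio = min_horseshoe_period(0.5, 4) / min_horseshoe_period(1.0, 2)
print(ratio)  # 2.0
```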
Theory also suggests that an m-dependence of the peak amplitude is unavoidable if the spiral saturates due to the onset of many horseshoe orbits, as was argued by Sellwood & Binney (2002). In their notation, orbits librate (i.e. are horseshoes) when Ep < p², where the frequency p ∝ m|Ψ0|^{1/2}. Since |Ψ0| ∝ Σa/m (eq. 10), we see that p² ∝ mΣa, suggesting that as m increases, horseshoe orbits become important at a lower peak density. Comparing M2 with M4b, we find (Table 2) the relative limiting amplitudes Σa ∝ m^{−1}, which is consistent with the idea that the spiral instability saturates when the importance of horseshoe orbits reaches very nearly the same level. The fixed thickness and softening length used in M2, M3 and M4 disproportionately weakens the potential of the spiral as m rises, and spoils this exact scaling. Note also that both ⟨(∆Lz)²⟩^{1/2} and the extreme values measured from M4b are almost exactly half those in M2, which is also consistent with the horseshoe orbit theory developed by Sellwood & Binney (2002). Since they showed that the maximum ∆Lz ∝ |Ψ0|^{1/2}, the relations in the previous paragraph require ∆Lz,max ∝ m^{−1}, as we observe. As the rms value also scales in the same way, it would seem that the entire distribution of ∆Lz scales with m in the same way. Again, the constant disc thickness prevents this prediction from working perfectly for M3 and M4. We stress that this scaling holds because we took some care to ensure the key dynamical properties of the disc were adjusted appropriately. Spirals in a disc having a different responsiveness would have both a different growth rate and probably also a different peak amplitude, and the behaviour would not have manifested a simple m-dependence. Finally, we motivate the exponential fit to the variation of ⟨(∆Lz)²⟩^{1/2} with z0, for the same m = 2 spiral disturbance. We have already shown that ⟨(∆Lz)²⟩^{1/2} ∝ |Ψ0|^{1/2}, and have argued that the spiral potential decays away from the mid-plane as e^{−|kz|} (eq.
10), with k_φ = m/R. Naïvely, we could set z = z0, k = k_φ = m/R, m = 2, and R = Rc, since angular momentum changes are centred on corotation, leading to ⟨(∆Lz)²⟩^{1/2} ∝ e^{−z0/Rc}. The fitted scale is 5.95Ri, which is somewhat smaller than Rc = 7.24Ri. The dominant cause of this discrepancy is probably that the spiral is not a plane wave, that curvature is important for m = 2, and that its wavenumber is larger than k_φ because the spiral ridges are inclined to the radial direction; we should therefore expect the potential to decay away from the mid-plane rather more rapidly, in the sense that we measure. We conclude that angular momentum changes scale with spiral amplitude in the manner predicted by Sellwood & Binney (2002) and, furthermore, that the limiting amplitude itself is determined by their theory. The variation with disc thickness is also in the sense expected from the theory, but the quantitative prediction is not exact.

UNCONSTRAINED SIMULATIONS

Having studied at length the effects of a single spiral wave, we now wish to illustrate the effects of multiple spirals. We present three simulations, UC, UCB1, and UCB2, that have no initial groove and in which the initial positions of the particles are random (i.e. not a quiet start), since we wish spirals to develop quickly from random fluctuations. We include gravitational disturbance forces from all sectoral harmonics 0 ≤ m ≤ 8, except m = 1, which we omit in order to avoid imbalanced forces from a possibly asymmetric distribution of particles in a rigid halo with a fixed centre. All the many simulations of this type that we have run have ultimately developed a strong bar at the centre. We here compare radial migration in three separate cases: simulation UC avoids a bar for a long period and supports many transient spirals, while simulations UCB1 and UCB2 have identical numerical parameters but form a bar quite early.
In all three cases, the combined thin and thick discs have a smaller active mass fraction, f = 0.44, compared with f = 0.55 for simulation M2. A smaller active mass and a larger gravitational softening length help to delay bar formation, but also weaken the m = 2 spiral amplitudes somewhat. The numerical parameters of these simulations are also listed in Table 1. Since bar formation in these models is stochastic (e.g. Sellwood & Debattista 2009), the different bar-formation times simply arise from different initial random seeds.

Multiple spirals only

The black curve in Fig. 11 shows the evolution of A2/A0 in simulation UC. The spiral amplitudes rise to the point at which significant particle scattering begins by t ∼ 1 500Ri/V0, and we present the behaviour up to time 3 500Ri/V0, shortly before a bar begins to form. The period 1 500Ri/V0 < t < 3 500Ri/V0 during which particles are scattered by spiral activity corresponds to ∼ 6.0 Gyr with our adopted scaling. Fig. 12 illustrates the m = 2 power spectrum of disturbances. Each horizontally extended peak indicates a spiral of a particular pattern speed. Some 20 transient spirals of significant amplitude occur during this period, spanning a wide range of angular frequencies and corotation radii. Since there are numerous disturbances with well-scattered Lindblad resonances, random motion rises generally over the disc, as reported in Fig. 13, in contrast to Fig. 6, which shows the localized heating at the ILR of the single spiral case. The larger changes in Lz than those for the single spiral case are evident from the dot-dashed black curves of Fig. 14, which show the 5th and 95th percentiles of ∆Lz versus initial notional vertical amplitude ζ. Although the shapes of these curves are similar to those in Fig. 7, they are asymmetric about zero, indicating that gains are larger than losses. The angular momentum changes illustrated in Fig.
15 reflect mostly the effects of the latest spirals that developed in the simulation in each Lz(t = 0) region. Contours of changes to the distribution of home radii for the particles (dotted black and solid green in Fig. 16) reveal that particles from all initial radii can move to new home radii, but that the changes are greatest around the mid range of initial radii where corotation resonances are more likely. Some 0.003% of the particles in the thin disc and 0.03% in the thick end up on retrograde orbits. Taken together with the results from the single spiral simulations described above, we again find that the changes in angular momenta and home radii are almost as great in the thick disc as in the thin. That is, radial migration is weakened only slightly by disc thickness. The top row of Fig. 17 shows changes in the UC particles' maximum vertical excursions versus changes in their angular momenta. We find that, on average, zmax increases except for the greatest losses in angular momentum. This is different from the single spiral case, in which the contours and the mean and median curves are centred on the origin (Fig. 9). This overall extra increase in vertical amplitude probably comes from the net heating by vertical resonances that the multiple transient spirals induce by the end of the simulation. Nevertheless, the trend of the mean and median with ∆Lz remains as in M2. An exception is again the group of points along the slope of −1 in the second quadrant. It is much more pronounced for the thin disc than in M2. The particles contributing to this feature are still initially from the inner region of the disc, but this region is more extended in UC since the inner vertical resonances occur at various radii for the numerous transient spirals. As in M2, changes in zmax remain about twice as great for the thick disc particles as for those of the thin for the same ∆Lz. 
Although the scale height of outward-migrating particles does increase somewhat, as expected, the changes are not substantial enough to cause the thickness of outward-migrating thin-disc particles to approach the scale height of the thick disc. The changes in the vertical motion of inward-migrating particles are smaller than those for outward migrators. Thus we do not observe much of a tendency in our models for evolution to cause a significant degree of blurring between the separate populations.

Multiple spirals with a bar

We here study radial migration in two simulations, UCB1 and UCB2, that formed bars at an early stage of their evolution, and compare them with simulation UC, which did not form a bar for a long period. As noted in the introduction, it has long been known that bar formation causes some of the largest changes to the distribution of angular momentum within a disc. Here our focus is on the consequences of continued transient spiral behaviour in the outer disc long after the bar formed, which has not previously received much attention, as far as we are aware.

Figure 16. The distributions of final home radii versus initial home radii for all the particles in simulation UC. The particle density in this plane is estimated using an adaptive kernel, the dotted black and solid green contours represent the thin and thick discs respectively, and the red line shows zero change in R_h. Contour levels are chosen every 10% of the thick disc's maximum value from 5% to 95% for both thin and thick discs. The light blue (marked with arrow heads) and dashed orange contours represent the same for the thin and thick discs of UCB1 respectively. The region inside the bar (R_h(t = 3500) < 10Ri) is omitted.

The red curve of Fig. 11 shows the evolution of A2/A0 in UCB1. The spirals become significant around the time ∼ 1 000Ri/V0, the bar forms around ∼ 2 000Ri/V0, and we stop the simulation at the same final time, t = 3 500Ri/V0, as UC.
Thus, the time interval of significant scattering is roughly 2 500Ri/V0 in length, with a bar being present for the last ∼ 1 500Ri/V0; these intervals correspond to 7.5 Gyr and 4.5 Gyr respectively. With our suggested scaling, the bar has a pattern speed of 31.8 km s^{−1} kpc^{−1} and corotation Rc ∼ 7.5 kpc. (This scaling makes the bar substantially larger than that in the Milky Way.) The long-dashed red curves in Fig. 14 show that the bar enhances the changes in Lz somewhat over those that arise due to spirals alone. Note that we measure the instantaneous value of Lz of each particle, and it should be borne in mind that it changes continuously in the strongly non-axisymmetric potential of this model, especially so for particles in or near the bar. For this reason, we cannot extend the light blue (marked with arrow heads) and dashed orange contours in Fig. 16 to small home radii at the later time in the strongly non-axisymmetric potential of the bar. However, a clear asymmetry can be seen; the distribution is biased above the red line of zero change, indicating a systematic outward migration in the outer disc. A similar asymmetry can be seen without a bar in UC (dotted black and solid green contours), but the formation of the bar makes it more pronounced. Since total angular momentum is conserved in these simulations, there is a corresponding inward migration in the inner regions (see Fig. 5). Aside from the bar region, where systematic non-circular streaming biases the rms radial velocities, Fig. 18 shows that the bar does not appear to cause much extra heating over that in the unbarred case. Fig. 19 again illustrates that changes in Lz in the outer disc are more substantial in this barred model than in the unbarred case (Fig. 15), but the larger spread in ∆Lz in the inner disc is partly an artifact of using the instantaneous values of Lz at later times.
Because we use the instantaneous Lz in the barred potential in this figure, a particle in the region where ∆Lz < −Lz need not necessarily have been changed to a fully retrograde orbit. The asymmetries in Figs. 16 & 19 about the red lines of zero change are not caused by bar formation alone, as the distributions (not shown) are approximately symmetric immediately after this event. Rather, we find they appear to be caused by multiple migrations of the same particles by separate events. Our second barred simulation, UCB2, again differs from UCB1 and UC only by the initial random seed. It formed a stronger bar, but a little later than in UCB1, as shown by the green curve of Fig. 11. Significant scattering occurs for the last ∼ 1 800Ri/V0 time units (5.4 Gyr), of which the bar is present for the last ∼ 1 100Ri/V0 (3.3 Gyr). The pattern speed (33.6 km s^{−1} kpc^{−1}) and corotation radius (∼ 7.3 kpc) for our adopted scaling are similar to the values in UCB1. We find that the extent of radial migration (bottom two rows of Table 3) is further increased by the stronger bar, but not by much. Variants of Figs. 19 & 16 (not shown) are qualitatively similar, with slightly larger changes, but the outer disc is again dominated by scattering due to the latest few spirals. In both these models, therefore, we see that the formation of a bar does indeed increase the net angular momentum changes. However, the overall behaviour is similar to that of scattering by transient spirals without a bar. This is different from the effect found by Brunetti et al. (2011), in which essentially all the angular momentum changes occurred during bar formation. The lower two rows of Fig. 17 show ∆zmax versus ∆Lz for UCB1 and UCB2 respectively. The presence of a bar yields a much denser and more extended feature in the second quadrant, which is also visible for the thick disc. For UCB2, its contribution is so great that the mean ∆zmax keeps increasing with greater angular momentum loss.
This extra apparent vertical heating is probably caused by the buckling of the bar. Unlike for M2, UC, and UCB1, we find that for positive ∆Lz in UCB2, the mean and median curves keep rising for larger ∆Lz.

A CONSERVED QUANTITY?

Radial migration results from angular momentum changes near corotation, which we have shown to be somewhat weakened by increased vertical motion. Schönrich & Binney (2009a), in their model of radial mixing in thin and thick discs, assumed that vertical and radial motions are decoupled and that vertical energy is conserved as stars migrate radially. We here try to identify a conserved quantity that can be used to predict vertical motion when particles suffer large changes in Lz, by comparing various measures of vertical amplitude at the initial and final times in simulations with a single spiral. Note that the "initial" value of each quantity we discuss in this section is measured at t = 64Ri/V0, which avoids possible effects of the settling of the model from its mild initial imbalance. Also, all quantities are computed in an azimuthally averaged potential, to eliminate variations with spiral phase, which remain significant at the final time.

Figure 20 (caption fragment): …, and z-height (bottom right). The particle, which has a home radius of 6.5Ri, a total energy of −2.0V0², a midplane eccentricity of 0.15, and a vertical excursion of 2.1Ri, is integrated in the frozen initial axisymmetric potential of simulation M2.

We focus on particles whose initial home radii lie in an annulus of width 6.0Ri centred at corotation of the spiral in M2, and measure quantities for only one particle per quiet-start ring, meaning one in every twelve. This results in ∼ 40 000 particles from the thin disc and ∼ 73 000 from the thick.
Although the entire thick disc is represented by just 50% more particles than is the thin, the effects of the inner and outer tapers, together with the groove in the thin disc, cause the number of particles in the range 4.0Ri ≤ R_h(t = 64) ≤ 10.0Ri to be some 82% larger for the thick disc than for the thin. While we endeavour to measure each quantity in this section for the same set of particles from both simulations M2 and T, we have been able to estimate some quantities for only a subset of these particles, as noted below.

Vertical energies

We have tried a number of different ways to estimate the energy of vertical motion. A simple definition might be

E_{z,R} = ½ v_z² + Φ(R, z) − Φ(R, 0),   (13)

with R being the instantaneous radius of the particle. However, Fig. 20 shows that E_{z,R} defined this way varies by some 30% as a particle oscillates both in radius and vertically in a static axisymmetric potential. The orbit of this particle is multiply periodic and clearly respects three integrals (as we will confirm below), in common with many orbits in axisymmetric potentials (BT08, section 3.2). Although such orbits can be described by action-angle variables, which imply three decoupled oscillations, the energy of vertical motion is clearly not decoupled from that of the radial part of the motion. In particular, the bottom-left panel shows that E_{z,R} varies systematically with radius, which reflects in part the weakening of the vertical restoring force with the outwardly declining surface density of the disc. This behaviour considerably complicates our attempts to compare the vertical energies at two different times in the same simulation. The epicycle approximation (BT08, p. 164) holds for stars whose orbits depart only slightly from circular motion in the mid-plane.
In this approximation, when a star pursues a near-circular orbit near the mid-plane of an axisymmetric potential, the vertical and radial parts of the motion are separate, decoupled oscillations, and the vertical energy Ez,epi (= Z, eq. 8) is constant. However, the epicycle approximation is a poor description of the motion of most particles, for which the radial and vertical oscillations are neither harmonic nor are the energies of the two oscillations decoupled. Table 4 gives the rms values of [Y(t_final) − Y(t_initial)]/Y(t_initial) for particles in both the thin and the thick discs of simulations M2 and T. Here Y represents one of several possible vertical integrals. The first rows give the fractional changes in the epicyclic approximation, Y = Ez,epi, which are substantial. Furthermore, they are almost as large in simulation T, which was constrained to remain axisymmetric, as those in M2, in which substantial radial migration occurred, suggesting that the changes are mostly caused by the inadequacy of the approximation. The second rows in Table 4 show the rms fractional changes in the instantaneous values of E_{z,R}. While these values are slightly smaller than for the epicyclic estimate, they are again large for the thin disc and still greater for the thick. This is hardly surprising, as the particles in the simulation have random orbit phases at the two measured times. Particles on eccentric orbits generally spend more time at radii R > R_h than inside this radius, which biases the instantaneous measure to a lower value, as shown in the third panel of Fig. 20. To eliminate this bias, and to reduce the random variations, we evaluate E_{z,R_h} from eq. (13) at the home radius of each particle, which requires us to integrate the motion of each particle in the frozen potential of the appropriate time until the particle reaches its home radius.
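The variability of the instantaneous E_{z,R} of eq. (13) along a regular orbit can be reproduced in a toy model. The sketch below integrates a meridional orbit with a leapfrog scheme in a flattened logarithmic potential (an illustrative assumption, not the simulations' disc potential; all parameter values are likewise illustrative):

```python
import math

V0, q, Lz = 1.0, 0.9, 6.5   # illustrative: flat rotation speed, flattening, L_z

def phi(R, z):
    """Toy flattened logarithmic potential."""
    return 0.5 * V0**2 * math.log(R**2 + (z / q)**2)

def accel(R, z):
    """Meridional-plane acceleration, including the centrifugal term."""
    s = R**2 + (z / q)**2
    return -V0**2 * R / s + Lz**2 / R**3, -V0**2 * z / (q**2 * s)

def e_z(R, z, vz):
    """Instantaneous vertical energy estimate E_{z,R} (eq. 13)."""
    return 0.5 * vz**2 + phi(R, z) - phi(R, 0.0)

# Leapfrog (kick-drift-kick) integration of an eccentric, inclined orbit.
R, z, vR, vz, dt = 5.0, 0.5, 0.3, 0.2, 0.001
aR, az = accel(R, z)
samples = []
for _ in range(200000):
    vR += 0.5 * dt * aR
    vz += 0.5 * dt * az
    R += dt * vR
    z += dt * vz
    aR, az = accel(R, z)
    vR += 0.5 * dt * aR
    vz += 0.5 * dt * az
    samples.append(e_z(R, z, vz))

spread = (max(samples) - min(samples)) / (sum(samples) / len(samples))
print(round(spread, 2))   # tens of percent: clearly not a conserved quantity
```

The systematic part of the variation tracks the orbit's radial excursion: the vertical restoring force weakens outwards, so E_{z,R} is smaller near apocentre, mirroring the behaviour described for the bottom-left panel of Fig. 20.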
A tiny fraction, ∼ 500 of the ∼ 113 000 sample particles, never cross R = R_h at the initial time, and ∼ 200 more at the final time; these particles rise to large heights above the mid-plane and have meridional orbits resembling that shown in the top right of Fig. A2, but appear to be confined to R > R_h. The fact that this is possible seems consistent with the description of vertical oscillations developed by Schönrich & Binney (2012). The third rows of Table 4 give the fractional rms changes in E_{z,R_h} for all the remaining particles; the changes in simulation M2 are a little smaller than those of the instantaneous values, and significantly so in simulation T.

Table 4 (caption): The biweight-estimated standard deviation of the fractional change in the variable Y listed in the third column, for both the thin and thick discs in simulations M2 and T. We give two values for the standard deviation in some rows: the value in the fourth column is measured from all the particles; that in the fifth column is from only the 48% (in M2) of particles for which ∆Jz is calculable.

Since E_{z,R} is multiply periodic (Fig. 20), we have also estimated an orbit-averaged value, ⟨E_{z,R}⟩, by integrating the motion for many periods until the time average changed by < 0.1% when the integration was extended for an additional radial period. The fourth rows of Table 4 give the rms changes in ⟨E_{z,R}⟩, which are still considerable. Note that we did not compute this time-consuming estimate for all the particles, but only for the subset used for the other values in the fifth column of this table. The choice of this subset is described below. Generally, we find that none of these estimates of the vertical energy is even approximately conserved, except for the orbit-averaged energy when the simulation is constrained to remain axisymmetric (simulation T).
In this case, the potential at the two times differs slightly due to radial variations in the mass distribution; values of this orbit-averaged quantity in an unchanged potential would be independent of the moment at which the integration begins. In all cases, changes are larger in simulation M2, where significant radial migration occurs. Fig. 21 illustrates that the changes in the estimated vertical energy correlate with ∆Lz. The excesses of particles in the second and fourth quadrants indicate that changes have a tendency to be negative for outwards-migrating particles and positive for inwards-migrating particles. The significance of this trend is discussed below.

Figure 21 (caption): Changes in Ez,epi (first row), E_{z,R} (second row), E_{z,R_h} (third row), and ⟨E_{z,R}⟩ (fourth row) versus ∆Lz for the thin (left column) and thick (right column) discs of simulation M2. Just as in Fig. 9, the horizontal and vertical red lines show zero changes, the linearly spaced magenta contours show number density, and the bold solid green and dashed blue curves show the mean and median changes in each ordinate respectively. Note that the vertical scales differ in each plot.

Vertical actions

The various actions of a regular orbit in a steady potential are defined to be (2π)^{−1} times the appropriate cross-sectional area of the orbit torus (BT08, pp. 211-215). One advantage of actions is that they are the conserved quantities of an orbit when the potential changes slowly, i.e. they are adiabatic invariants under conditions that are defined more carefully in BT08 (pp. 237-238). The vertical action is defined as

J_z ≡ (1/2π) ∮ ż dz,   (14)

where z and ż are measured in some suitable plane that intersects the orbit torus. Since Lz (≡ J_φ) is conserved in a steady axisymmetric potential, the orbit can be followed in the meridional plane (BT08, p. 159), in which it oscillates both radially and vertically, as illustrated in the first panel of Fig. 20.
To estimate J_z, we need to integrate the orbit of a particle in a simulation from its current position in the frozen, azimuthally averaged potential at that instant, and construct the (z, ż) surface of section (SoS) as the particle crosses R_h with Ṙ > 0, say. The integral in eq. (14) is the area enclosed by an invariant curve in this plane. Before embarking on this elaborate procedure, we consider two possible approximations. When the epicycle approximation holds, the vertical action is J_z,epi = E_z,epi/ν (BT08, p. 232). The fifth rows of Table 4 for each disc show that changes in this quantity are large, and again they are similar to those in simulation T, confirming once again that the epicycle approximation is inadequate. Since most orbits reach beyond the harmonic region (|z| ≲ 0.4Ri) of the vertical potential, an improved local approximation is to calculate

J_z|_{R,φ} = (2π)^{−1} ∮ ż dz |_{R,φ}   (15)

at the particle's fixed position in the disc. This local estimate still ignores the particle's radial motion, but gives a useful estimate for the average vertical action that is also used in analytic disc modeling (Binney 2010). The function ż(z) at this fixed point is simply determined by the vertical variation of Φ(R, φ, z), and the area is easily found. We evaluate J_z|_{R,φ} at the particle's instantaneous position at the initial and final times using the corresponding azimuthally averaged frozen potential. The rms variation of ∆J_z|_{R,φ} is given in the sixth rows of Table 4 for each disc. We find that ∆J_z|_{R,φ} in the thin discs of both M2 and T is small, suggesting that this estimate of vertical action is more nearly conserved. However, the changes for the thick discs are still large, and we conclude that this local estimate is still too approximate. We therefore turn to an exact evaluation of eq. (14) using the procedure described in the Appendix.
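In the harmonic region of the vertical potential the local estimate of eq. (15) can be checked directly, since there J_z = E_z/ν exactly. A sketch (the E_z and ν values are illustrative assumptions) that evaluates the phase-space loop integral by midpoint quadrature:

```python
import math

def jz_local_harmonic(Ez, nu, n=20000):
    """Local vertical action J_z = (1/2 pi) ∮ zdot dz (eq. 15) for a
    harmonic vertical potential Phi_z = (1/2) nu^2 z^2, evaluated by
    midpoint quadrature along the upper branch of the libration loop."""
    zmax = math.sqrt(2.0 * Ez) / nu
    h = 2.0 * zmax / n
    upper_branch = 0.0
    for i in range(n):
        z = -zmax + (i + 0.5) * h
        upper_branch += math.sqrt(max(0.0, 2.0 * Ez - (nu * z) ** 2)) * h
    # The full loop integral is twice the upper branch.
    return 2.0 * upper_branch / (2.0 * math.pi)

Ez, nu = 0.03, 0.25
print(jz_local_harmonic(Ez, nu), Ez / nu)   # both ≈ 0.12
```

For an anharmonic vertical potential the same quadrature applies with ż(z) = √(2[E_z − Φ_z(z)]), consistent with the statement in the text that ż(z) is determined by the vertical variation of Φ(R, φ, z).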
Unfortunately, we can evaluate ∆Jz only if the consequents in the SoS form a simple invariant curve that allows Jz to be estimated at both times. We find that only 48% of the ∼ 113 000 particles have closed, concave invariant curves at the initial time, and only 73% of those retain these properties at the final time. Although the thick disc contains more particles than the thin, we are able to calculate ∆Jz for a smaller fraction: we succeed with ∼ 26 000 in the thin disc but only ∼ 13 000 in the thick. Column five of Table 4 gives the rms changes of all estimates of vertical energy and action for only these particles. While the fractional rms values of the different energy estimates remain large, they are generally smaller than those for all the particles given in the fourth column, though only slightly so for Jz|_{R,φ}. Thus the orbits for which ∆Jz can be computed are not a random subset, but are biased towards those for which energy changes are smaller on average. The seventh (final) rows of Table 4 for each disc list the rms values of the fractional change ∆Jz. They are just a few percent for particles in simulation T, in which the disc is constrained to remain axisymmetric, and are smaller than for any other tabulated quantity in the discs perturbed by a spiral. The small scatter about the red line of unit slope in Fig. 23 indicates that differences between the final and initial values of Jz in both discs of M2 are indeed small. Fig. 22 shows that changes in all three estimators of the vertical action are uncorrelated with ∆Lz, and that the mean and median changes are close to zero. Comparison with the systematic changes in Fig. 21 suggests that Jz is, in fact, conserved. To see this, note that changes in Lz shift the home radius of each particle to a region where the average vertical restoring force differs; the amplitudes of the vertical oscillations increase due to the weaker average restoring force when ∆Lz > 0, and conversely decrease when ∆Lz < 0.
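In the harmonic limit this argument can be made explicit: E_z = J_z ν, so a particle that migrates outwards at fixed J_z must lose vertical energy as ν weakens. A sketch with illustrative numbers (not simulation values):

```python
# Harmonic limit: E_z = J_z * nu.  If J_z is conserved while the home
# radius grows, E_z must fall in step with the weakening vertical
# frequency nu.  All numbers below are illustrative only.
Jz = 0.01
nu_inner, nu_outer = 0.30, 0.20   # nu declines outwards with surface density
Ez_inner = Jz * nu_inner          # vertical energy before outward migration
Ez_outer = Jz * nu_outer          # vertical energy after outward migration
print(Ez_inner, Ez_outer)         # E_z drops when the particle moves outwards
```

This is why vertical energy and vertical action cannot both be conserved during migration, and why a non-conserved estimator should show exactly the systematic trend with ∆Lz seen in Fig. 21.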
Thus, if an estimator of vertical motion is not conserved, we should expect a systematic variation with ∆Lz, as we observe for all the energy estimators in Fig. 21. The fact that there is no systematic variation of the median or mean change in vertical action suggests that it is conserved. If Jz is a conserved quantity, it may seem puzzling that the rms changes in our measurements are not smaller. Both Jz,epi and Jz|_{R,φ} are approximate, which could be responsible for apparent substantial changes, but changes in the "exact" estimate of Jz remain significant, even for simulation T. It is possible that our estimate of ∆Jz is inaccurate. For example, we must eliminate non-axisymmetric structure from the potential to compute Jz, introducing small errors when the potential is mildly non-axisymmetric. Larger differences could arise, however, if the invariant curve changes its character, especially since entering or leaving a trapped area in phase space is not an adiabatic change. It is possible that an orbit that appears to be unaffected by resonant islands in phase space at the initial and final times could have experienced trapping about resonant islands for some of the intermediate evolution, allowing Jz to change even when changes to the potential are indeed slow. We have verified that Jz is conserved to a part in 10⁴ in a further simulation with a frozen, axisymmetric potential. Note that this is not a totally trivial test, since we use the N-body integrator and the grid-determined accelerations to advance the motion for ∼ 350 dynamical times before making our second estimate. Therefore the non-zero changes in simulation T are indeed caused by changes to the potential. Even though the potential variations are small, substantial action changes indicate that changes to the orbit were non-adiabatic, which can happen for the reason given in the previous paragraph.
Our "initial" Jz values are estimated at t = 64 Ri/V0, when we believe the model has relaxed from the initial set-up; we therefore suspect that the small potential variations are more probably driven by particle noise, which can be enhanced by collective modes (Weinberg 1998). This hypothesis is supported by our finding variations in Jz that were about ten times greater than in simulation T when we employed 100 times fewer particles.

SUMMARY

We have presented a quantitative study of the extent of radial migration in both thin and thick discs in response to a single spiral wave. We find angular momentum changes in the thick disc are generally smaller than those in the thin, although the tail to the largest changes in each population is almost equally extensive. We have introduced populations of test particles into a number of our simulations in order to determine how changes in Lz vary with disc thickness and with radial velocity dispersion when subject to the same spiral wave, finding an exponential decrease in ⟨(∆Lz)²⟩^1/2, as shown in Fig. 10. We find that spirals of smaller spatial scale cause smaller changes. When we were careful to change all the properties of the model by the appropriate factors, we were able to account for smaller changes to ⟨(∆Lz)²⟩^1/2 as being due to a combination of the weakened spiral amplitude near corotation and the change in the value of m. Furthermore, we found evidence that the saturation amplitude scales inversely with m, in line with the theory developed by Sellwood & Binney (2002). Note that the simple scaling holds true only when the principal dynamical properties, such as Q, X, thickness, and gravity softening, are held fixed relative to the scale of the mode. Nevertheless, it seems reasonable to expect smaller changes to ⟨(∆Lz)²⟩^1/2 in general for spirals of higher m. The exponential decrease of ⟨(∆Lz)²⟩^1/2 with increasing disc thickness applies only for different populations subject to the same spiral perturbation.
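The exponential decrease of ⟨(∆Lz)²⟩^1/2 with thickness quoted above can be characterized by a log-linear least-squares fit. The snippet below is a hypothetical illustration on synthetic numbers; the amplitude 0.8 and scale length 0.25 are made-up stand-ins, not values measured from the simulations:

```python
import numpy as np

# synthetic stand-in data: rms angular-momentum change per test-particle
# population versus its characteristic thickness z0 (arbitrary units)
z0 = np.array([0.05, 0.1, 0.2, 0.3, 0.4])
noise = 1 + 0.02 * np.random.default_rng(1).normal(size=z0.size)
rms_dLz = 0.8 * np.exp(-z0 / 0.25) * noise

# log-linear least squares: ln(rms) = ln(A) - z0 / h
slope, intercept = np.polyfit(z0, np.log(rms_dLz), 1)
A, h = np.exp(intercept), -1.0 / slope
print(A, h)  # recovers A ~ 0.8 and scale h ~ 0.25
```

Fitting in log space weights the (multiplicative) scatter evenly across populations of very different rms values.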
We have also run slightly more realistic simulations to follow the extent of churning in both thick and thin discs that are subject to large numbers of transient spiral waves having a variety of rotational symmetries. Fig. 16 shows that changes in the home radii of thick disc particles are smaller on average than those of the thin disc, but again the tails of the distributions in both populations are almost coextensive. As found in previous work (Friedli, Benz & Kennicutt 1994; Raboud et al. 1998; Grenon 1999; Debattista et al. 2006; Minchev et al. 2011; Bird, Kazantzidis & Weinberg 2011), the formation of a bar also causes substantial angular momentum changes within a disc, but we find that the churning effect from multiple spiral patterns still dominates changes in the outer disc after the bar has formed.

We find that vertical action is conserved during radial migration, despite the fact that relative changes in our estimated Jz, which can be measured for only about half the particles, are as large as ∼15%. Because the vertical restoring force to the mid-plane decreases outwards, an increased (decreased) home radius causes the particle to experience a systematically weaker (stronger) vertical restoring force, making it impossible for both vertical energy and action to be conserved. We find a clear systematic variation of the vertical energy with ∆Lz, but no such variation of ∆Jz, leading us to conclude that vertical action is the conserved quantity. The residual scatter in our measured values of ∆Jz could be caused by trapping and escape from multiply-periodic resonant islands in phase space, as well as numerical jitter in the N-body potential, as we discuss at the end of §6. In the absence of these complications, we believe that the vertical action is conserved. Thus conservation of vertical action, and not of vertical energy, should be used to prescribe the changes to vertical motion in models of chemo-dynamic evolution.
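The dispersions quoted in Table 4 use the outlier-resistant biweight estimator of the standard deviation (Beers et al. 1990) rather than a plain rms. A minimal illustrative implementation of the standard S_BI form, with the conventional tuning constant c = 9 (a generic sketch, not the authors' code):

```python
import numpy as np

def biweight_scale(x, c=9.0):
    """Tukey biweight estimator of the standard deviation.

    Follows the S_BI form popularized by Beers, Flynn & Gebhardt (1990):
    distances are measured from the median in units of c * MAD, and
    points with |u| >= 1 get zero weight, which suppresses heavy tails.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    u = (x - med) / (c * mad)
    keep = u**2 < 1.0                     # down-weight and clip the tails
    num = np.sum(((x - med)**2 * (1 - u**2)**4)[keep])
    den = np.sum(((1 - u**2) * (1 - 5 * u**2))[keep])
    return np.sqrt(x.size * num) / abs(den)

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 10_000)
spiked = np.concatenate([clean, rng.normal(0.0, 30.0, 500)])  # 5% outliers
# the plain standard deviation is inflated by the outliers,
# while the biweight scale stays near the core width of 1
print(np.std(spiked), biweight_scale(spiked))
```

For a clean Gaussian sample the estimator is close to the ordinary standard deviation; with a heavy-tailed contaminant it tracks the core of the distribution instead.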
It would be especially useful to obtain diffusion coefficients that include the effects of radial migration as functions of radius and height due to a transient spiral or a combination of consecutive spirals with various corotation radii. We leave this work to a future paper.

Figure A1. Surfaces of section in the (z, ż) plane, when R = R_h and Ṙ > 0, for test particles in the initial potential of simulation M2. All particles in each panel have the same total energy and corresponding z_max, which are noted in the top right corner. The outermost green curves mark the zero-velocity curves, at which Ṙ = 0 for the given z, Lz and E. Finally, the bold red curves are from the three orbits shown in Fig. A2.

Figure 2. The circular velocity in the disc midplane (top left) and vertical velocity dispersion of the thick disc particles (right) at five equally spaced times during simulation M2. The times, which include the initial and final moments of the simulation, are colour coded in temporal order by solid black, red, green, blue, and dotted cyan respectively. The dashed horizontal line in the top left panel shows the theoretical circular velocity V_0 = 1.

Figure 3. The time evolution of A_2/A_0 in simulations M2 (top) and T (bottom). Note the difference in the vertical scales. The numbered arrows in the top plot mark the six times of M2 at which the snapshots of the discs are shown in Fig. 4. The top axis shows time scaled to physical units using the adopted scaling at the end of section 2.1.

Figure 5. Angular momentum changes of particles in M2 as a function of their initial angular momenta. The top panel is for thin disc particles, the middle for all thick disc particles, and the bottom for thick disc particles having notional vertical amplitude ζ > 1.2 R_i. The horizontal red line denotes zero change. The vertical lines mark the Lindblad resonances (dashed green) and corotation (solid blue). The solid blue line with a slope of −2 illustrates the locus of particles whose changes would be symmetric about corotation. The dashed red line of slope −1 shows the ∆Lz = −Lz(t = 0) locus below which particles end up on retrograde orbits.

Figure 6. The radial variations of σ_R in the thin (bottom curves) and thick (top curves) discs of M2 at the same five different times as used in the top left and right panels of Fig. 2, with the same line colour and style coding. The horizontal dot-dashed curves show the theoretical initial dispersions from the untapered discs, while the vertical lines mark the principal resonances of the spiral, colour and style coded as in Fig.

Figure 7. The 5th and 95th percentiles of the angular momentum changes in the thick discs of simulations M2 (dot-dashed black), M3 (short-dashed red), M4 (solid green), the massive thick disc of M4b (long-dashed blue), and the massless thick disc of M4b (dotted orange) as a function of notional vertical amplitude ζ. Although we have smoothed the curves, the rising noise with increasing ζ is caused by the decrease in the number of particles per bin. The curves stop when the number of particles per bin drops below twenty.

Figure 8. Angular momentum changes of thick disc particles between the final time and t

Figure 9. Changes in maximum vertical excursions of particles in the thin (top) and thick (bottom) discs of M2 versus changes in their angular momenta. Contours are linearly spaced in number density and we plot only those points that lie outside the lowest contour. The mean (solid green) and median (dashed blue) show systematic variations, as expected. Note that the vertical scales differ in the two plots.

Figure 11. The time evolution of A_2/A_0 in simulations UC (black), UCB1 (red), and UCB2 (green). The top axis shows time scaled to physical units using the adopted scaling.

Figure 12. The power spectrum of m = 2 density variations in simulation UC. The solid curve indicates the radius of corotation for the given frequency, while the dashed curves show the radii of the Lindblad resonances.

Figure 13. Radial variations of σ_R in simulation UC for the thin (solid) and thick (dashed) discs. The curves are drawn at equal time intervals and σ_R rises monotonically in the inner and outer discs.

Figure 15. Same as Fig. 5 for simulation UC.

© 2011 RAS, MNRAS 000, 1-22

Figure 17. Same as Fig. 9 for simulations UC (top row), UCB1 (middle row), and UCB2 (bottom row). The left column shows ∆z_max as a function of ∆Lz for the thin discs and the right for the thick.

Figure 18. Same as Fig. 13 for UCB1.

Figure 19. Same as Fig. 15 for simulation UCB1. The vertical blue line marks the approximate corotation resonance of the bar.

Figure 20. The top left panel shows a typical thick disc orbit in the meridional plane, and the other three panels show how E_z,R varies with time (top right), radius (bottom left)

Figure 21. Changes in E_z,epi (first row), E_z,R (second row), E_z,Rh (third row), and ⟨E_z,R⟩ (fourth row) versus ∆Lz for the thin (left column) and thick (right column) discs of simulation M2. Just as in Fig. 9, the horizontal and vertical red lines show zero changes, the linearly spaced magenta contours show number density, and the bold solid green and dashed blue curves show the mean and median changes in each ordinate respectively. Note that the vertical scales differ in each plot.

Figure 22. Same as Fig. 21 for changes in J_z,epi (first row), Jz|R,φ (second row), and Jz (third row) of the particles in M2.

Figure 23. Comparison of Jz for ∼40 000 particles from simulation M2 at the initial and final times. Values are calculated from eq. (14). The top panel is on a linear scale, the bottom on a log scale to reveal the behaviour for very small actions. The red line has unit slope. Thin disc particles are marked in black, thick disc in green.

Table 2. Measured values from fits to the spiral modes.

Table 4. Biweight standard deviation of fractional changes in various estimates of vertical energy and action. The first value column is for all particles; the second is for the subset of particles whose Jz could be measured (see text).

Simulation  Disc   Variable Y   σ(∆Y/Y_initial)
                                all        Jz subset
M2          Thin   E_z,epi      30.7%      28.9%
                   E_z,R        28.3%      27.2%
                   E_z,Rh       25.0%      26.1%
                   ⟨E_z,R⟩      -          23.6%
                   J_z,epi      22.7%      20.7%
                   Jz|R,φ       16.5%      16.0%
                   Jz           -          15.6%
            Thick  E_z,epi      59.4%      39.4%
                   E_z,R        47.6%      31.5%
                   E_z,Rh       34.9%      26.0%
                   ⟨E_z,R⟩      -          22.3%
                   J_z,epi      55.7%      34.9%
                   Jz|R,φ       35.0%      19.7%
                   Jz           -          15.4%
T           Thin   E_z,epi      22.5%      19.5%
                   E_z,R        17.9%      15.4%
                   E_z,Rh       10.0%       8.8%
                   ⟨E_z,R⟩      -           5.8%
                   J_z,epi      16.5%      13.8%
                   Jz|R,φ        9.4%       8.2%
                   Jz           -           6.1%
            Thick  E_z,epi      53.6%      34.3%
                   E_z,R        40.8%      22.8%
                   E_z,Rh       23.3%      11.4%
                   ⟨E_z,R⟩      -           3.0%
                   J_z,epi      47.5%      27.8%
                   Jz|R,φ       24.8%      11.0%
                   Jz           -           3.4%

Footnotes:
1. Sellwood (2012) finds that particle realizations of this disc are not completely stable, but on a long time scale when N is large.
2. The energy cut-off we apply (see after eq. 4) eliminates the most eccentric orbits from this plot.
3. Vertical resonances occur when m(Ω − Ω_p) = ±ν.
4. For all quantities in this table, we use the biweight estimator of the standard deviation (Beers et al. 1990), which ignores heavy tails. We also discard a few particles with initial values < 10⁻⁴ in order to avoid excessive amplification of errors caused by small denominators.
5. A regular orbit trapped about a resonant island has a different set of actions that are not of interest here.

ACKNOWLEDGMENT

We thank the referee, Rok Roskar, for a thorough and helpful report. This work was supported in part by NSF grant AST-1108977 to JAS.

Figure A2. Orbits in the meridional plane of the three test particles drawn in bold red in the second, fourth and fifth panels of Fig. A1. Note that we show only a small fraction of the full orbit used to produce the invariant curves in the (z, ż) plane. The vertical red lines mark R_h at which the SoS is constructed.
The initial vertical position and radial and vertical velocities of each of these three particles are noted.

APPENDIX A: NUMERICAL ESTIMATE OF VERTICAL ACTION

Fig. A1 illustrates the SoS for five different energies, all for Lz = 7. Each invariant curve is generated by a test particle in the initial potential of M2. All the particles in one panel have the same energy and angular momentum (noted in the top right corner), but differ in the extent of their vertical motion. The zero-velocity curve, where a particle's radial velocity must be zero for the Lz and E values adopted, is the outermost green curve. We see that most of phase space is regular, but not all consequents mark out simple closed invariant curves; some significant fraction of phase space is affected by islands in the SoS that surround multiply periodic orbits. Chaotic motion, in which consequents fill an area rather than lie on a closed curve in the SoS, is extensive only for the greatest energy.

The paths in the meridional plane of three multiply periodic orbits are shown in Fig. A2. They correspond to the bold red invariant curves in the second, fourth, and fifth SoS panels of Fig. A1. The last orbit has almost equal radial and vertical periods and circulates only clockwise in the meridional plane. Since we plot only when the particle passes R_h with Ṙ > 0, the consequents lie only in the first quadrant of the (z, ż) plane. Were we to plot the other crossing instead, where Ṙ < 0, the invariant curve would be a reflected image in the second quadrant. Also, had we chosen the negative root for initial radial velocity, the particle would have circulated only counterclockwise and yielded a 180°-rotationally symmetric invariant curve in the third quadrant.
This accounts for the skewness of the invariant curves in the SoS, which diminishes for particles of lower energy.

The area in the SoS enclosed by only those orbits having a simple invariant curve yields a numerical estimate of the vertical action Jz.⁵ We find all the consequents for a particle as its motion advances over 7 × 10⁴ dynamical times from its position in the frozen, azimuthally averaged potential, and make a numerical estimate of the area integral (eq. 14). We repeat this exercise at both the initial and final times in the simulation.

REFERENCES

Abadi, M. G., Navarro, J. F., Steinmetz, M. & Eke, V. R. 2003, ApJ, 597, 21
Beers, T. C., Flynn, K. & Gebhardt, K. 1990, AJ, 100, 32
Bensby, T., Feltzing, S., Lundström, I. & Ilyin, I. 2005, A&A, 433, 185
Binney, J. 2010, MNRAS, 401, 2318
Binney, J. J. & Lacey, C. G. 1988, MNRAS, 230, 597
Binney, J. & Tremaine, S. 2008, Galactic Dynamics. Princeton Univ. Press, Princeton (BT08)
Bird, J. C., Kazantzidis, S. & Weinberg, D. H. 2011, MNRAS, submitted (arXiv:1104.0933)
Bournaud, F. & Combes, F. 2002, A&A, 392, 83
Bournaud, F., Elmegreen, B. G. & Martig, M. 2009, ApJ, 707, L1
Bovy, J., Rix, H-W., Liu, C., Hogg, D. W., Beers, T. C. & Lee, Y. S. 2011, arXiv:1111.1724
Brook, C. B., Gibson, B. K., Martel, H. & Kawata, D. 2005, ApJ, 630, 298
Brunetti, M., Chiappini, C. & Pfenniger, D. 2011, A&A, 534, A75
Camm, G. L. 1950, MNRAS, 110, 305
Chiba, M. & Beers, T. C. 2000, AJ, 119, 2843
Debattista, V. P., Mayer, L., Carollo, C. M., Moore, B., Wadsley, J. & Quinn, T. 2006, ApJ, 645, 209
D'Onghia, E., Springel, V., Hernquist, L. & Keres, D. 2010, ApJ, 709, 1138
Evans, N. W. & Read, J. C. A. 1998, MNRAS, 300, 106
Friedli, D., Benz, W. & Kennicutt, R. 1994, ApJL, 430, L105
Fuhrmann, K. 2008, MNRAS, 384, 173
Gilmore, G. & Reid, N. 1983, MNRAS, 202, 1025
Grenon, M. 1999, Ap. Sp. Sci., 265, 331
Hänninen, J. & Flynn, C. 2002, MNRAS, 337, 731
Helmi, A., Navarro, J. F., Nordström, B., Holmberg, J., Abadi, M. G. & Steinmetz, M. 2006, MNRAS, 365, 1309
Hohl, F. 1971, ApJ, 168, 343
Ivezić, Ž. et al. 2008, ApJ, 684, 287
Jurić, M. et al. 2008, ApJ, 673, 864
Kazantzidis, S., Bullock, J. S., Zentner, A. R., Kravtsov, A. V. & Moustakas, L. A. 2008, ApJ, 688, 254
Kregel, M., van der Kruit, P. C. & de Grijs, R. 2002, MNRAS, 334, 646
Lee, Y. S. et al. 2011, ApJ, 738, 187
Loebman, S. R., Roskar, R., Debattista, V. P., Ivezić, Ž., Quinn, T. R. & Wadsley, J. 2011, ApJ, 737, 8
Majewski, S. R. 1993, ARAA, 31, 575
Martínez-Serrano, F. J., Serna, A., Doménech-Moral, M. & Domínguez-Tenreiro, R. 2009, ApJ, 705, L133
Mestel, L. 1963, MNRAS, 126, 553
Monaghan, J. J. 1992, ARAA, 30, 543
Minchev, I. & Famaey, B. 2010, ApJ, 722, 112
Minchev, I., Famaey, B., Combes, F., Di Matteo, P., Mouhcine, M. & Wozniak, H. 2011, A&A, 527, A147
Munn, J. A. et al. 2004, AJ, 127, 3034
Quillen, A. C., Minchev, I., Bland-Hawthorn, J. & Haywood, M. 2009, MNRAS, 397, 1599
Quinn, P. J., Hernquist, L. & Fullagar, D. P. 1993, ApJ, 403, 74
Raboud, D., Grenon, M., Martinet, L., Fux, R. & Udry, S. 1998, A&A, 335, L61
Reddy, B. E., Lambert, D. L. & Allende Prieto, C. 2006, MNRAS, 367, 1329
Roskar, R., Debattista, V. P., Quinn, T. R., Stinson, G. S. & Wadsley, J. 2008a, ApJL, 684, L79
Roskar, R., Debattista, V. P., Stinson, G. S., Quinn, T. R., Kaufmann, T. & Wadsley, J. 2008b, ApJL, 675, L65
Sánchez-Blázquez, P., Courty, S., Gibson, B. K. & Brook, C. B. 2009, MNRAS, 398, 591
Scannapieco, C., White, S. D. M., Springel, V. & Tissera, P. B. 2011, MNRAS, 417, 154
Schönrich, R. & Binney, J. 2009, MNRAS, 396, 203
Schönrich, R. & Binney, J. 2009, MNRAS, 399, 1145
Schönrich, R. & Binney, J. 2012, MNRAS, 419, 1546
Sellwood, J. A. 1985, MNRAS, 217, 127
Sellwood, J. A. 2010, in Planets Stars and Stellar Systems, v.5, ed. G. Gilmore (Heidelberg: Springer), to appear (arXiv:1006.4855)
Sellwood, J. A. 2012, ApJ, submitted
Sellwood, J. A. & Athanassoula, E. 1986, MNRAS, 221, 195
Sellwood, J. A. & Binney, J. J. 2002, MNRAS, 336, 785
Sellwood, J. A. & Carlberg, R. G. 1984, ApJ, 282, 61
Sellwood, J. A. & Debattista, V. P. 2009, MNRAS, 398, 1279
Sellwood, J. A. & Kahn, F. D. 1991, MNRAS, 250, 278
Sellwood, J. A. & Preto, M. 2002, in "Disks of Galaxies: Kinematics, Dynamics and Perturbations", eds. E. Athanassoula & A. Bosma, Astron. Soc. Pac. (San Francisco)
Sellwood, J. A. & Valluri, M. 1997, MNRAS, 287, 124
Shen, J. & Sellwood, J. A. 2004, ApJ, 604, 614
Spitzer, L. 1942, ApJ, 95, 329
Spitzer, L. & Schwarzschild, M. 1953, ApJ, 118, 106
Toomre, A. 1977, ARAA, 15, 437
Toomre, A. 1981, in "The Structure and Evolution of Normal Galaxies", eds. S. M. Fall & D. Lynden-Bell (Cambridge: Cambridge Univ. Press), p. 111
Toomre, A. 1982, ApJ, 259, 535
van der Kruit, P. C. & Freeman, K. C. 2011, ARAA, 49, 301
Villalobos, Á. & Helmi, A. 2008, MNRAS, 391, 1806
Weinberg, M. D. 1998, MNRAS, 297, 101
Wyse, R. F. G. 2009, in "The Galaxy Disk in Cosmological Context", IAU Symposium 254, eds. J. Andersen, J. Bland-Hawthorn & B. Nordström (Cambridge: Cambridge University Press), p. 179 (arXiv:0809.4516)
Yoachim, P. & Dalcanton, J. J. 2006, AJ, 131, 226
York, D. G., et al. 2000, AJ, 120, 1579
Yu, J-C., Sellwood, J. A., Pryor, C., Hou, J-L. & Li, C. 2012, ApJ, submitted
Zang, T. A. 1976, PhD thesis, MIT
Guaranteed energy-efficient bit reset in finite time
Cormac Browne,¹ Andrew J. P. Garner,¹ Oscar C. O. Dahlsten,¹,² and Vlatko Vedral¹,²

¹ Atomic and Laser Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
² Center for Quantum Technologies, National University of Singapore, Republic of Singapore
Landauer's principle states that it costs at least k_B T ln 2 of work to reset one bit in the presence of a heat bath at temperature T. The bound of k_B T ln 2 is achieved in the unphysical infinite-time limit. Here we ask what is possible if one is restricted to finite-time protocols. We prove analytically that it is possible to reset a bit with a work cost close to k_B T ln 2 in a finite time. We construct an explicit protocol that achieves this, which involves thermalising and changing the system's Hamiltonian so as to avoid quantum coherences. Using concepts and techniques pertaining to single-shot statistical mechanics, we furthermore prove that the heat dissipated is exponentially close to the minimal amount possible, not just on average, but guaranteed with high confidence in every run. Moreover, we exploit the protocol to design a quantum heat engine that works near the Carnot efficiency in finite time.
DOI: 10.1103/PhysRevLett.113.100603
https://arxiv.org/pdf/1311.7612v2.pdf
arXiv:1311.7612
(Dated: September 16, 2014)

Introduction.- Landauer's principle [1-3] states that resetting a bit or qubit in the presence of a heat bath at temperature T costs at least k_B T ln 2 of work, which is dissipated as heat. It represents the fundamental limit to heat generation in (irreversible) computers, which is extrapolated to be reached around 2035 [4]. The principle is also a focal point of discussions concerning how thermodynamics of quantum and nano-scale systems should be formulated. Of particular interest to us here is the single-shot approach to statistical mechanics [5-8]. This concerns statements regarding what is guaranteed to happen, or not, in any single run of an experiment, as opposed to what happens on average. This distinction is important, for example, in nano-scale computer components, in which large heat dissipation in individual runs of the protocol could cause thermal damage, even if the average dissipation is moderate.
In [5] Landauer's principle was assumed to hold in the strict sense that one can reset a uniformly random qubit at the exact work cost of $kT \ln 2$ in each run of an experiment. This assumption can be shown to be justified if one allows quasistatic protocols [6,8]. Real experiments take place in finite time [9][10][11][12][13][14][15][16][17][18]. Our present Letter is motivated by the concern that fluctuations might be much greater in finite-time scenarios, and that the single-shot optimality expressions to date may therefore not be physically relevant. Therefore we extend the protocol for bit reset used in [5] to the finite-time regime and analyse what changes. In this regime thermalisation is imperfect, and the quantum adiabatic theorem fails, so that one cannot a priori assume that shifting energy levels does not change occupation probabilities. Moreover, there are correlations between occupation probabilities at different times. We prove analytically that it is in fact possible to reset a qubit in finite time at a guaranteed work cost of $kT \ln 2$, up to some small errors. We also make a natural extension of our results to a qubit heat engine which operates at the Carnot efficiency in finite time up to a small error. We derive bounds on the errors, proving that they fall off exponentially or doubly exponentially in the time taken for the protocol.

A quasistatic bit reset protocol.- We examine a simple two-level system, with access to a heat bath and a work reservoir, which we will manipulate with a time-varying Hamiltonian in the regime set out by [19]. The evolution of the system takes place through two mechanisms [19] (see also [8,20]): 1. Changes to the energy spectrum, identified with work cost/production $dE_i$ for occupied energy level $i$, which have no effect on the occupation probabilities. 2.
Changes to probability distributions via interactions with a heat bath (thermalisation), with no changes to the energy spectrum, and thus no associated work cost/production. Initially, the two degenerate energy levels of a random qubit are equally likely to be populated. The system is coupled to a heat bath at temperature $T$, and one energy level is quasistatically and isothermally raised to infinity, until the lower energy level is definitely populated (see Fig. 1). If we want to perform bit reset ("erasure" in Landauer's terminology), the system is then decoupled from the heat bath, and the second energy level is returned to its original value, such that the initial and final energy level configurations are the same, and only the populations have changed. In the quasistatic limit, each stage of raising the energy level has an average cost of $P_2\,dE$, where $P_2$ is the (thermal) population of the upper energy level. Thus, raising the second level from 0 to infinity, the work cost of the entire protocol is given by

$W = \int_0^{\infty} P_2(E)\,dE = k_B T \ln 2.$  (1)

FIG. 2: A few steps of the protocol. Thermal (dashed) and actual (solid) upper energy level populations with respect to time. At each time, $t(n)$, the upper energy level is raised, altering the thermal state.

We now describe how these two elementary steps are impacted by the extension to finite time.

Level-shifts in finite time: the negative role of coherence.- A development of coherences during the level shifts would increase the amount of work that must be invested to perform a reset. For example, suppose the initial state is $\rho_i = p(a)|a\rangle\langle a| + p(b)|b\rangle\langle b|$, where $|a\rangle, |b\rangle$ are the energy eigenstates with occupation probabilities $p(a), p(b)$ respectively. Suppose the Hamiltonian changes very quickly and the new energy eigenstates are $|a'\rangle, |b'\rangle$.
Using the scheme of [21], we perform a projective measurement in the new energy eigenbasis, $|a'\rangle, |b'\rangle$, and define the work input of the step as the energy difference between the initial energy eigenstate and the final. The average work cost can be written as

$\langle W \rangle = p(b)\left[p(b \to b')(E_{b'} - E_b) + p(b \to a')(-E_b)\right] + p(a)\left[p(a \to a')\cdot 0 + p(a \to b')\,E_{b'}\right] = p(b)(E_{b'} - E_b) + \left(p(a) - p(b)\right)p(a \to b')\,E_{b'},$  (2)

where $E_i$ is the energy associated with state $i$, and $p(a \to b')$ indicates the probability of transitioning from $a$ to $b'$. (In our protocol, $E_a = E_{a'} = 0$; the second line follows by noting that the transition probabilities are doubly stochastic due to the Born rule.) Since $E_{b'} > E_b \geq 0$ and $p(a) \geq p(b)$, in order for this expression to contribute the least work cost possible, we set the transition probability $p(a \to b')$ to zero, i.e. we do not cause any coherent excitations. We therefore avoid coherence in one of two ways: (i) by allowing the experimenter to choose a path of Hamiltonians that share the same energy eigenstates and only differ in energy eigenvalues, as in the standard model for Zeeman splitting (Appendix A 1); (ii) if the experimenter is instead given a fixed Hamiltonian path which is not of this kind, she may actively remove the coherences by applying an extra unitary (Appendix A 2). This method of active correction is similar to the strategy employed in super-adiabatic processes [22][23][24]. In either case, at any point of the protocol the density matrix of the system will be diagonal in the instantaneous energy eigenbasis.

Partial thermalisation can be represented by partial swap matrices.- The stochastic 'partial swap' matrix is equivalent to all other classical models of thermalisation for a two-level system [25,26]. In this model, the heat bath is a large ensemble of thermal states which have some probability $p$ of being swapped with the system state in a given time interval.
We model a period of extended thermalisation as being composed of a series of $t$ partial swap matrices, each representing a unit time step. Multiplying these together gives a single swap matrix with probability $P_{sw}(t)$ of swapping with the thermal state:

$P_{sw}(t) = 1 - (1-p)^t.$  (3)

(See Appendix B 1 for details.) The influence of a finite thermalisation time, $t$, manifests as a degradation of the thermalisation quality through a lowered $P_{sw}(t)$.

The average extra work cost of finite time is exponentially suppressed.- It is possible to quantify to what extent the state at the end of a period of thermalisation differs from a thermal state by employing the trace distance $\delta(\rho, \sigma) := \frac{1}{2}\mathrm{Tr}\sqrt{(\rho-\sigma)^\dagger(\rho-\sigma)}$ (see e.g. [27]). Throughout the protocol we monotonically raise the energy of the second level and thus monotonically decrease the occupation probability of the second level. This makes it possible to bound the trace distance after a period of thermalisation by the trace distance between the desired thermal state and the occupation probability with degenerate energy levels:

$\delta \leq \left[\frac{1}{1 + \exp(-\beta E_2)} - \frac{1}{2}\right](1-p)^t.$  (4)

(Full derivation in Appendix B 2.) We thus see that the occupation probabilities get exponentially close to the thermal distribution as a function of the number of time steps spent thermalising. It follows that the average work $\langle W \rangle$ required to raise the upper level from zero to some maximum energy $E_{max}$, only thermalising for $t$ time steps at each stage, is bounded by

$\langle W \rangle_{quasi} \leq \langle W \rangle \leq \langle W \rangle_{quasi} + (1-p)^t\left(\frac{E_{max}}{2} - \langle W \rangle_{quasi}\right),$  (5)

where $\langle W \rangle_{quasi}$ is the quasistatic work cost of raising the second level to $E_{max}$:

$\langle W \rangle_{quasi} = k_B T \ln\left[\frac{2}{1 + \exp\left(-\frac{E_{max}}{k_B T}\right)}\right].$  (6)

(Full derivation in Appendix B 3.) Eq. 5 shows that the average extra work cost of reset decreases exponentially in the thermalisation time, $t$.
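As a sanity check on Eqs. 3, 5 and 6, the protocol can be simulated directly: raise the level in small steps and apply the partial swap update after each one. The following sketch is not from the paper; the function name `avg_reset_work` and all parameter values are illustrative assumptions. It checks that the simulated average work lands between the quasistatic cost and the finite-time upper bound:

```python
import math

def avg_reset_work(N=400, Emax=8.0, kT=1.0, p=0.1, t=20):
    """Average work to raise the upper level from 0 to Emax in N steps,
    applying t partial swaps (swap probability p each) after every step."""
    dE = Emax / N
    P2 = 0.5                       # degenerate start: upper level half occupied
    W = 0.0
    for n in range(1, N + 1):
        W += P2 * dE               # raising the level costs P2*dE (mechanism 1)
        gap = n * dE
        P2_th = math.exp(-gap / kT) / (1.0 + math.exp(-gap / kT))
        Psw = 1.0 - (1.0 - p) ** t           # effective swap probability, Eq. (3)
        P2 = (1.0 - Psw) * P2 + Psw * P2_th  # partial thermalisation (mechanism 2)
    return W

kT, Emax, p, t = 1.0, 8.0, 0.1, 20
W = avg_reset_work(Emax=Emax, kT=kT, p=p, t=t)
W_quasi = kT * math.log(2.0 / (1.0 + math.exp(-Emax / kT)))   # Eq. (6)
upper = W_quasi + (1.0 - p) ** t * (Emax / 2.0 - W_quasi)     # Eq. (5)
assert W_quasi <= W <= upper
```

With these numbers $W_{quasi} \approx k_B T \ln 2$, and the simulated work sits only slightly above it, well inside the bound of Eq. 5.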
Work fluctuations in a single-shot reset are doubly exponentially suppressed.- The preceding discussion applies to the average work cost over many bit resets. However, the average work cost is not always the most useful parameter if we wish to design a physical device which performs a bit reset. Another practical parameter is the guaranteed work from single-shot thermodynamics, which provides an upper bound on the work required per reset, $W^\epsilon_{max}$. One could envision scenarios where, if this value is exceeded, the device breaks: either because the work must ultimately be dissipated as heat, which above a certain level could damage the device, or because the work reservoir itself has a finite capacity and would be exhausted by an attempt to draw on too large a value. To derive such a parameter, or other similar parameters (including the average work value), it is useful to generate a work distribution: the probability density function for the work cost of the bit reset process. It has been established in the quasistatic case that the work cost in any procedure involving shifting energy levels (such as bit reset) is tightly peaked around the average value [6,8]. Applying the method of Egloff and co-authors [8], the deviation from the average work in the quasistatic case when raising the second level by $E$ per step over $N$ steps is bounded by:

$P(|W - \langle W \rangle| \geq \omega) \leq 2\exp\left(-\frac{2\omega^2}{N E^2}\right).$  (7)

This result arises from modelling the work input as a stochastic process, and applying the McDiarmid inequality [28], which bounds the deviation from the mean value. By thermalising fully in every step, the question of which energy level is occupied at each step can be represented by a sequence of independent random variables with distributions given by the thermal populations associated with the particular energy gaps. The work cost is a function of these random variables.
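The perfectly-thermalising picture behind Eq. 7 is easy to probe numerically: each step's occupation is an independent Bernoulli variable with the thermal probability, and the empirical tail of the resulting work distribution should sit below the McDiarmid bound. A minimal Monte Carlo sketch follows (not from the paper; all parameter values and helper names are illustrative):

```python
import math, random

random.seed(0)
kT, E, N = 1.0, 0.05, 200        # N raises of size E, perfect thermalisation

def p2_th(gap):
    """Thermal population of the upper level at a given energy gap."""
    return math.exp(-gap / kT) / (1.0 + math.exp(-gap / kT))

# average work cost: sum of thermal populations times the step size
mean_W = E * sum(p2_th(n * E) for n in range(1, N + 1))

def sample_W():
    """One run: each step's occupation is an independent thermal coin flip."""
    return E * sum(random.random() < p2_th(n * E) for n in range(1, N + 1))

omega, trials = 0.6, 20000
hits = sum(abs(sample_W() - mean_W) >= omega for _ in range(trials))
empirical = hits / trials
bound = 2.0 * math.exp(-2.0 * omega ** 2 / (N * E ** 2))   # Eq. (7)
assert empirical <= bound
```

The empirical deviation probability is far below the bound, consistent with the work distribution being tightly peaked around its mean.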
We note that altering the energy level at one stage in the perfectly thermal regime will only affect the total work cost by the difference associated with that one step. However, with partial thermalisation, the states the system is in at two different stages are not independent of each other. Altering the energy level of one stage has a knock-on effect on the work cost for all subsequent stages. We show in Appendix C 1 that this effect falls off in a manner inversely proportional to the probability of swapping, and thus derive a new finite-time version of Eq. 7, which explicitly takes into account this sensitivity:

$P(|W - \langle W \rangle| \geq \omega) \leq 2\exp\left(-\frac{2\omega^2 P_{sw}(t)^2}{N E^2}\right).$  (8)

When $t \to \infty$ such that $P_{sw}(t) \to 1$, we recover the perfect thermalisation case. As $P_{sw}(t)$ is itself an exponential function of time (Eq. 3), we see this limit rapidly converges on the quasistatic variant as $t$ increases. Hence, fluctuations are suppressed as a double exponential function of time spent thermalising. When $P_{sw} = 0$ the bound becomes meaningless, as changing what happens at one stage in the protocol can have an unbounded effect on the final work cost. We can combine this spread with the average result in Eq. 5 to determine $W^\epsilon_{max}$: the maximum work cost in a single shot, guaranteed to be exceeded only with failure probability $\epsilon$ (mathematically, $W^\epsilon_{max}$ is defined to satisfy $\int_{-\infty}^{W^\epsilon_{max}} P(W)\,dW = 1 - \epsilon$ for work distribution $P(W)$):

$W^\epsilon_{max} \leq \left(1 - (1-p)^t\right)\langle W \rangle_{quasi} + \frac{1}{2}(1-p)^t E_{max} + \frac{1}{1 - (1-p)^t}\sqrt{\frac{\ln(2/\epsilon)}{2N}}\, E_{max}.$  (9)

(Full derivation in Appendix C 2.) From Eqs. 5, 7 & 9 we see that spending time thermalising has a two-fold benefit: it lowers the average cost, and it reduces the spread of possible work values around this average.

Reset error is exponentially suppressed.- Initially, we have complete uncertainty about the system: the energy levels are degenerate and the particle is equally likely to occupy either of them.
At the end of the raising procedure, the system has been 'reset' to one energy level, with probability of failure, $P_{fail}$ (defined as the occupation probability of the second energy level), upper bounded by (using the probability distribution derived in Appendix B 2)

$P_{fail} \leq \frac{\exp(-\beta E_{max}) + (1-p)^t}{1 + \exp(-\beta E_{max})} - \frac{1}{2}(1-p)^t.$  (10)

The work cost of resetting multiple bits scales favourably.- In [5] the general protocol involves resetting $n$ qubits in the state $\rho = 1/2 \otimes 1/2 \otimes \cdots \otimes 1/2$, and we now consider how the errors scale in this case. Recall that for a single qubit, thermalising for time $t$ bounds the trace distance $\delta$ between the actual state and the true thermal state according to Eq. 4. This results in an additional work cost of up to $\delta E_{max}$ across the protocol. For $n$ bits, in the very worst case each bit will be $\delta$ away from its thermal state, and so trivially the extra work cost will be bounded by $n\delta E_{max}$. Thus the average work cost of reset scales linearly with the number of bits (as in the quasistatic case). This argument can be naturally modified to the single-shot case (more detail in Appendix D), and one may therefore say that the multi-qubit statements in [5,7] also remain relevant in the case of finite time.

Work extractable in finite time.- Although we have thus far concentrated on bit reset, it is also possible to consider the inverse protocol: work extraction by lowering the upper energy level in contact with a hot bath. The derivation of this proceeds exactly as before, with the limits of the integration reversed. This allows us to extract work bounded by:

$-\langle W \rangle_{quasi} \leq \langle W_{out} \rangle \leq -\langle W \rangle_{quasi} + (1-p)^t\left(\frac{E_{max}}{2} - \langle W \rangle_{quasi}\right).$  (11)

If we have access to two baths at different temperatures, we can extract net work by running a bit reset coupled to a cold bath at temperature $T_C$, followed by a work extraction coupled to a hot bath at temperature $T_H$. The engine cycle formed (shown in Fig.
3) has net work exchange per cycle bounded by

$\langle W_{net} \rangle \leq -P_{sw}(t)\,k_B(T_H - T_C)\ln 2 + P_{sw}(t)\,k_B\left(T_H \ln(Z^{max}_H) - T_C \ln(Z^{max}_C)\right) + \left(1 - P_{sw}(t)\right)E_{max} = P_{sw}(t)\,\langle W_{net} \rangle_{quasi} + \left(1 - P_{sw}(t)\right)E_{max},$  (12)

where $Z^{max}_{C/H} = 1 + \exp\left(-E_{max}/k_B T_{C/H}\right)$ and $\langle W_{net} \rangle_{quasi}$ is the difference between the work input (Eq. 5) and the work output (Eq. 11) at the appropriate temperatures. The final line follows because the first two terms are the same as in the quasistatic case. The final term is independent of the temperatures of the baths; it instead accounts for the effect of finite time on the degree of overpopulation of the upper level when raising, and of underpopulation of the upper level when lowering. Adjusting the energy level in finite steps has necessitated a move away from the truly isothermal process of the quasistatic regime. In a two-level system, any distribution where the lower energy level is more populated than the upper may be written as a thermal state for some temperature. During bit reset, after the upper energy level is raised, it is over-populated with respect to the thermal population associated with the cold bath temperature. We can interpret this as a rise in the system temperature, appearing as the saw-tooth pattern in Fig. 3.

There is a speed limit for positive power output.- By reconciling the concept of time with these processes, we can now talk meaningfully about the power of the cycle, $\mathcal{P}$: the net work exchange divided by the total time taken to complete a full cycle of bit reset and extraction. The total time is found by multiplying the amount of time spent thermalising in one stage, $t$, by the number of stages in both halves of the protocol, $N = 2E_{max}/E$.
We can upper bound the power of the entire cycle as a function of the maximum energy gap $E_{max}$ and of $t$:

$\mathcal{P} \leq \frac{P_{sw}(t)\,\langle W_{net} \rangle_{quasi} + \left(1 - P_{sw}(t)\right)E_{max}}{2\frac{E_{max}}{E}\,t}.$  (13)

A positive $\mathcal{P}$ indicates the system will draw energy in from its surroundings, and therefore we note that the engine does not produce work in all parameter regimes. If one attempts to operate an engine (with $\langle W_{net} \rangle_{quasi} < 0$) quicker than the limit

$t < \frac{-1}{\ln(1-p)}\ln\left(1 - \frac{E_{max}}{\langle W_{net} \rangle_{quasi}}\right),$  (14)

then the partial thermalisation can potentially contribute an excess work cost greater than the engine's quasistatic work output.

Near-Carnot efficiency can be achieved in finite time.- The efficiency $\eta$ of a cycle may be defined as the net work extracted divided by the maximum work value of one bit of information ($-k_B T_H \ln 2$). We write this as

$\eta_{quasi} - (1-p)^t\,\frac{E_{max}}{k_B T_H \ln 2} \leq \eta \leq \eta_{quasi},$  (15)

where $\eta_{quasi}$ is the quasistatic efficiency of raising to $E_{max}$ over an infinitely long time, and can be related to the Carnot efficiency $\eta_C$ by

$\eta_{quasi} = \eta_C - \frac{\ln(Z^{max}_H) - \frac{T_C}{T_H}\ln(Z^{max}_C)}{\ln 2}.$  (16)

We see that if $t$ is small, the cost of raising the populated upper energy level takes its toll on the efficiency, potentially plunging it into negative values (work loss) when $t$ does not satisfy Eq. 14. Conversely, as $t \to \infty$, the process is maximally efficient, but has no power.

Conclusion.- We have shown that one can reset a qubit in finite time at a guaranteed work cost of $kT \ln 2$, up to some errors that fall off dramatically in the time taken for the protocol. As mentioned, several key results in single-shot statistical mechanics assume that this can be achieved perfectly [5][6][7][8][29]. Our Letter accordingly shows that the optimality statements of those papers are still relevant to finite-time protocols. Moreover, our results naturally extend to show that the Carnot efficiency can be achieved by a qubit engine in finite time, up to an error that falls off exponentially in the cycle time.
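The engine bounds above can be illustrated numerically. The sketch below is not from the paper: the helper names `W_net_bound` and `t_min` and all parameter values are illustrative. It evaluates the quasistatic net work entering Eq. 12, the speed limit of Eq. 14, and the quasistatic efficiency of Eq. 16, checking that the net-work bound changes sign at the predicted thermalisation time:

```python
import math

kB, TC, TH = 1.0, 1.0, 2.0       # cold and hot bath temperatures
Emax, p = 6.0, 0.05              # maximum gap and per-step swap probability

ZC = 1.0 + math.exp(-Emax / (kB * TC))      # Z^max_C
ZH = 1.0 + math.exp(-Emax / (kB * TH))      # Z^max_H
# quasistatic net work per cycle (negative = net output), cf. Eq. (12)
W_net_quasi = (-kB * (TH - TC) * math.log(2.0)
               + kB * (TH * math.log(ZH) - TC * math.log(ZC)))

def W_net_bound(t):
    """Upper bound on net work input per cycle after t thermalising steps."""
    Psw = 1.0 - (1.0 - p) ** t              # Eq. (3)
    return Psw * W_net_quasi + (1.0 - Psw) * Emax

# Eq. (14): minimum thermalisation time for guaranteed net work output
t_min = -1.0 / math.log(1.0 - p) * math.log(1.0 - Emax / W_net_quasi)

eta_quasi = -W_net_quasi / (kB * TH * math.log(2.0))  # cf. Eq. (16)
eta_C = 1.0 - TC / TH
assert W_net_quasi < 0 and 0 < eta_quasi < eta_C
# the bound on net work flips sign across the speed limit of Eq. (14)
assert W_net_bound(math.floor(t_min)) > 0 > W_net_bound(math.ceil(t_min))
```

For these illustrative numbers the quasistatic efficiency lies below Carnot, and thermalising for fewer steps than $t_{min}$ turns the engine into a net consumer of work, exactly as Eq. 14 predicts.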
Acknowledgments.- We acknowledge fruitful discussions with Alexia Auffèves, Felix Binder, Dario Egloff, John Goold, Kavan Modi, Felix Pollock and Nicole Yunger Halpern. We are grateful for funding from the EPSRC (UK), the Templeton Foundation, the Leverhulme Trust, the EU Collaborative Project TherMiQ (Grant Agreement 618074), the Oxford Martin School, the National Research Foundation (Singapore), and the Ministry of Education (Singapore).

Appendix A

1. We can avoid inducing quantum coherences when raising energy levels by choosing to describe the energy levels by Hamiltonians which are diagonal in the same basis at each stage, written as:

$H(n) = \begin{pmatrix} E_1(n) & 0 \\ 0 & E_2(n) \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & nE \end{pmatrix}.$  (A1)

Because they are all diagonal in the same basis, they will always commute: $[H(n), H(m)] = 0$ for all $n$ and $m$. This is a reasonable regime to implement physically, for example by varying two hyperfine levels in an atom through the application of a time-dependent external magnetic field. Defining $P_1(n)$ and $P_2(n)$ as the populations at stage $n$ of the lower and upper energy levels respectively, we write the density matrix $\rho(n)$ in the energy eigenbasis as:

$\rho(n) = \begin{pmatrix} P_1(n) & 0 \\ 0 & P_2(n) \end{pmatrix}.$  (A2)

In order to apply the Schrödinger equation, we can interpolate to a continuous case and consider the transition from stage $n$ to stage $n+1$ as given by a time parameter $t$ that varies from 0 to $\tau$ in each step:

$H(n,t) = \begin{pmatrix} 0 & 0 \\ 0 & E\left(n + t/\tau\right) \end{pmatrix},$  (A3)

such that $H(n,\tau) = H(n+1,0)$. By using the Schrödinger equation to generate an equation for unitary evolution from stage $n$ to stage $n+1$, as we have chosen an $H$ which commutes with itself throughout the process, we can write down:

$U(n \to n+1) = \exp\left(-\frac{i}{\hbar}\int_0^\tau H(t)\,dt\right).$  (A4)

$H(t)$ is always diagonal, and so it follows that $\int_0^\tau H(t)\,dt$ must also be diagonal. Taking the exponent of this sum multiplied by a constant ($i/\hbar$) implies that $U$ must also be a diagonal operator. The evolution of a mixed state from time $t_i = 0$ (i.e.
step $n$) to time $t_f$ can thus be written as

$\rho(n+1) = U(n \to n+1)\,\rho(n)\,U(n \to n+1)^\dagger.$  (A5)

Provided we always start in a diagonal state, all terms on the right-hand side are diagonal, and thus $\rho(m)$ is diagonal for all $m$. In fact, we can go further and explicitly calculate $U$ due to the simple form of Eq. A3:

$U(n \to n+1) = \exp\left(-\frac{i}{\hbar}\begin{pmatrix} 0 & 0 \\ 0 & E\left(n + \frac{1}{2}\right) \end{pmatrix}\tau\right) = \begin{pmatrix} 1 & 0 \\ 0 & \exp\left(-\frac{i}{\hbar}E\left(n + \frac{1}{2}\right)\tau\right) \end{pmatrix}.$  (A6)

This shows us that the only effect of the evolution is to introduce a phase between the upper and lower energy levels, $\phi = E\left(n + \frac{1}{2}\right)\tau/\hbar$. If the upper and lower energy levels are an incoherent mixture, then this transformation will have no effect whatsoever on the final state; rewriting Eq. A2 as $\rho(n) = P_1(n)|E_1(n)\rangle\langle E_1(n)| + P_2(n)|E_2(n)\rangle\langle E_2(n)|$, the evolution in Eq. A5 reduces to

$\rho(n+1) = U\left(P_1(n)|E_1(n)\rangle\langle E_1(n)| + P_2(n)|E_2(n)\rangle\langle E_2(n)|\right)U^\dagger = P_1(n)|E_1(n)\rangle\langle E_1(n)| + P_2(n)\,e^{-i\phi}|E_2(n)\rangle\langle E_2(n)|\,e^{i\phi} = \rho(n).$  (A7)

2. The cost of coherent excitations can be actively reversed.

If the different Hamiltonians are not intrinsically diagonal, we can consider an energy level step as taking us from the initial Hamiltonian $H = \sum_i E_i |E_i\rangle\langle E_i|$ to the final Hamiltonian $\tilde{H} = \sum_i \tilde{E}_i |\tilde{E}_i\rangle\langle\tilde{E}_i|$, where not only have the eigenvalues changed, but there is also a change of basis. A change in basis can always be associated with a unitary $U$ such that

$|\tilde{E}_i\rangle\langle\tilde{E}_i| = U|E_i\rangle\langle E_i|U^\dagger,$  (A8)

and one can express $\tilde{H} = \sum_i \tilde{E}_i\, U|E_i\rangle\langle E_i|U^\dagger$. Consider the initial state of the system, given by a density matrix written diagonally in the initial Hamiltonian basis: $\rho = \sum_i P_i |E_i\rangle\langle E_i|$. The system energy before the change in Hamiltonian is defined as $\mathrm{Tr}(\rho H)$ and afterwards as $\mathrm{Tr}(\rho\tilde{H})$, and so we see that the change in system energy (and thus, by conservation of energy, the work cost drawn from the reservoir) is given by $\Delta W = \mathrm{Tr}(\rho\tilde{H}) - \mathrm{Tr}(\rho H).$
(A9) A visualisation of why this might result in a work cost is apparent in Figure 4: the state $\rho$ appears higher in the sphere with respect to $\tilde{H}$ than with respect to $H$. One must take care, however, to note that the energy scale along the vertical axis of each Bloch sphere is different. For bit reset, the energy gap is increasing, and $\tilde{E}_2 - \tilde{E}_1 > E_2 - E_1$. If we were to let the system decohere, the final work cost would be exactly this value (the density matrix in the first term is replaced by a new density matrix with the terms off-diagonal in $\tilde{H}$ removed; but as these terms only appear inside a trace, they do not contribute to the value). However, we may instead apply an active unitary transformation on $\rho$ to bring it into a new state $\rho'$, which is diagonal in the new energy eigenbasis. Noting that $\rho = \sum_i P_i|E_i\rangle\langle E_i| = \sum_i P_i U^\dagger|\tilde{E}_i\rangle\langle\tilde{E}_i|U$, the obvious choice is to take $\rho' = U\rho U^\dagger$, given as

$\rho' = \sum_i P_i |\tilde{E}_i\rangle\langle\tilde{E}_i|.$  (A10)

If we now consider the overall change in energy between the initial state before the change in Hamiltonian, and the final state after the change in Hamiltonian followed by the application of the correcting unitary:

$\Delta W = \mathrm{Tr}(\rho'\tilde{H}) - \mathrm{Tr}(\rho H)$  (A11)

$= \mathrm{Tr}\left(\sum_i P_i|\tilde{E}_i\rangle\langle\tilde{E}_i| \sum_j \tilde{E}_j|\tilde{E}_j\rangle\langle\tilde{E}_j|\right) - \mathrm{Tr}\left(\sum_i P_i|E_i\rangle\langle E_i| \sum_j E_j|E_j\rangle\langle E_j|\right)$  (A12)

$= \sum_i P_i\left(\tilde{E}_i - E_i\right),$  (A13)

and noting that $\Delta E_i = \tilde{E}_i - E_i$, this recovers the same work cost as if there had been no coherences at all:

$\Delta W = \sum_i P_i\,\Delta E_i.$  (A14)

We have compensated for our passive transformation $U$ on the Hamiltonian basis by actively applying the same transformation to the density matrix.

Appendix B: Average work cost

1. Partial swap model of thermalisation.

Consider now the process of thermalisation.
At stage $n$, when the energy levels are split by $\Delta E = nE$, the associated thermal populations are given by the Gibbs distribution:

$\mathbf{P}^{th}(n) = \frac{1}{1 + e^{-\beta nE}}\begin{pmatrix} 1 \\ e^{-\beta nE} \end{pmatrix}.$  (B1)

Consider a stochastic transformation in which we have a probability $P_{swap}$ of replacing the current state with the appropriate thermal state, and probability $(1 - P_{swap})$ of leaving the state alone. This is expressed as the stochastic matrix

$M(n) = (1 - P_{swap})\,\mathbb{1} + P_{swap}\,M_{th}(n),$  (B2)

where

$M_{th}(n) = \begin{pmatrix} P^{th}_1(n) & P^{th}_1(n) \\ P^{th}_2(n) & P^{th}_2(n) \end{pmatrix}$  (B3)

and $P^{th}_i(n)$ is the $i$th component of $\mathbf{P}^{th}(n)$. We verify that this matrix has the defining behaviour of a thermalising process by considering its eigenvectors, and noting that it evolves all probability distributions towards the thermal distribution. Under the assumption that we raise the energy levels in such a way that the population of the levels is undisturbed, we can write a recursive relationship between the populations $\mathbf{P}(n)$ at the end of each stage (that is, the populations having adjusted the energy level for the $n$th time, and then allowed it to partially thermalise):

$\mathbf{P}(n) = M(n)^{t(n)}\,\mathbf{P}(n-1),$  (B4)

where $t(n)$ is the number of times we apply the partial thermalisation matrix at stage $n$. We attempt instead to express this energy level population as the ideal thermal distribution $\mathbf{P}^{th}(n)$ perturbed by a small difference:

$\mathbf{P}(n) = \mathbf{P}^{th}(n) + \boldsymbol{\delta}(n).$  (B5)

We calculate the correction term $\boldsymbol{\delta}$ explicitly by noting that the probability of not swapping with $\mathbf{P}^{th}(n)$ in any stage is given by $(1 - P_{swap})^{t(n)}$, and so our correction is to subtract this amount of the thermal population and add instead the same amount of the population of the previous stage:

$\boldsymbol{\delta}(n) = (1 - P_{swap})^{t(n)}\left(-\mathbf{P}^{th}(n) + \mathbf{P}(n-1)\right).$  (B6)

Further note that, as $\mathbf{P}^{th}(n)$ and $\mathbf{P}(n)$ are both valid probability distributions, the components of $\boldsymbol{\delta}$ must sum to zero.
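The claimed behaviour of the partial swap matrix is easy to verify numerically: acting with $M(n)$ for $t$ steps moves any distribution towards the Gibbs distribution, with the deviation shrinking exactly as $(1-p)^t$. A short check follows (not from the paper; the gap and swap probability are illustrative):

```python
import numpy as np

beta, E = 1.0, 1.5        # inverse temperature and energy gap (illustrative)
p = 0.3                   # swap probability per time step

Z = 1.0 + np.exp(-beta * E)
P_th = np.array([1.0, np.exp(-beta * E)]) / Z      # Gibbs distribution, Eq. (B1)
M_th = np.column_stack([P_th, P_th])               # Eq. (B3): full thermalisation
M = (1.0 - p) * np.eye(2) + p * M_th               # Eq. (B2): partial swap

P0 = np.array([0.5, 0.5])                          # degenerate starting point
for t in [1, 5, 20]:
    Pt = np.linalg.matrix_power(M, t) @ P0
    # the deviation from thermal shrinks exactly as (1-p)^t, cf. Eq. (3)
    assert np.allclose(Pt - P_th, (1.0 - p) ** t * (P0 - P_th))
```

Since $M_{th}$ maps every normalised distribution to $\mathbf{P}^{th}$, one application of $M$ contracts the deviation by a factor $(1-p)$, which is the geometric suppression used throughout the appendix.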
This can specifically be seen to be the case here by noting that the second component of each contributing part of $\boldsymbol{\delta}$ is just one minus the first component. Indeed, for a two-level system, we can express $\boldsymbol{\delta}$ as:

$\boldsymbol{\delta}(n) = \pm\delta(n)\begin{pmatrix} 1 \\ -1 \end{pmatrix},$  (B7)

where $\delta$ is a quantity also known as the variational distance (see e.g. [27]). The sign of $\boldsymbol{\delta}$ depends on whether we are raising or lowering the upper energy level. Incomplete thermalisation after raising the upper energy level leaves a higher probability of being in the upper level than the true thermal probability. There is a lower-than-thermal probability when lowering the upper energy level.

2. Bounding the variational distance.

We can bound the variational distance $\delta(n)$ by noting that, since we increase the energy of the second level throughout the protocol, the effect of thermalisation is always to increase the occupation probability of the first energy level, so that $P^{(n-1)}_1 \geq P^{(0)}_1$. This can be seen from Eq. B6. Thus we have:

$\delta(n) = (1-p)^t\left[\frac{1}{1 + \exp(-n\beta E)} - P^{(n-1)}_1\right] \leq (1-p)^t\left[\frac{1}{1 + \exp(-n\beta E)} - \frac{1}{2}\right].$  (B8)

Having derived a bound on the variational distance, we can now write the probability distribution (after some rearranging) as:

$P_1(n) \geq \frac{1 - (1-p)^t}{1 + \exp(-n\beta E)} + \frac{1}{2}(1-p)^t, \qquad P_2(n) \leq \frac{\exp(-n\beta E) + (1-p)^t}{1 + \exp(-n\beta E)} - \frac{1}{2}(1-p)^t.$  (B9)

3. Bounding the average work cost.

In the worst-case scenario, at each stage before raising the energy level, we are exactly $\delta$ away from the true thermal population. Raising the energy level by $E$ at stage $n$ has an associated work cost of $P_2(n)E = \left(P^{th}_2(n) + \delta\right)E$. Writing out $P^{th}_2(n)$ explicitly (using Eq. B1), and summing over all $N$ stages in the bit reset, the (worst-case) work cost is

$\langle W \rangle = \sum_{n=0}^{N}\left[\frac{\exp\left(-\frac{nE}{k_B T}\right)}{1 + \exp\left(-\frac{nE}{k_B T}\right)} + (1-p)^t\left(\frac{1}{1 + \exp\left(-\frac{nE}{k_B T}\right)} - \frac{1}{2}\right)\right]E.$
(B10) For small $E$, we can approximate this sum as an integral and, writing $E\,dn = dE_2$ and $NE = E_{max}$:

$\langle W \rangle \approx \int_0^{E_{max}} dE_2\left[\frac{\exp\left(-\frac{E_2}{k_B T}\right)}{1 + \exp\left(-\frac{E_2}{k_B T}\right)} + (1-p)^t\left(\frac{1}{1 + \exp\left(-\frac{E_2}{k_B T}\right)} - \frac{1}{2}\right)\right] = \left(1 - (1-p)^t\right)k_B T \ln\left[\frac{2}{1 + \exp\left(-\frac{E_{max}}{k_B T}\right)}\right] + \frac{1}{2}(1-p)^t E_{max}.$  (B11)

Note that the term in the denominator of the argument of the logarithm is just the partition function associated with an energy level splitting of $E_{max}$. If $E_{max} \to \infty$, then this term tends to 1, and provided we have perfect thermalisation, such that $\delta = 0$, we recover the quasistatic Landauer cost $k_B T \ln 2$ (as derived in Eq. 1).

Appendix C: Single-shot work cost

1. Bounding the work cost fluctuations.

In a single-shot classical regime, we assume that at the end of each stage the system is in one of the two energy levels, and we can express each choice of energy level as a sequence of random variables $\{X_i\}_{i=1\ldots N}$. Note that at each stage, by raising the splitting of the energy levels by $E$, if the system is in the upper energy state at that stage, then the stage contributes a work cost of $E$. With this in mind, it is useful to label the two energy levels as 0 or 1, such that the work contribution at each stage is given by $X_i E$, and thus the actual work cost of a bit reset is given by the function acting on the random variables:

$W(X_1, \ldots, X_N) = E\sum_{i=1}^{N} X_i.$  (C1)

It is possible to take the average of this function over some or even all of the random variables. For example, if we take the average over all $X_1 \ldots X_N$ we arrive at:

$\langle W(X_1, \ldots, X_N)\rangle_{X_1 \ldots X_N} = E\sum_{i=1}^{N}\langle X_i\rangle = E\sum_{i=1}^{N} P_2(i) = \langle W \rangle,$  (C2)

where $\langle W \rangle$ is the value we would typically call the average work cost: the average work cost of the procedure calculated before we know the outcome of any of the random variables. This is the value that we have calculated in the prior sections of this article.
There is, in fact, a series of $N$ intermediate stages between $\langle W \rangle$ and $W$, in which, given knowledge of the first $n$ values of $X_i$ (that is, the exact cost of the first $n$ steps of the procedure), we make an estimate of what the final work cost will be. This series evolves as a random walk starting at the average value $\langle W \rangle$ and finishing at the actual value $W$. Thus, if the first $n$ steps of the protocol are in energy levels $X_1 = x_1$, $X_2 = x_2$, etc., then we write the series $D(n)$ as:

$D(n) = \langle W(x_1, \ldots, x_n, X_{n+1}, \ldots, X_N)\rangle_{X_{n+1} \ldots X_N}.$  (C3)

$D(n)$ undergoes a special type of random walk known as a Doob martingale. It is a martingale because at every step $n$, the expected value of the next step is the value of the current step:

$\langle D(n+1)\rangle = D(n).$  (C4)

For Doob's martingale, this is true by construction, as $D(n)$ is defined to be the expectation value of $W$ over future steps. There is a statistical result known as the Azuma inequality which bounds how far a martingale random walk is likely to deviate from its initial value. When specifically applied to a Doob martingale, this gives us the special case known as the McDiarmid inequality (see 6.10 in [28]), which bounds how far the actual value ($D(N) = W$) deviates from the expectation value ($D(0) = \langle W \rangle$):

$P(|W - \langle W \rangle| \geq \omega) \leq 2\exp\left(-\frac{2\omega^2}{\sum_{n=1}^{N}|c_n|^2}\right),$  (C5)

where $c_n$ is the maximum amount by which our estimate of the work will change on knowing the outcome of the random variable $X_n$. For pedagogical purposes, we calculate this expression first for the quasistatic regime of perfect thermalisation (as discussed in the appendices of Egloff and coauthors [13]). In this regime, each $X_i$ is an independent random variable, with a probability distribution given by the thermal populations associated with that energy level (Eq. B1). Switching any particular $X_i$ from 0 to 1 or vice versa will therefore have at most an effect of $E$ on $W$. Hence, $c_i = E$ for all $i$, and so we arrive at Eq.
7 (repeated here for clarity):

$P(|W - \langle W \rangle| \geq \omega) \leq 2\exp\left(-\frac{2\omega^2}{N E^2}\right).$

When we enter the finite-time regime, in which thermalisation is only partially achieved, the $X_i$ are no longer perfectly independent. If we treat $P_{swap}$ as the probability over the entire period of thermalisation that we exchange our system with the thermal state, then $X_{i+1}$ will take the value of $X_i$ with probability $(1 - P_{swap})$, and only with probability $P_{swap}$ will it be given by the random thermal distribution. To calculate the impact of exchanging one stage $X_n$, we must explicitly evaluate the difference between $D(n) = \langle W(x_1, \ldots, x_{n-1}, 0, X_{n+1}, \ldots, X_N)\rangle_{X_{n+1} \ldots X_N}$ and $\langle W(x_1, \ldots, x_{n-1}, 1, X_{n+1}, \ldots, X_N)\rangle_{X_{n+1} \ldots X_N}$. To evaluate this, it is necessary to consider the expected work cost at every stage of the protocol between $n$ and the end ($N$), and how this changes depending on the value of $X_n$. We recall that the stochastic matrix for evolution between a state $n$ and $n+1$ can be written as $M(n)$, given in Eq. B2. The stochastic evolution from state $n$ to $n+k$ is thus given by the left-product of matrices

$\mathcal{M}(n \to n+k) = M(n+k-1)\cdots M(n+1)M(n).$  (C6)

The form of $M(n)$ allows us to simplify this product. Over $k$ steps there are $k+1$ possible outcome states of the system, corresponding to swapping with one of the thermal states $\mathbf{P}^{th}(n+j)$ (where $j = 0 \ldots k-1$) or doing nothing at all. Because of the nature of a swap operation (in particular, because $P_{swap}$ is independent of the state of the system), only the final swap is important: all intermediate swaps will be 'overwritten'. This means there is always a probability of $P_{swap}$ that the system is in the state it last had some chance of swapping with (i.e. $\mathbf{P}^{th}(n+k-1)$). Provided it has not swapped with this state, there is then a chance $P_{swap}$ that the system has swapped into the state before it, giving an overall probability of swapping into this state of $(1 - P_{swap})P_{swap}$.
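The 'last swap wins' bookkeeping can be checked with a quick Monte Carlo: simulate $k$ swap opportunities, record which one fired last, and compare the frequencies with $(1 - P_{swap})^{k-j-1}P_{swap}$. A minimal sketch (not from the paper; all parameter values are illustrative):

```python
import random

random.seed(1)
P_swap, k = 0.4, 5
trials = 200000
counts = {j: 0 for j in range(k)}   # which swap opportunity fired last
none = 0                            # runs where no swap occurred at all

for _ in range(trials):
    last = None
    for j in range(k):
        if random.random() < P_swap:
            last = j                # later swaps overwrite earlier ones
    if last is None:
        none += 1
    else:
        counts[last] += 1

for j in range(k):
    # probability that the state after k steps came from the swap at step j
    predicted = (1 - P_swap) ** (k - j - 1) * P_swap
    assert abs(counts[j] / trials - predicted) < 0.01
# the only way to be unchanged is to have missed every opportunity
assert abs(none / trials - (1 - P_swap) ** k) < 0.01
```

The simulated frequencies reproduce the geometric weighting that appears in the linear form of the composed matrix.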
Working backwards with this logic, we see that the probability of the system after $k$ steps being in the state associated with swapping after $j$ steps is $(1 - P_{swap})^{k-j-1}P_{swap}$. Finally, we note that the only way for the system to have not changed at all is for it to have not swapped at any of the opportunities, and this has a probability of $(1 - P_{swap})^k$. With all of this in mind, we can now write out $\mathcal{M}(n \to n+k)$ in a simple linear form:

$\mathcal{M}(n \to n+k) = (1 - P_{sw})^k\,\mathbb{1} + P_{sw}\sum_{j=0}^{k-1}(1 - P_{sw})^{k-j-1}M_{th}(n+j),$  (C7)

where $M_{th}(i)$ (as defined in Eq. B3) is the matrix that perfectly exchanges any state with the thermal state $\mathbf{P}^{th}(i)$. If $P_{swap} = 1$, then $\mathcal{M}$ reduces to the thermalising matrix for the final stage, $M_{th}(n+k-1)$. The expected energy cost of step $n+k$, given that the state starts in the lower energy level, is then given by the bottom-left component of $\mathcal{M}$ multiplied by $E$, and the expected cost if the state starts in the upper energy level is given by the bottom-right component multiplied by $E$. We can thus express the difference between these two values as

$E\begin{pmatrix} 0 & 1 \end{pmatrix}\mathcal{M}(n \to n+k)\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$  (C8)

Again considering the special case $P_{swap} = 1$, we note that this value is zero for all $k \geq 1$: the change in the expected contribution from all future steps as a result of altering the current state is zero when the future steps are independent of the current state. Finally, we write out the predicted difference in estimated final work cost, $c_n$, as the sum:

$c_n = E\begin{pmatrix} 0 & 1 \end{pmatrix}\left[\mathbb{1} + \sum_{k=1}^{N-n}\mathcal{M}(n \to n+k)\right]\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$  (C9)

In this most general form, the expression is difficult to evaluate analytically. However, noting that the components of $M_{th}$ are the thermal populations at each stage, which lie in the range $[0,1]$, we can bound the sum componentwise:

$0 \leq \sum_j (1 - P_{sw})^j M_{th}(j) \leq \sum_j (1 - P_{sw})^j\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$  (C10)

Evaluating Eq. C9 effectively involves subtracting the bottom-right element of the matrix in the middle from the bottom-left.
Since a maximal value of $|c_n|$ would be the most deleterious for our bound (causing the widest spread of work values from the mean), we take the upper bound on this sum for the bottom-right element and the lower bound for the bottom-left. This allows us to bound $|c_n|$ by:

$$|c_n| \le E\,\frac{1-(1-P_{\rm sw})^{N-n}}{P_{\rm sw}}. \quad (C11)$$

This tells us that when we are far away from the end of the protocol, such that $n \ll N$, the effect of changing one stage scales like $1/P_{\rm sw}$. Closer to the end of the protocol the effect is diminished, as there are fewer chances to swap, which thus truncates the influence. We consider the sum of terms $\sum_n |c_n|^2$, as required for the McDiarmid inequality (Eq. C5):

$$\sum_n |c_n|^2 \le \frac{E^2}{P_{\rm sw}^2}\sum_n \left(1-(1-P_{\rm sw})^{N-n}\right)^2. \quad (C12)$$

The leading term of the sum is $N$, and so again we can upper-bound this value:

$$\sum_n |c_n|^2 \le \frac{N E^2}{P_{\rm sw}^2}. \quad (C13)$$

By taking only the first term we slightly over-estimate the importance of a change at one step. This approximation encodes the assumption that any change has the full number of chances to influence the state, which means the bound we place on the deviation away from the average value of work is not as tight as it might otherwise be. We finally substitute this value into the McDiarmid inequality to arrive at the claim we made in Eq. 8:

$$P(|W - \langle W \rangle| \ge \omega) \le \exp\left(-\frac{2\,\omega^2 P_{\rm sw}^2}{N E^2}\right).$$

2. Calculating $W^{\epsilon}_{\max}$

We can express the above result in the language of single-shot statistics by calculating the maximum work, except with some probability of failure $\epsilon$, defined:

$$\epsilon := P(W > W^{\epsilon}_{\max}), \quad (C14)$$

or equivalently $W^{\epsilon}_{\max}$ is defined by the integral

$$\int_{-\infty}^{W^{\epsilon}_{\max}} P(W)\,dW = 1-\epsilon. \quad (C15)$$

We can re-centre this definition around the expectation value of work $\langle W \rangle$, such that:

$$P\!\left(W - \langle W \rangle > W^{\epsilon}_{\max} - \langle W \rangle\right) = \epsilon, \quad (C16)$$

and in this form, we note that $\epsilon$ can be bounded by Eq.
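The closed form in Eq. C11 is the partial geometric series summed over the remaining N − n steps, and the bound of Eq. C13 follows by replacing each bracketed factor with 1. Both are easy to confirm numerically; the values of P_sw, N, and E below are assumed for illustration only:

```python
P_sw, N, E = 0.25, 60, 1.0

for n in range(1, N + 1):
    # Partial geometric series over the N - n remaining steps ...
    series = E * sum((1 - P_sw) ** j for j in range(N - n))
    # ... equals the closed form of Eq. C11.
    closed = E * (1 - (1 - P_sw) ** (N - n)) / P_sw
    assert abs(series - closed) < 1e-9

# Eq. C12 summed exactly, then the looser bound of Eq. C13.
total = sum(
    (E * (1 - (1 - P_sw) ** (N - n)) / P_sw) ** 2 for n in range(1, N + 1)
)
assert total <= N * E ** 2 / P_sw ** 2
```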
8, with $\omega = W^{\epsilon}_{\max} - \langle W \rangle$ (see Figure 5):

$$\epsilon \le 2\exp\left(-\frac{2\,(W^{\epsilon}_{\max} - \langle W \rangle)^2 P_{\rm sw}^2}{N E^2}\right). \quad (C17)$$

Re-arranging, we arrive at an upper bound on $W^{\epsilon}_{\max}$:

$$W^{\epsilon}_{\max} \le \langle W \rangle + \frac{E}{P_{\rm sw}}\sqrt{\frac{N \ln(2/\epsilon)}{2}}, \quad (C18)$$

or explicitly in terms of thermalising time (using Eq. 2):

$$W^{\epsilon}_{\max} \le \langle W \rangle + \frac{E}{1-(1-p)^t}\sqrt{\frac{N \ln(2/\epsilon)}{2}}. \quad (C19)$$

We combine this bound with the influence of finite time on $\langle W \rangle$ (from Eq. 5, substituting $E_{\max} = N E$) to get:

$$W^{\epsilon}_{\max} \le \langle W \rangle_{\rm quasi} + \frac{1}{2}(1-p)^t N E + \frac{1}{1-(1-p)^t}\sqrt{\frac{\ln(2/\epsilon)}{2N}}\, N E.$$

Appendix D: Failure probability for resetting many qubits

We note that our calculated $W^{\epsilon}_{\max}$ has failure probability upper bounded by $\epsilon$, such that $P(W > W^{\epsilon}_{\max}) \le \epsilon$. For $n$ random bits, the probability that every bit resets under this limit is given by $(1-\epsilon)^n$, such that we can bound:

$$P(W_{\rm tot} > n W^{\epsilon}_{\max}) \le 1-(1-\epsilon)^n \le n\epsilon, \quad (D1)$$

where the final inequality can be proved by induction. In the worst-case scenario, the failure of a single bit to reset under $W^{\epsilon}_{\max}$ causes the entire protocol to fail. We can thus write a bound on the work cost of resetting $n$ bits as $W^{\epsilon'}_{\max} \le n W^{\epsilon}_{\max}$, where $\epsilon' = 1-(1-\epsilon)^n$; and as $\epsilon' \le n\epsilon$, it follows that $W^{n\epsilon}_{\max} \le n W^{\epsilon}_{\max}$. Hence, when resetting $H_{\max}$ bits:

$$W^{\epsilon}_{H_{\max}} \le H_{\max} W^{\epsilon}_{\max}. \quad (D2)$$

FIG. 1: 'Bit reset' by raising $E_2$ to infinity. The numbers indicate the occupation probability for the energy levels. We consider the extension of this protocol, running in finite time.

FIG. 3: Entropy-temperature diagram for the engine cycle. The block shaded area shows the net work exchanged by the system, which is less than that of the quasistatic (Carnot) cycle shown by the dashed rectangle filled with dots behind. For illustrative purposes, the number of energy level adjustments has been greatly reduced, and the size of the associated shifts in temperature greatly exaggerated.

FIG. 4: Coherent excitation. (a) shows the state in the Bloch sphere of the initial Hamiltonian, and (b) of the final. The state $\rho$, diagonal in the initial Hamiltonian, is a coherently excited state in the final Hamiltonian. The same passive transformation $U$ between initial and final Hamiltonians can be applied actively on $\rho$ to map it to $\rho'$, compensating for the work cost of coherent excitation.

FIG. 5: An example probabilistic work distribution. Work values are always below $W^{\epsilon}_{\max}$, except with a probability of failure given by the area of the shaded region on the right. The sum of the areas of both shaded regions, on the left and right, indicates the probability of failing to be within $\omega$ of $\langle W \rangle$ (as given by Eq. 8).
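The induction closing Eq. D1 is the union-bound statement $1-(1-\epsilon)^n \le n\epsilon$ (equivalently, Bernoulli's inequality). A quick grid check over assumed failure probabilities and bit counts:

```python
# Verify 1 - (1 - eps)**n <= n * eps for a grid of illustrative values.
for eps in (1e-4, 1e-2, 0.1, 0.5):
    for n in (1, 2, 5, 20, 100):
        assert 1 - (1 - eps) ** n <= n * eps + 1e-12
```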
Appendix A: Quantum coherences

1. Quantum coherence during energy level steps can be avoided.
When Screen Time Isn't Screen Time: Tensions and Needs Between Tweens and Their Parents During Nature-Based Exploration

Saba Kawas, Nicole S. Kuhn, Kyle Sorstokke, Emily E. Bascom, Alexis Hiniker, Katie Davis
University of Washington, Seattle, WA 98195
We investigated the experiences of 15 parents and their tween children (ages 8-12, n=23) during nature explorations using the NatureCollections app, a mobile application that connects children with nature. Drawing on parent interviews and in-app audio recordings from a 2-week deployment study, we found that tweens' experiences with the NatureCollections app were influenced by tensions surrounding how parents and tweens negotiate technology use more broadly. Despite these tensions, the app succeeded in engaging tweens in outdoor nature explorations, and parents valued the shared family experiences around nature. Parents desired the app to support family bonding and inform them about how their tween used the app. This work shows how applications intended to support enriching youth experiences are experienced in the context of screen time tensions between parents and tweens during a transitional period of child development. We offer recommendations for designing digital experiences to support family needs and reduce screen time tensions.

CCS CONCEPTS: • Human-centred computing → Empirical studies in HCI; Empirical studies in ubiquitous and mobile computing and design; • Social and professional topics → Early Adolescents.
DOI: 10.1145/3411764.3445142
arXiv: 2103.06938 (https://arxiv.org/pdf/2103.06938v1.pdf)
Additional Key Words and Phrases: child-computer interactions, early adolescence, tweens, parent-child relationship, families, smartphones, outdoor mobile technologies, nature-based explorations
Introduction

Tweens are characterized by a critical stage in development as they transition from childhood into adolescence [51,61]. They begin to separate from their parents, acquiring their sense of autonomous self, and they begin to spend more time with their peers [51,61]. Yet, tweens continue to rely on their parents for support and guidance through this transition [13,51,61]. In the midst of this shift, when tweens are negotiating their relationship with their parents and asserting their autonomy, many tweens receive their own smartphone [69]. As they start experiencing their autonomy and forming their identity, they turn to their devices as portals for exploration and to connect with their peers [5,37]. Researchers have reported that the phone has become a source of tension in parent-tween relationships [4,15,23]; some refer to the phone as a "transitional object" accompanying tweens' development from child to adolescent [15].

Popular media and common wisdom often portray screen time and technology use by tweens as a cause for concern [39]. Prior research describes the tension around technology use in families [1,4,15,43], parents' reactions to their children's technology use [23,60], and parent-teen perspectives on technology rules [23,34]. Parents of tweens report experiencing guilt and concern about the fast pace of their tweens' technology adoption and constant use [23,47,60]. Parents often feel the pressure of managing their tweens' and teens' screen time, including what they are doing with their devices [4,15].
This pressure is particularly salient during early adolescence, when many parents make the initial decision to give their tween a smartphone [15,21,59]. Beyond being blamed for parent-tween relationship tensions, technologies are increasingly portrayed as harmful to our well-being [55,56]. In particular, screen time is often blamed for decreasing tweens' time spent outside in nature [40,42]. Research shows that time spent in nature plays a pivotal role in supporting children's learning, attention, physical health, and mental and emotional wellbeing [7,16,26,33,57]. Furthermore, nature-based activities and meaningful experiences in nature promote children's interest in natural environments, wildlife, and plant species [9,10]. Yet, children today are spending less time in nature than previous generations, due to a variety of factors, such as fewer opportunities to access outdoor spaces in urban areas, parental safety concerns, busy schedules, lack of interest, and increased screen time [35,40,54,67,68,70].

A growing body of research demonstrates how mobile technologies can actually support children's positive, fun experiences outdoors, increase children's time outdoors, and support their engagement with and learning about nature [2,25,28,31,32,38,65,66]. Despite technology-related tensions in families, research also shows how technology can bring families together around shared interests through joint entertainment, play, and learning. The concept of joint media engagement (JME) illustrates the active, collaborative nature of children's technology use when it occurs in the context of family interactions and shared time [53]. Research on families' JME shows how it can support families' learning, playing, and co-creation of media content [3,36,46,49,52,53].
Recently, HCI researchers have studied JME in the context of outdoor spaces [49], specifically when designing for families and their children to support joint gaming and digital play outside. Location-based mobile games such as Pokémon GO and Ingress can motivate children and their parents to spend time outside and engage in enjoyable, fun activities together [29,49]. However, location-based mobile games were not designed specifically to encourage nature explorations by children and their families; moreover, spending time outdoors while searching for augmented reality (AR) creatures is quite different from focusing deliberately on nature interactions.

In prior work, we designed NatureCollections, a mobile application that connects tween children to nature and encourages them to go outside to collect and curate collections of nature photos [30]. The app allows children to classify and describe their nature photos and complete challenges to earn badges. In a prior evaluation study, the NatureCollections app successfully promoted children's curiosity about and connectedness to their natural surroundings [31]. The app succeeded at engaging children in enjoyable, meaningful nature explorations. Additionally, the app design is positioned to encourage nature-based interactions between tweens and their parents due to the app's emphasis on engaging youth in personally relevant activities, supporting focused attention on nature, encouraging social interactions (including with parents and siblings), and providing opportunities for continued joint engagement with nature [30]. In the current work, we are interested in understanding parents' experiences, perspectives, and their family needs around using NatureCollections to support joint family engagement with the app in the context of tweens' nature-based explorations and ways to increase family time spent outdoors.
This work is situated in the broader context of parent-tween tech-related tensions in the adoption of mobile applications such as NatureCollections by tweens and their families [4,15,23,34,43,59,60]. We ask the following research questions:

RQ1: How do parents experience their tweens' NatureCollections app engagement in outdoor exploration during their tweens' transitional stage of tech use?

RQ2: How could design be used to support family needs around tweens' outdoor exploration during a transitional period of tech use?

This study was part of a larger app deployment experiment to evaluate multiple dimensions of tweens' app use. In this study, we examine parents' and tweens' shared experiences using NatureCollections. We provided 23 tweens (including eight sibling pairs), ages 8-12, from 15 unique families a phone with the NatureCollections (NC) app installed. Over the course of two weeks, we audio-captured tweens' and their families' interactions with NatureCollections, using in-app anchored audio recordings [22]. Following the two-week period, we conducted follow-up interviews with the parents.

We found that the NC app successfully engaged tweens in nature-based explorations, and their parents valued their joint family experiences in nature. At the same time, tweens and their parents experienced these app interactions in the context of existing screen-time tensions. This work contributes: (1) empirical evidence from parent interviews and tweens' in-situ NatureCollections app use that describes tweens' and parents' experiences and perspectives on shared family nature activities; (2) insights on how parents and their tweens negotiate technology use during a transitional period of tween child development; and (3) design implications to support joint family nature-based explorations and recommendations for designers to reduce family tensions around screen time.
RELATED WORK

Technology-Related Tensions between Parents and Tweens

Digital devices are an integral part of tweens' modern lives. As tweens undergo a transitional developmental period from childhood into adolescence, many of them receive their own phone, at the average age of ten [69]. Tweens' developmental changes are often accompanied by parent-tween relationship tensions. As tweens begin to explore separation from their parents and establish their independence, they begin to spend less time with them and more time with their peers [13,51,61]. Yet, tweens continue to depend on their parents for instrumental support and guidance through this transition [13,61]. During this phase, tweens negotiate the transition from unilateral parental authority to a parent-child relationship that is marked by a greater degree of cooperation and compromise [13,61]. With this transitional parent-tween dynamic, tweens' phones often become the center of tension in the parent-tween relationship [4,15]. In fact, prior research referred to the phone as a "transitional object" accompanying tweens' developmental stage [15]. Tweens turn to their phones as portals for exploration and engagement with their peers as they form their identity and experience their autonomy. The phone's role also extends to keeping tweens connected to their parents, as they spend increasing amounts of time away from them [4,15,34].

Many parents are worried about their children's screen time and concerned with the amount of time their tween children spend on their mobile devices, playing video games, watching YouTube, and talking to their friends over social media platforms [71]. These concerns are amplified by society and media messaging that stresses the risks of children's screen time (e.g., [56]). Consequently, parents face pressure to mediate their tweens' technology use and enforce screen time limits.
Research has found that children's views about their family's technology rules are somewhat conflicted. Children wanted their parents to guide their technology use and teach them how to be responsible with their devices, but at the same time, they wanted their parents to stop controlling their technology use and let them do what they want with their devices [23]. Prior research on family mediation strategies found that screen time rules focused on when, where, and how much teen children can use their devices. Parents controlled which games, apps, and social sites their tweens were allowed to access. Parental mediation theory describes three parental technology mediation roles: parents as co-users of media with their tween children; parents as monitors, actively monitoring their tween children's technology activities; and restrictive parents, who restrict their children's online access and interactions [12,41]. Researchers have found that parental mediation of children's screen time is more nuanced in practice and is influenced by parent-child relationship dynamics, as well as parents' confidence in their technical knowledge to understand and manage their tween's technology use.

Family Joint Media Engagement

In addition to studying family tensions around screen time and phones, a growing body of research has explored how technologies and digital media use can bring family members together and has documented the benefits of families' co-engagement with digital media. These practices are commonly known as Joint Media Engagement (JME) [53]. When parents and children engage in conversations and meaning-making together, children better understand media content and how it fits their family values.
Prior work examined JME in different contexts such as co-viewing (e.g., watching TV shows or movies) [53], play (e.g., video games) [52], learning (e.g., eBook readings), and augmented reality games (e.g., Pokemon Go) [49]. Joint Media Engagement research on parents' and children's shared game experiences found that virtual and in-person collaboration during and around video gameplay promoted social interactions [52]. These interactions allowed parents and their children to transfer and share knowledge around common interests [52,53]. Prior work found that access to devices created shared family moments during family time, encouraged meaningful family conversations around shared interests, and supported family collaboration and creativity [63,64]. Yu et al. described that families used their mobile devices to look up information to help family decision-making for activities during vacations and achieve consensus among members. Vacation photos that the family took on their devices also helped form positive experiences around shared memories and brought family members together [63]. Sobel et al. found that families that played Pokémon Go, a location-based mobile game, valued that the gameplay increased shared family time outdoors [49]. Parents also felt that playing the game with their children facilitated spontaneous conversations that led to family bonding experiences.

Recently, HCI researchers have been interested in innovating new technologies to bring family members together, from supporting family collaboration and creativity to exploring new forms of interactive communication [20,58,62,64]. Yarosh et al. designed ShareTable, a system that allows parents and children who are separated to interact remotely and play together [62]. Other work investigated tangible storytelling systems to support communication between grandparents and their grandchildren [58]. Ferdous et al.
designed a system, TableTalk, that transforms personal phones into a shared display to enrich family mealtime interactions and experiences [20]. Across all these studies, researchers found that the design of digital experiences plays a meaningful role in fostering joint family engagement, nurturing family connections and shared positive experiences with technology. The current work extends our understanding of families' Joint Media Engagement needs by considering tween-parent interactions in a new context: nature-focused exploration.

Mobile-Based Technologies for Nature Explorations

Our work designing NatureCollections is informed by prior HCI research that harnessed the affordances of mobile technologies to engage children in nature-based explorations. This work has primarily focused on supporting children's social play and science inquiry (e.g., [8,28,50,65]) by leveraging mobile, tangible, augmented reality, and sensor-based features. Interactive games that bridge physical and digital experiences with screenless devices, such as RaPIDO [50] and Scratch Node [24], have been shown to promote children's outdoor social game activities and embodied interactions. Pervasive popular games such as Geocaching [48] leverage location-based mobile features to support players in locating hidden treasures and collecting rewards in their physical world. Other augmented reality games like Pokemon Go [49] and Ingress [29] overlay co-located character avatars in the virtual game onto players' physical surroundings. Players can locate, capture, and battle virtual characters found by navigating spaces in the real world. A key goal across all these game design efforts is to support physical activity in outdoor gameplay and social interactions with other players.

Another body of mobile technology design research aims to support children's science inquiry and nature-based learning in outdoor contexts.
For instance, Tree Investigators [66] and EcoMOBILE [28] leverage mobile and augmented reality capabilities to guide learners during field trips. Learners' physical surroundings are augmented with a virtual overlay of images and information to support their science inquiry. iBeacons [65], GeoTagger [19], and Tangible Flags [11] share learning activities with children based on their location relative to relevant nature elements. Across all these projects, researchers have aimed to promote children's scientific observations and increase their interactions with peers during outdoor field explorations.

Beyond the focus on children's physical play and learning outcomes, designing technologies that promote family joint nature-based explorations remains underexplored in HCI. At the same time, there is growing evidence in the learning sciences field that emphasizes the role of parents in supporting their children's sense-making and scientific observation in outdoor settings [17,44]. Here, we present insights from parent-tween experiences with the NatureCollections app during nature-based explorations and examine how specific affordances and design choices can support family joint nature-based engagement.

Method

In this paper, we examine tweens' and parents' shared experiences using NatureCollections, an app that encourages children to go outside and connect with the natural world [30]. We recruited 23 tweens (including eight sibling pairs), ages 8-12, from 15 unique families to use NatureCollections for a two-week period. This usage was part of a larger experimental deployment study that investigated multiple dimensions of the NatureCollections app [31]. Here, we examine only tweens' and parents' joint media engagement with NatureCollections. Participant demographics for this group are shown in Table 1.
We attempted to recruit a diverse sample with respect to race, household income, and education level using a variety of recruitment strategies. We distributed flyers at local libraries, schools, and community centers throughout the metropolitan region where the study took place. We also posted the study announcement via a campus-wide news post and shared it in a local magazine. Authors used their personal social media accounts to share the study and we posted it on local parent Facebook groups. We had 164 qualified families interested in the study. We divided the tweens' families into two groups based on whether they reported that tween owned a handheld smart device (e.g. smartphone or iPod touch) or not. We then emailed equal numbers of families from each group based on the order they had signed up for the study to schedule the initial interview for the deployment study. We contacted a total of 151 families to schedule the initial interviews, and it took 3 weeks to complete the recruitment for the larger study. We gave families the option to meet us on campus, located in the center of the city, or at their neighborhood library to encourage the participation of lower socioeconomic families. Our final sample skewed towards upper-and middle-class families; however, it mirrors the race distribution of the urban city where the study took place. Families also reported living in neighborhoods that were distributed across the city metropolitan area. NatureCollections App We designed the NatureCollections app in collaboration with children and guided by our interest-centered design framework [30]. NatureCollections encourages tween children to go outside to take photographs of nature, classify the plants, bugs and animals in their photos, and organize them into photo collections based on their species (e.g. insects and mammals). The app features are designed to spark tweens' interest in and connection to nature by supporting prolonged and direct interactions with nature. 
For example, the "Classification" feature, which provides a simple stepped visual prompt to guide the classification scheme for each photo (Figure 1[3: Classification]), allows tweens to direct their attention to the details of the natural elements in their surroundings. Similarly, the "Add Details" feature guides tweens with text-based prompts to enter descriptive information about their photos. These prompts lead tweens to closely examine the subject of their photo and reflect on the characteristics of the specific nature element. In prior work, we found that these features facilitated playful interactions around nature with parents, siblings, and peers [30]. Parents noticed an increase in their tween children's curiosity about their natural surroundings, and they observed that the app supported broader family engagement and conversations around nature elements [30]. To personalize tweens' NatureCollections experiences, the app records their current nature-related interests during the initial onboarding (Figure 1[1: Onboarding]). Tweens are introduced to the app's nature guide, a friendly moose character, who addresses children by their first name and prompts them to choose their interests. These interests are used to support child-driven interactions with nature by matching children's interests with the "Challenges" presented to them on the "HomePage." Tween children can organize their photo collections and create customized "My Collections" tailored to their specific nature interests (Figure 1[2: My Collections]). Children can view the photos they have taken to complete challenges and track their progress towards earning different badges. The app also includes a personalized "Profile Page" where tweens can track their accomplishments, including their challenge progress, photos collected, and earned badges. In addition, tweens can add friends using a unique in-app username to "My Friends" and can view their friends' earned badges, number of photos taken, and completed challenges.
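To make the onboarding-to-challenges flow described above concrete, here is a minimal sketch of how interest-based challenge matching could work. This is our illustrative assumption, not the app's actual implementation; the function and data names (`rank_challenges`, the tag sets) are hypothetical.

```python
# Hypothetical sketch of interest-to-challenge matching (not the real
# NatureCollections code): challenges tagged with nature topics are ranked
# by how many tags overlap the interests a tween picked during onboarding.
def rank_challenges(interests, challenges):
    """interests: set of topic strings; challenges: list of (name, tags).
    Returns challenge names ordered by overlap with the interests,
    breaking ties alphabetically."""
    scored = [(len(interests & set(tags)), name) for name, tags in challenges]
    scored.sort(key=lambda s: (-s[0], s[1]))  # most overlap first, then name
    return [name for _, name in scored]

# Illustrative data only.
interests = {"insects", "flowers"}
challenges = [
    ("Mammal spotting", {"mammals"}),
    ("Bug hunt", {"insects"}),
    ("Garden colors", {"flowers", "insects"}),
]
print(rank_challenges(interests, challenges))
# ['Garden colors', 'Bug hunt', 'Mammal spotting']
```

A real implementation would also weight recency of interest and challenge completion state, but the ranking idea is the same.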
We found that these personalized features extended tweens' engagement and increased social interactions with their siblings and peers, ranging from competing against to collaborating with each other to complete photo collections and challenges [31,32].

Study Procedures

The three-week experimental study consisted of an initial interview with both parents and their tweens, a one-week baseline period, and two weeks of NatureCollections app use. Once families completed all the study procedures, parents and their tweens were invited to participate in final interviews to reflect on their family experiences using the app. During the first week of the study, families logged their tweens' daily activities and technology use, which provided baseline data for each tween. This data is outside the scope of the current investigation and is unrelated to the study research questions posed here. Families then received a phone with the NatureCollections app installed to use for two weeks. After this two-week period, we interviewed parents while their tweens engaged in an outdoor activity, showing us how they had used the app during the study. Families that completed all parts of the study received US$75 gift cards, of which $50 was given after completing the final interviews. Our study was approved by the university Institutional Review Board, and both parents and their tweens provided consent or assent to participate in the study. In this paper, we examined parent-tween experiences in the context of tweens' NatureCollections app use by analyzing: (1) parent self-reported descriptions of their experiences during exit interviews, and (2) tweens' and their families' real-time interactions when using the app, recorded using the Anchored Audio Sampling (AAS) method [22]. During the parent interviews, we used a semi-structured protocol and asked parents about their family's and tweens' experiences with the NatureCollections app during their nature-based explorations.
The length of these interviews ranged from 37 to 51 minutes (M = 43). All interviews were audio recorded and transcribed for data analysis. In-app AAS recordings were 3 minutes long and were captured during tweens' NatureCollections app use. AAS is a remote audio recording technique that is triggered seamlessly in response to a specific interaction to extract qualitative audio snippets during field deployments with children. These AAS recordings capture how children make sense of technologies and how they use them in their everyday life [22]. Each audio recording was randomly triggered either at the start, middle, or closing of a NatureCollections session to sample the full spectrum of a tween's app use experiences. The app displayed a notification on the user interface that audio was being recorded. Recordings were stored locally on the device and then uploaded to our university servers once the device was connected to WiFi. At the end of the study, families had the option to delete some or all of the app recordings, though none of the families chose to delete any recordings. We captured a total of 704 recordings across all 23 tweens during the two-week period (M = 29 files per tween, SD = 19). Seven of the audio files were empty, indicating the participants did not say anything during the audio samplings. All audio files were transcribed verbatim.

Data Analysis

We took a joint inductive-deductive analysis approach to our qualitative data sets [14]: the parent interviews and the app AAS recordings. Two researchers independently open coded 20% of the parent interviews following an inductive approach, and both researchers met regularly to discuss emerging themes and iterate on the codes for consistency [6]. Next, three researchers, including one who participated in coding the parent interviews, read through 20% of the transcribed AAS recordings from the phones, a total of 137 files.
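The random anchoring of AAS clips described above (a 3-minute recording placed at the start, middle, or closing of a session) can be sketched as follows. This is a hedged illustration of the sampling idea, not the AAS authors' code; the session model and names (`clip_window`, `CLIP_SECONDS`) are our assumptions.

```python
# Illustrative sketch of the Anchored Audio Sampling trigger: each clip is
# randomly anchored to the start, middle, or close of an app session so
# the full span of a tween's use is sampled over many sessions.
import random

CLIP_SECONDS = 3 * 60  # recordings were 3 minutes long

def clip_window(session_length, rng=random):
    """Pick a recording window within a session of `session_length`
    seconds, anchored at a random position. Returns (anchor, begin, end),
    clamped so the window never exceeds the session."""
    anchor = rng.choice(["start", "middle", "closing"])
    if anchor == "start":
        begin = 0
    elif anchor == "middle":
        begin = max(0, session_length // 2 - CLIP_SECONDS // 2)
    else:  # closing
        begin = max(0, session_length - CLIP_SECONDS)
    return anchor, begin, min(session_length, begin + CLIP_SECONDS)

anchor, begin, end = clip_window(20 * 60, rng=random.Random(0))
print(anchor, begin, end)
```

For sessions shorter than three minutes, all three anchors collapse to recording the whole session, which matches the empty-file possibility noted above.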
Following a similar inductive approach, the researchers coded the transcripts while writing memos on any new emerging themes, and discussed these themes in team meetings. Finally, following a deductive approach, all researchers looked for similarities across themes in both the parent interviews and the AAS recordings while iterating on the themes and adding any new codes to the codebook. We used Dedoose* to code both qualitative data sets, with one researcher coding the remaining parent interviews and the other researchers splitting the audio recordings and coding them separately. The researchers met regularly to discuss the emerging themes and excerpts from the data.

RESULTS

Two overarching themes emerged from our data sources: (1) families' experiences of tweens' NC app use during nature-based explorations, and (2) concerns and tensions surrounding tweens' technology use during their transitional period of development.

Families' Experiences of Tweens' App Use during Nature Exploration

How tweens engaged their parents and siblings around app use

Tweens engaged with their parents and siblings in joint nature-based explorations during their NC app use, often sharing their app activities and achievements with them. Tweens' app use increased their parents' attention to and engagement with nature elements in their surroundings. Additionally, tweens and their families worked together to make sense of nature elements. Some parents incorporated their pre-existing interests in nature activities and, at times, their knowledge about plants and other species while their tweens used the app. Tweens shared their app photos, activities, and achievements with both parents and siblings during and after their app use. We heard one tween tell his mother: " 12)].
Tweens' app use during regular family activities like going on walks, heading to the grocery store, and going out to dinner prompted an increase in their siblings' and parents' attention to nature and encouraged them to slow down, observe their natural surroundings, and engage in discussions about the photos they were taking. We heard a conversation between one tween and her mother: T12: "There's a snail right here." In some families, tweens' joint app engagement also supported family time outdoors. One mother noted: "We definitely just took more random [family] walks, that the kids initiated just to take photos of nature." She further explained that her daughter and son do not often play together, but since they had the NC app: "They were definitely interacting because of the app and with the app." She also observed her younger son develop an interest in something other than video games, like photography and nature (P12 [T12 (girl, age 12), S12 (boy, age 10)]). Tweens worked with their parents and siblings to make sense of nature elements, inspiring togetherness as they named plants in their photos, engaged in app activities, discussed nature elements in detail, and looked up nature-related app content. One mother, P2, shared: " 10)]. While some tweens competed with their siblings for the highest number of photo collections, badges earned, and challenges completed, many tween sibling pairs also supported each other with other app activities like choosing photo names, adding photo details, and making sense of nature elements to classify photos. For instance, these two sisters helped each other describe a photo of a pine tree and figure out the tree type while going through the classification scheme in the app: T4: "How would you describe the pine tree?" S4: "The pine tree is spiky but soft." T4: "Spiky but soft. Spiky but elegant." S4: "I got half of it." T4: "I was alright typing 'spiky but,' but ok." 9)].
Parents shared that the NC app brought families together during nature activities. For instance, one parent shared how she would regularly go on walks with her tween, but he would usually be scootering a couple blocks ahead of her. When he was using the NC app, in contrast, they walked together, looking at plants, talking about animals, and even the crevices in the sidewalk. "We were actually having a conversation, usually we don't" P1 [T1 (boy, age 10)]. Parents' nature-related interests such as gardening, walking, and hiking regularly provided context for tweens' NC app use. Some tweens also incorporated their parents' love of nature into their photos, with one parent, P13, describing: "It was so pretty like just a sunset and how the trees were. It was just so pretty. He knows that my favorite part of the day is the sunset. He's like, 'I'm going to take this picture, mom.'" P13 [T13 (boy, age 10)]. Parents also shared with their tweens their interest in specific nature elements, like teaching their tweens about different flower names and sharing details about their favorite flower as their tween took photos of it. One mother reported how she often visited their local P-patch with her tweens, noting how they engaged differently in these outings while they were using the NC app: "It was the first time they had really looked around and asked specific questions about the plants around them considering, 'Hey, what are other people growing? What is this?'" P12 [T12 (girl, age 12), S12 (boy, age 10)].

Parents' needs for engagement

After spending two weeks engaging with and observing their tweens' NC app use, parents provided their insights on additional app features that would increase both their tweens' app engagement and their joint family nature-based explorations.
Three main areas of need emerged: (1) app features to increase access to information about nature; (2) app activity sharing to support parent-tween engagement; and (3) new opportunities for family collaboration and competition. Many parents expressed a strong interest in having more app-supported opportunities for their tweens to increase their knowledge about nature. One parent, P7, suggested access to nature information via the app would be a key element for her to support her tween's use of the app: "Having as much information about whatever you take a picture of, would be the most important thing that would get me to download [the app]." P7 [T7 (boy, age 10), S7 (boy, age 6)]. Parents often shared that their tweens wanted the app to help them more as they were choosing names, classifying photos, or working on challenges. Many parents also wanted to use this additional information themselves to engage with their tweens in information seeking about nature and to support nature-focused family conversations. As one mother, P9, suggested: "If it's a beetle, some parents might not know much about beetles but have casual like facts or research about it and you can have a conversation point." P9 [T9 (girl, age 9), S9 (girl, age 8)] Parents shared a common desire to see how their tweens used the app and the photos they took, either through automatic activity reports or through tweens selecting photos to share. One mother, P8, shared her ideas for a reporting feature: "…like having some kind of a little report card. This is what your kid did today, or this week. This is how many pictures he took, very little synopsis of what kind of pictures they were; were they all plants? Were they all animals? … I think it'd be interesting to know what interests him, is it all flowers.. animals..mountain scenes..?" P8 [T8 (boy, age 9), S8 (girl, age 12)].
Parents largely expected this reporting to be facilitated electronically through the app, emails, or text messages rather than by viewing the information on their tweens' phones. They also wanted the app to prompt their tweens to interact with them, as one parent, P3, suggested that tweens could: "…get a notification that they've taken so many pictures and … see a notification like, 'Oh, you've taken 20 pictures. Go share them with the parent,' or, 'Let's talk with somebody about it.'" P3 [T3 (girl, age 9), S3 (girl, age 10)]. Further, parents wanted to keep engaging their tweens by sharing comments on photos, titles, and classifications, and through a chat feature that would allow them to prompt their tween with questions. As one parent observed, this additional engagement through the app could serve as a family conversation starter: "…there's at least little bits I can pull out to have bigger conversations or uncover things I should know that otherwise if I was like, 'Tell me about your day,' I'd never get anywhere." P14 [T14 (girl, age 9)]. Numerous parents wanted even more app-facilitated, family-focused activities, with many expressing a desire for parents and extended family members to participate in collaborative and competitive activities with their tweens. Some also felt that having the app offer these challenges would more effectively engage their tweens in family activities: "I know some kids that would have an issue with something being forced upon them from a parent, like, 'Go do this,' rather than the game actually prompting you to do it … a kid would be much more inclined to follow whatever incentive it is that the game actually promoted it, rather than the parent." P7 [T7 (boy, age 10), S7 (boy, age 6)]. Parents shared many ideas for activities that would increase their tweens' family and app engagement, including daily goal setting and competitions for taking a specific number of photos.
One parent, P1, shared how she envisioned competition with her tween: "

Tweens' Transitional Tech Use

4.2.1 How the NC app fit into parents' screen-time rules

All parents indicated they have screen-time rules for their tweens, including rules about total screen time allowed per day, when and where tweens could use devices, parental screening of apps before downloading, and restrictions around chatting with friends on apps. Parents described the tensions they face around screen-time rules as their tweens transition between childhood and adolescence. A mother, P5, shared her experience with this tension: 10)]. Parents mentioned that they did not count their tweens' time on the NC app as screen time, and the app use therefore did not impact the amount of regular screen time their tween had on other devices. When we asked parents if this was due to participating in the study, they explicitly said it was not due to the study but rather because the NC app encouraged their tweens to go outside. One parent, P14, even suggested the NC app could be successfully marketed to parents with the tagline: "Help balance your child's screen time, help your child spend more time outdoors with their technology." This parent further explained: "They are not even going to have to read much further, they'll just be like, 'I'll spend the US$3.99.'" Parents also identified other aspects of the NC app that they felt set it apart from the typical screen time they try to regulate: (1) Connecting with and learning about nature: Parents expressed strong values associated with supporting their tweens' ability to connect with and learn about nature. One mother, P11, shared: "I think all families, anybody that you ask, will tell you that the struggle is real, trying to push them away or pull them away from this video game, anxiety of winning competition, and having this relationship with people that you don't even see or know or have, there's no real connection.
And I think given the alternative to have this connection with nature, I think it goes a long way." P11 [T11 (boy, age 12), S11 (boy, age 10)]. (2) Family interactions and activities: Parents often remarked positively on the social interactions they observed when they had two children using the app together. A parent of a 12-year-old girl and a 10-year-old boy shared: "Most of the screen time they use is a personal thing. They sit with YouTube or whatever, and it's just them and the screen. The NatureCollections app just encouraged more sharing, especially between the two of them. They would go out together, and they would just talk about what they were going to take pictures of. They would show each other their pictures, so that was definitely different." P12 [T12 (girl, age 12), S12 (boy, age 10)]. (3) Creative engagement: Additionally, parents related other valued interactions to their tweens' NC app use, such as creativity. A mother, P12, shared about her two children: "They had a lot of fun taking pictures with the app. I thought that was good because--Photography is a creative outlet, so I thought that was the positive use for technology to use it in that kind of way." P12 [T12 (girl, age 12), S12 (boy, age 10)]. Parents also noticed that NC app features had prompted their tweens' imagination while interacting with natural elements. One parent, P2, shared an instance where she and her daughter spent some time in a garden and she noticed her tween: "…had a whole different look of imaginations of how the fruits and the vegetables and the pictures. That was kind of fun to see, watch her imagination be a little bit-something that we normally wouldn't do. That she's using her imagination calling, labeling, and trying to describe flowers and things. That was kind of fun." P2 [T2 (girl, age 9)]. Despite these valued outcomes, tensions still arose around phone use and the NC app. For instance, a few parents had family rules that restricted technology use outdoors.
One mother shared: "Because she's 9 and she doesn't have a phone, her go-to attitude about electronics is not to take them outside. As a family, she's not allowed to really take her iPad outside. It's like if you're going to go outside, part of why you're going outside is to disconnect from electronics, not take them with you" P14 [T14 (girl, age 9)]. Another parent affirmed the general lack of familiarity among parents with the concept of tweens using technology outside. When she was asked about her child's use of technology outside during the study, she responded: "I would always be confused by that question, because to me, spending time outside and technology don't go together, just because devices can get damaged. 'Cause when my kids play outside, it's all about being in motion." P11 [T11 (boy, age 12), S11 (boy, age 10)]. When tweens took the phone to new outdoor spaces, they were often challenged with taking responsibility for it. One mother shared how she did not want to carry both of her tweens' phones for the study:

4.2.2 Negotiating independence in phone and app use

Some parents shared that they did not spend much time helping their tween learn how to use the app after its initial setup. One parent commented: " 6)]. Many parents mentioned that they and their tweens preferred this level of independent app use, with one parent stating her daughter's preference for independence using the app: "She likes to do it mostly on her own. The only time we really had interactions if she had a question about something ... Then she was wanting a lot of interaction to try to help her figure out and that was towards the beginning. I think it was still fairly new to her, but after that she didn't ask for a lot of guidance with it. She just likes to do it on her own." P10 [T10 (girl, age 9)]. At the same time, many parents also voiced concern about the safety measures that would be available to parents to monitor and manage tweens' social interactions.
Numerous parents stated a desire to manage their tweens' friends on the app, with one father commenting: "I think the ability to have a lot of control I think as a parent that's, I think, important. If she were to have her own device, we would want control over who she actually makes friend relationships with and it wouldn't be her decision alone. I would expect that to be on the other side as well." P6 [T6 (girl, age 9)]. Parents also expressed concerns about their tweens' activities with friends on the app, including messaging and sharing photos: "It's a double-edged sword. It would be nice to have, but then I can also see it getting out of hand, and maybe they start sharing pictures of all kinds of stupid random things... There would have to be some kind of parental control." P7 [T7 (boy, age 10)]. The same parent suggested parental controls to limit the number of chats being sent to friends and regular reports to parents with the photos shared between friends. Additionally, numerous parents wanted to receive regular updates on their tweens' NC app use, including the amount of time they spent using the app. Several parents also voiced concerns over privacy: "

Another dimension of negotiating independence in phone and app use related to the geographic limitations experienced by the tweens in this study, which impacted where they could use the NC app and their ability to venture out to new spaces without their parents. Parents reported that their tweens mostly used the app in their own yards and in their neighborhoods, close to their homes. When tweens used the app farther from home, they were largely accompanied by a parent, often going on walks, riding bikes, walking their dog, and running errands. One mother shared her tween's limited access to nature spaces: "We don't have a neighborhood where he can go out on his own. It's our backyard or when I was close with him." P1 [T1 (boy, age 10)].
Two siblings in the study discussed their geographic limits: T12: "Okay, are we going? Okay, well bye!" … P12: "Don't go farther than, you know, don't go far" ... T12: "What did she say our limit was?" ... S12: "She didn't say what [our] limit was. I think our limit is up to the fence back there" ... P12: "Geezz, I think the limit is any parking lot. Whoops! We just crossed one! L-O-L X-D." [T12 (girl, age 12), S12 (boy, age 10)]. Sometimes, tweens tried to negotiate with their parents to go to new places, such as the zoo, a nearby beach, or on family hikes to explore and take photos of new nature elements.

DISCUSSION AND DESIGN IMPLICATIONS

In this study, we describe how parents experienced their tweens' NatureCollections app engagement in outdoor exploration and uncover family needs around app use. Our results showed that the NC app succeeded in engaging tweens in nature explorations and that parents valued the family joint activities promoted by the NC app. We also found, however, that the digital experiences intended to connect tweens to nature and enrich their outdoor explorations were influenced by parents' screen-time rules and parent-tween negotiations around technology use. In our discussion, we draw from our empirical insights and prior work to present recommendations for designers to support family joint nature explorations. We also identify several opportunities to reduce parent-tween screen-time tensions during tweens' transitional period of development.

Joint nature-based exploration: Our results showed that the NC app facilitated Joint Media Engagement (JME) [53] by supporting family togetherness during nature-based activities and promoting behaviors that aligned with family values. This work extends our understanding of JME by considering these interactions in a new context: nature-focused exploration involving tweens who are experiencing a transitional stage of technology use.
We uncovered families' needs that highlight parents' desire to support their tweens' autonomy while maintaining connection and shifting some of the burden of guiding their tweens to the app. Similar to prior JME research [20,63], we found that tweens' app activities sparked meaningful family interactions with nature. Parents, however, also expected the NC app to support deeper conversations with their tweens by providing access to nature-related knowledge and by sending them contextual information about their tweens' photos and app activities. Like Sobel et al., our results show that the NC app encouraged families to create more family bonding experiences, including spending more time together outdoors [49]. Parents and their tweens worked together while using the NC app to identify and learn about nature, consistent with prior insights about parents taking on new facilitation roles to support family social learning experiences [44]. Yet, parents in our study also wanted to take on this new role through their own parent version of the NC app, allowing them to remotely share photos, collaborate on activities, and complete family challenges. Additionally, parents wanted the app to facilitate a variety of joint family outdoor activities such as scavenger hunts and time-or location-based challenges. NC app use during a time of transitional tech use: Tweens are in a transitional period of development, seeking to renegotiate parental boundaries as they establish greater personal autonomy [13,61]. These dynamics can lead to tension between tweens and parents related to technology ownership and use [15,34]. On one hand, parents appreciated how the app encouraged their tweens' independence to initiate time outdoors without parental nudging, and they believed their tweens would be more willing to engage in these activities if they were suggested by the app rather than by them. 
On the other hand, tensions emerged around the integration of the NC app into parents' existing screen-time rules. Parents indicated that they normally enforced screen-time limits for their tweens' technology use. However, they made exceptions to these rules for the NC app, citing how the app supported valued family activities that engaged their tweens with positive outcomes like spending time outdoors, connecting with and learning about nature, family bonding activities, and creative activities over passive media consumption. Additionally, tweens negotiated geographic boundaries that determined where they could go without parental supervision, defining the limits of their outdoor app use. These struggles are similar to those in prior work examining parent-teen tensions related to the phone [4,15], but they are shaped in distinct ways by the specific context of outdoor nature exploration. In addition to the needs identified for family joint nature-based engagement, parents expressed concerns about safety and parental controls, including privacy and social connections with non-family members through the app. Parents desired a range of engagement options to address these concerns, mirroring parental mediation styles [12,41]. Some parents wanted to restrict who their tweens could connect with on the app and their level of social engagement; other parents wanted to monitor their tweens' overall app activity; and still other parents wanted to become co-users with their tweens through a parents' version of the app that integrated with their tweens' NC app. With these empirical insights in mind, we offer the following design recommendations:

Facilitate Digital Experiences that Mind the Context Gap

Our findings demonstrate that parents felt they were missing context around their tweens' NC app engagement, including app-related nature knowledge.
Parents explained that lacking this context impacted how and to what extent they reacted and engaged with their tweens' nature-based experiences. Parents' engagement needs spanned the desire to learn what interested their tweens about the photos they were sharing with them, the type of nature element in focus (e.g., plant or bug), and the number of photos and badges their tweens collected. Parents expressed needing access to contextual nature information to increase their own knowledge so they could support their tweens in identifying the species in their photos and surroundings. Parents also desired nature prompts and questions to facilitate sense-making and deeper conversations about nature with their tweens. Previous research in the learning sciences field has found that when parents support their children's observations and sense-making during family nature-based explorations, it facilitates children's development of scientific thinking and shapes their problem-solving skills [17,45]. These findings present a design opportunity to facilitate digital experiences that bridge the context gap around app interactions and nature information to increase joint family nature-based engagement. Designers could consider supporting in-app audio and visual snap features to collect the context of tweens' app use. This data could be displayed alongside the photo metadata, such as location and photo details entered by the tween. The app could incorporate just-in-time features to recall photo-related nature information and nature prompts to support family conversations during their joint nature explorations.

Support Co-Located and Remote Family Activities

In our study, parents expected to jointly interact with their tweens' app activities both while being together and when being apart. Parents had competing demands and responsibilities, yet they desired the app to support continued engagement with their tweens' nature explorations, even when they were physically separated.
Other parents desired features to support family activities during outdoor trips and even suggested having a parent version of the app, where parents and their tweens work together on family collections and challenges. Parents felt that these additional features would increase their joint family engagement around nature and facilitate meaningful family and nature interactions. Prior research found that when parents and children engage in meaningful nature-based experiences together, it advances the children's connection to nature and supports the development of their conservation values and environmental stewardship [18]. Our work suggests that families can share nature-based experiences both when they are physically together and when they are apart. These insights provide a design opportunity to combine co-located and remote digital family activities to support family togetherness in nature-based explorations. Designers could consider facilitating parent-tween remote connection and interaction around nature by supporting features for parents to engage digitally with their tweens' shared photos. For joint family activities, the app design could include co-play features that support collaborative and competitive nature-based challenges to increase family engagement and interactions with nature.

Create Opportunities for Technology to Reduce Screen-Time Tensions

In addition to design needs to increase family joint nature-based engagement, we found that tweens' experiences with the NatureCollections app were influenced by how parents and tweens negotiate screen-time and technology-use tensions. Prior work identified that the sources of parent-tween tensions are influenced by the transitional period of development that accompanies tweens' technology use [13,51,61]. We found that tweens explored their autonomy and independence from their parents when using the NatureCollections app.
Tweens negotiated geographic boundaries that limited where they could go without parental supervision during their outdoor explorations. We also observed that the roles parents adopted to mitigate their concerns about their tweens' privacy, safety, and social interactions with technology matched the parental mediation roles of restrictive, active, and co-use [12,41]. Recent work investigating parental mediation has found that these roles are not discrete but fall on a spectrum that parents employ in different contexts [27]. These findings provide design opportunities to create technology that reduces screen-time tensions by supporting the diversity of parental mediation approaches and augmenting them with strategies that support tweens' autonomy needs. Facilitating tweens' sense of choice is central to their autonomy development and healthy separation from their parents on the path to adolescence and adulthood [13,61]. Designers could consider supporting parent-tween negotiations around screen time by facilitating features that: (1) guide clear limit setting; (2) elicit a meaningful rationale for limits; (3) mitigate conflict by acknowledging tweens' perspectives; and (4) provide choices and options. Designs could support features for tweens to self-direct boundary setting, in agreement with their parents, around which apps to use on their device and for how long, for example by providing easy app drag-and-drop or feature-selection options. In the context of nature-based exploration, the app design could support parents and tweens in jointly establishing geographic boundaries within which tweens can explore. The app might send tweens a notification when they approach those boundaries. Additionally, designs could provide guiding prompts to help parents hold conversations with their tweens about the family's shared values and the rationale for technology restrictions.
These prompts could also engage tweens in reflective practices that encourage them to take ownership of their actions. In these ways, technology designs could promote meaningful input from both parents and their tweens and enable tweens' involvement in decision-making around their technology use.

LIMITATIONS AND FUTURE WORK

Our study has several limitations. Although we made efforts to recruit demographically diverse families, our participants were mostly from middle- to upper-class backgrounds. Although other racial and ethnic demographics mirrored the population of the urban city where the study took place, our sample is not representative of the broader US population. Furthermore, our data is heavily skewed toward mothers representing the parents' perspectives. Although this study includes audio recordings from in-situ app use by the tweens, it does not include tweens' perspectives on their screen time and technology use. Future research could involve participants from culturally and socio-economically diverse families, as well as include greater representation of fathers. Additionally, future research could include tweens' perspectives on their lived experiences during this transitional stage of development, particularly around their transitional tech use and the screen-time tensions that surround those experiences.

CONCLUSION

In this study, we explored parents' and their tweens' experiences of joint family nature engagement in the context of using the NatureCollections app for two weeks. NatureCollections succeeded at engaging tweens in nature-based explorations, and parents valued the activities that the app facilitated, including shared family nature experiences. At the same time, our results show that tweens' app experiences were influenced by screen-time tensions between parents and tweens.
Parents desired a variety of functionalities to support them with their tweens' transitional technology use, in addition to their family needs to increase engagement in nature. We present a set of recommendations for designing technology-based experiences to support family bonding in nature. We highlight opportunities for designers to promote tweens' autonomy, support positive transitional tween-tech use, and reduce family screen-time tensions.

Figure 1: Screens of the Nature Go app. 1: Onboarding ("What are your interests?"). 2: My Collections. 3: Classification.

Table 1: Demographic characteristics of families and their tweens.

  Demographic Variable    NC App*
  Gender: Girl            12
  Gender: Boy             11
  Age 6                   1

"I'm not carrying it, because I already have the dog's stuff, my own stuff, my own water ... they decided they wanted to take it and ... I remember my husband ended up carrying them." P5 [T5 (boy, age 10), S5 (girl, age 8)].

Tweens sought their parents' and siblings' help in taking photos for the app, with one tween asking his father: "Right before you go to bed, do you think you could wake me up and take me outside, cause I wanna take a picture of the moon, cause one of my categories is moons?" AAS [T7 (boy, age 10)].

Sibling pairs were each provided with phones for the study and engaged with one another, sharing photos and achievements while using the app collaboratively. A brother-sister pair talked about their photos and helped each other find nature elements: S5: "There's a lot of types of birds." T5: "Do you have a lot of birds [photos]?" S5: "No. There is a lot of types of birds. Stop." T5: "Let me see." Then the brother pointed to his sister: T5: "There was a bird over there, look." AAS [T5 (boy, age 10), S5 (girl, age 8)].

Many parents noticed that their tweens engaged in outdoor activities differently when using the app. One tween shared with his mother: "Can I show you my favorite picture? I wanna show you my favorite picture. It's amazing, okay. ... I'm doing a night tree series," to which she responded encouragingly, "I like that one, it is a really good picture. Oh, that's a pretty one. Yeah, very good." AAS [T7 (boy, age 10)]. T7's father commented: "He was super jazzed about it. He was showing me every single picture he took that evening that he first tried it out. He had actually some really good pictures. It was cool." P7 [T7 (boy, age 10)].

P8 shared: "He never said anything to me, but he said, 'Oh, I need to take a picture of this marigold.' That was really hard to, but he had never, ever mentioned the marigold that we've had for six years, since before. Yes, it did actually help him interact and verbalize what he's seeing, definitely." P8 [T8 (boy, age 9), S8 (girl, age …)].

Another parent, P14, mentioned: "There was a day when we were coming home from her grandma's house and we were driving through the neighborhood. Then she yelled out, 'Oh, there's that bush! I remember that bush. I took a picture of that bush for the NatureCollections app and then I recorded it, why it was important to me.' I was like, 'Oh, I've never really noticed that bush.' [laughing] She was like, 'I know, now I'm always going to remember because I took a picture with it'." P14 [T14 (girl, age 9)].

P12: "Where? I don't want to step on him!" T12: "He's right there. I'm taking a photo of it." P12: "It's here? Let [your brother] get a picture of it." AAS [T12 (girl, age 12), S12 (boy, age 10)]. This same mother shared with us about her two tweens: "I remember there was this bird on the fence that they were trying to get a picture of, and I remember we were just all sitting around for a while waiting for this bird to come back because it was flitting around. They were just super focused trying to get a picture of this bird. I can't remember if they actually got it [chuckles]." P12.

** T refers to tween, P refers to parent, S refers to sibling.
Parents reported making exceptions to screen-time limits when technology use facilitated a valued family activity. They explained that they viewed positively those technology engagements that supported spending time outdoors, connecting with and learning about nature, social interactions with family, and creative activities, over mindless tech use. One parent, P3, told us: "It would be nice to be able to have something on a device, an activity that was more educational or help them see the world in a different way. Not just sit there. That's my biggest issue with the devices and the TV. You're just sitting there being a zombie. You're plugged in, and that's it." P3 [T3 (girl, age 9), S3 (girl, age …)].

Another parent shared: "I'm kind of trying to just keep the 30-minute lockdown right now, for as long as I can, and that's all he knows. But I feel like that his being in fifth grade next year, the junior high kids start getting phones and there's less limitations on it, so I don't know, it's gonna be a whole different thing to kind of deal with." P5 [T5 (boy, age 10), S5 (girl, age 8)].

* Dedoose (https://www.dedoose.com/) is a web-based research application for analyzing qualitative data in a collaborative asynchronous environment.

ACKNOWLEDGMENTS

We thank the tween children and their parents for participating in our study. Special thanks to Mina Tari and Monica Posluszny for helping our team collect data. This material is based upon work supported by the University of Washington Innovation Award. This project was approved by the Institutional Review Board at the University of Washington (IRB ID: STUDY00002801).

REFERENCES

[1] Tawfiq Ammari, Priya Kumar, Cliff Lampe, and Sarita Schoenebeck. 2015. Managing Children's Online Identities: How Parents Decide What to Disclose About Their Children Online. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), 1895-1904. https://doi.org/10.1145/2702123.2702325
[2] Tetske Avontuur, Rian de Jong, Eveline Brink, Yves Florack, Iris Soute, and Panos Markopoulos. 2014. Play It Our Way: Customization of Game Rules in Children's Interactive Outdoor Games. In Proceedings of the 2014 Conference on Interaction Design and Children (IDC '14), 95-104. https://doi.org/10.1145/2593968.2593973
[3] Rafael Ballagas, Thérèse E. Dugan, Glenda Revelle, Koichi Mori, Maria Sandberg, Janet Go, Emily Reardon, and Mirjana Spasojevic. 2013. Electric agents: fostering sibling joint media engagement through interactive television and augmented reality. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW '13), 225-236. https://doi.org/10.1145/2441776.2441803
[4] Lindsay Blackwell, Emma Gardiner, and Sarita Schoenebeck. 2016. Managing Expectations: Technology Tensions Among Parents and Teens. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW '16), 1390-1401. https://doi.org/10.1145/2818048.2819928
[5] Danah Boyd. 2014. It's Complicated: The Social Lives of Networked Teens. Yale University Press, New Haven, CT, USA.
[6] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2: 77-101. https://doi.org/10.1191/1478088706qp063oa
[7] Colin A. Capaldi, Holli-Anne Passmore, Elizabeth K. Nisbet, John M. Zelenski, and Raelyne L. Dopko. 2015. Flourishing in nature: A review of the benefits of connecting with nature and its application as a wellbeing intervention. International Journal of Wellbeing 5, 4. https://doi.org/10.5502/ijw.v5i4.449
[8] Thomas Chatzidimitris, Damianos Gavalas, and Vlasios Kasapakis. 2015. PacMap: Transferring PacMan to the Physical Realm. In Internet of Things. User-Centric IoT (Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering), 139-144. https://doi.org/10.1007/978-3-319-19656-5_20
[9] Louise Chawla. 2015. Benefits of Nature Contact for Children. Journal of Planning Literature 30, 4: 433-452. https://doi.org/10.1177/0885412215595441
[10] Louise Chawla and Victoria Derr. 2012. The Development of Conservation Behaviors in Childhood and Youth. In The Oxford Handbook of Environmental and Conservation Psychology. https://doi.org/10.1093/oxfordhb/9780199733026.013.0028
[11] Gene Chipman, Allison Druin, Dianne Beer, Jerry Alan Fails, Mona Leigh Guha, and Sante Simms. 2006. A Case Study of Tangible Flags: A Collaborative Technology to Enhance Field Trips. In Proceedings of the 2006 Conference on Interaction Design and Children (IDC '06), 1-8. https://doi.org/10.1145/1139073.1139081
[12] Lynn Schofield Clark. 2011. Parental Mediation Theory for the Digital Age. Communication Theory 21, 4: 323-343. https://doi.org/10.1111/j.1468-2885.2011.01391.x
[13] W. Andrew Collins and Laurence Steinberg. 2006. Adolescent Development in Interpersonal Context. In Handbook of Child Psychology: Social, Emotional, and Personality Development, Vol. 3, 6th ed. John Wiley & Sons, Inc., Hoboken, NJ, US, 1003-1067.
[14] Juliet M. Corbin. 2015. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. SAGE, Thousand Oaks, California.
[15] Katie Davis, Anja Dinhopl, and Alexis Hiniker. 2019. "Everything's the Phone": Understanding the Phone's Supercharged Role in Parent-Teen Relationships. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), 227:1-227:14. https://doi.org/10.1145/3290605.3300457
[16] Raelyne L. Dopko, Colin A. Capaldi, and John M. Zelenski. 2019. The psychological and social benefits of a nature experience for children: A preliminary investigation. Journal of Environmental Psychology 63: 134-138. https://doi.org/10.1016/j.jenvp.2019.05.002
[17] Catherine Eberbach and Kevin Crowley. 2017. From Seeing to Observing: How Parents and Children Learn to See Science in a Botanical Garden. Journal of the Learning Sciences: 1-35. https://doi.org/10.1080/10508406.2017.1308867
[18] Julie Ernst. 2018. Zoos' and Aquariums' Impact and Influence on Connecting Families to Nature: An Evaluation of the Nature Play Begins at Your Zoo & Aquarium Program. Visitor Studies 21, 2: 232-259. https://doi.org/10.1080/10645578.2018.1554094
[19] Jerry Alan Fails, Katherine G. Herbert, Emily Hill, Christopher Loeschorn, Spencer Kordecki, David Dymko, Andrew DeStefano, and Zill Christian. 2014. GeoTagger: A Collaborative and Participatory Environmental Inquiry System. In Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW Companion '14), 157-160. https://doi.org/10.1145/2556420.2556481
[20] Hasan Shahid Ferdous, Bernd Ploderer, Hilary Davis, Frank Vetere, Kenton O'Hara, Geremy Farr-Wharton, and Rob Comber. 2016. TableTalk: integrating personal devices and content for commensal experiences at the family dinner table. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '16), 132-143. https://doi.org/10.1145/2971648.2971715
[21] Arup Kumar Ghosh, Karla Badillo-Urquiola, Mary Beth Rosson, Heng Xu, John M. Carroll, and Pamela J. Wisniewski. 2018. A Matter of Control or Safety?: Examining Parental Use of Technical Monitoring Apps on Teens' Mobile Devices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 194:1-194:14. https://doi.org/10.1145/3173574.3173768
[22] Alexis Hiniker, Jon E. Froehlich, Mingrui Zhang, and Erin Beneteau. 2019. Anchored Audio Sampling: A Seamless Method for Exploring Children's Thoughts During Deployment Studies. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), 1-13. https://doi.org/10.1145/3290605.3300238
[23] Alexis Hiniker, Sarita Y. Schoenebeck, and Julie A. Kientz. 2016. Not at the Dinner Table: Parents' and Children's Perspectives on Family Technology Rules. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW '16), 1376-1389. https://doi.org/10.1145/2818048.2819940
[24] Tom Hitron, Itamar Apelblat, Iddo Wald, Eitan Moriano, Andrey Grishko, Idan David, Avihay Bar, and Oren Zuckerman. 2017. Scratch Nodes: Coding Outdoor Play Experiences to enhance Social-Physical Interaction. In Proceedings of the 2017 Conference on Interaction Design and Children (IDC '17), 601-607. https://doi.org/10.1145/3078072.3084331
[25] Tom Hitron, Idan David, Netta Ofer, Andrey Grishko, Iddo Yehoshua Wald, Hadas Erel, and Oren Zuckerman. 2018. Digital Outdoor Play: Benefits and Risks from an Interaction Design Perspective. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 284:1-284:13. https://doi.org/10.1145/3173574.3173858
[26] Andrew J. Howell, Raelyne L. Dopko, Holli-Anne Passmore, and Karen Buro. 2011. Nature connectedness: Associations with well-being and mindfulness. Personality and Individual Differences 51, 2: 166-171. https://doi.org/10.1016/j.paid.2011.03.037
[27] Hee Jhee Jiow, Sun Sun Lim, and Julian Lin. 2017. Level Up! Refreshing Parental Mediation Theory for Our Digital Media Landscape: Parental Mediation of Video Gaming. Communication Theory 27, 3: 309-328. https://doi.org/10.1111/comt.12109
[28] Amy M. Kamarainen, Shari Metcalf, Tina Grotzer, Allison Browne, Diana Mazzuca, M. Shane Tutwiler, and Chris Dede. 2013. EcoMOBILE: Integrating augmented reality and probeware with environmental education field trips. Computers & Education 68: 545-556. https://doi.org/10.1016/j.compedu.2013.02.018
[29] Pavel Karpashevich, Eva Hornecker, Nana Kesewaa Dankwa, Mohamed Hanafy, and Julian Fietkau. 2016. Blurring Boundaries Between Everyday Life and Pervasive Gaming: An Interview Study of Ingress. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia (MUM '16), 217-228. https://doi.org/10.1145/3012709.3012716
[30] Saba Kawas, Sarah Chase, Jason Yip, Joshua Lawler, and Katie Davis. 2019. Sparking Interest: A Design Framework for Mobile Technologies to Promote Children's Interest in Nature. International Journal of Child-Computer Interaction.
[31] Saba Kawas, Nicole S. Kuhn, Mina Tari, Alexis Hiniker, and Katie Davis. 2020. "Otter this world": can a mobile application promote children's connectedness to nature? In Proceedings of the Interaction Design and Children Conference (IDC '20), 444-457. https://doi.org/10.1145/3392063.3394434
[32] Saba Kawas, Jordan Sherry-Wagner, Nicole Kuhn, Sarah Chase, Brittany Bentley, Joshua Lawler, and Katie Davis. 2020. NatureCollections: Can a Mobile Application Trigger Children's Interest in Nature? 579-592. Retrieved September 16, 2020 from https://www.scitepress.org/Link.aspx?doi=10.5220/0009421105790592
[33] Lucy E. Keniger, Kevin J. Gaston, Katherine N. Irvine, and Richard A. Fuller. 2013. What are the Benefits of Interacting with Nature? International Journal of Environmental Research and Public Health 10, 3: 913-935. https://doi.org/10.3390/ijerph10030913
[34] Ada S. Kim and Katie Davis. 2017. Tweens' perspectives on their parents' media-related attitudes and rules: an exploratory study in the US. Journal of Children and Media 11, 3: 358-366. https://doi.org/10.1080/17482798.2017.1308399
[35] Rachel Tolbert Kimbro, Jeanne Brooks-Gunn, and Sara McLanahan. 2011. Young children in urban areas: Links among neighborhood characteristics, weight status, outdoor play, and television watching. Social Science & Medicine 72, 5: 668-676. https://doi.org/10.1016/j.socscimed.2010.12.015
[36] Marina Krcmar and Drew P. Cingel. 2014. Parent-Child Joint Reading in Traditional and Electronic Formats. Media Psychology 17, 3: 262-281. https://doi.org/10.1080/15213269.2013.840243
[37] Kresimir Krolo. 2010. Mizuko Ito et al., Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media. Revija za Sociologiju 40, 2: 231-234.
[38] Susanne Lagerström, Iris Soute, Yves Florack, and Panos Markopoulos. 2014. Metadesigning interactive outdoor games for children: a case study. In Proceedings of the 2014 Conference on Interaction Design and Children (IDC '14), 325-328. https://doi.org/10.1145/2593968.2610483
[39] Simone Lanette, Phoebe K. Chua, Gillian Hayes, and Melissa Mazmanian. 2018. How Much is "Too Much"?: The Role of a Smartphone Addiction Narrative in Individuals' Experience of Use. Proc. ACM Hum.-Comput. Interact. 2, CSCW: 101:1-101:22. https://doi.org/10.1145/3274370
[40] Lincoln R. Larson, Rachel Szczytko, Edmond P. Bowers, Lauren E. Stephens, Kathryn T. Stevenson, and Myron F. Floyd. 2018. Outdoor Time, Screen Time, and Connection to Nature: Troubling Trends Among Rural Youth? Environment and Behavior. https://doi.org/10.1177/0013916518806686
[41] Sonia Livingstone, Kjartan Ólafsson, Ellen J. Helsper, Francisco Lupiáñez-Villanueva, Giuseppe A. Veltri, and Frans Folkvord. 2017. Maximizing opportunities and minimizing risks for children online: the role of digital skills in emerging strategies of parental mediation. Journal of Communication 67: 82-105.
[42] Richard Louv. 2008. Last Child in the Woods: Saving Our Children from Nature-Deficit Disorder. Algonquin Books. Retrieved September 10, 2017 from http://edrev.asu.edu/edrev/index.php/ER/article/view/1196
[43] Melissa Mazmanian and Simone Lanette. 2017. "Okay, One More Episode": An Ethnography of Parenting in the Digital Age. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17), 2273-2286. https://doi.org/10.1145/2998181.2998218
[44] Lucy R. McClain. 2018. Parent Roles and Facilitation Strategies as Influenced by a Mobile-Based Technology During a Family Nature Hike. Visitor Studies 21, 2: 260-286. https://doi.org/10.1080/10645578.2018.1548844
[45] Lucy R. McClain and Heather Toomey Zimmerman. 2014. Prior Experiences Shaping Family Science Conversations at a Nature Center. Science Education 98, 6: 1009-1032. https://doi.org/10.1002/sce.21134
[46] Lucy Richardson McClain. 2016. Family Learning with Mobile Devices in the Outdoors: Designing an e-trailguide to Facilitate Families' Joint Engagement with the Natural World. Retrieved June 25, 2020 from https://etda.libraries.psu.edu/catalog/28747
[47] Gustavo S. Mesch. 2009. Parental Mediation, Online Activities, and Cyberbullying. CyberPsychology & Behavior 12, 4: 387-393. https://doi.org/10.1089/cpb.2009.0068
[48] Kenton O'Hara. 2008. Understanding geocaching practices and motivations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08), 1177-1186. https://doi.org/10.1145/1357054.1357239
[49] Kiley Sobel, Arpita Bhattacharya, Alexis Hiniker, Jin Ha Lee, Julie A. Kientz, and Jason C. Yip. 2017. It wasn't really about the Pokémon: Parents' Perspectives on a Location-Based Mobile Game. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), 1483-1496. https://doi.org/10.1145/3025453.3025761
[50] Iris Soute, Tudor Vacaretu, Jan de Wit, and Panos Markopoulos. 2017. Design and Evaluation of RaPIDO, A Platform for Rapid Prototyping of Interactive Outdoor Games. ACM Trans. Comput.-Hum. Interact. 24, 4: 28:1-28:30. https://doi.org/10.1145/3105704
[51] Laurence Steinberg and Susan B. Silverberg. 1986. The Vicissitudes of Autonomy in Early Adolescence. Child Development 57, 4: 841-851. https://doi.org/10.2307/1130361
[52] Reed Stevens, Tom Satwicz, and Laurie McCarthy. 2008. In-game, in-room, in-world: Reconnecting video game play to the rest of kids' lives. The Ecology of Games: Connecting Youth, Games, and Learning 9: 41-66.
[53] Lori Takeuchi and Reed Stevens. The New Coviewing: Designing for Learning through Joint Media Engagement. 75.
Frequency of Parent-Supervised Outdoor Play of US Preschool-Aged Children. S Pooja, Chuan Tandon, Dimitri A Zhou, Christakis, Archives of Pediatrics & Adolescent Medicine. 166Pooja S. Tandon, Chuan Zhou, and Dimitri A. Christakis. 2012. Frequency of Parent-Supervised Outdoor Play of US Preschool- Aged Children. Archives of Pediatrics & Adolescent Medicine 166, 8: 707-712. . 10.1001/archpediatrics.2011.1835https://doi.org/10.1001/archpediatrics.2011.1835 Alone together: why we expect more from technology and less from each other. Sherry Turkle, Basic BooksNew York.Sherry Turkle. 2011. Alone together: why we expect more from technology and less from each other. Basic Books, New York. Retrieved December 14, 2018 from http://public.eblib.com/choice/publicfullrecord.aspx?p=684281 IGen: why today's super-connected kids are growing up less rebellious, more tolerant, less happy--and completely unprepared for adulthood (and what this means for the rest of us). Jean M Twenge, Atria BooksNew York, NYJean M. Twenge. 2017. IGen: why today's super-connected kids are growing up less rebellious, more tolerant, less happy--and completely unprepared for adulthood (and what this means for the rest of us). Atria Books, New York, NY. Time spent outdoors during preschool: Links with children's cognitive and behavioral development. Vidar Ulset, Frank Vitaro, Mara Brendgen, Mona Bekkhus, Anne I H Borge, 10.1016/j.jenvp.2017.05.007Journal of Environmental Psychology. 52Vidar Ulset, Frank Vitaro, Mara Brendgen, Mona Bekkhus, and Anne I. H. Borge. 2017. Time spent outdoors during preschool: Links with children's cognitive and behavioral development. Journal of Environmental Psychology 52: 69-80. https://doi.org/10.1016/j.jenvp.2017.05.007 Supporting Communication between Grandparents and Grandchildren through Tangible Storytelling Systems. 
Torben Wallbaum, Andrii Matviienko, Swamy Ananthanarayan, Thomas Olsson, Wilko Heuten, Susanne C J Boll, 10.1145/3173574.3174124Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18)Torben Wallbaum, Andrii Matviienko, Swamy Ananthanarayan, Thomas Olsson, Wilko Heuten, and Susanne C.J. Boll. 2018. Supporting Communication between Grandparents and Grandchildren through Tangible Storytelling Systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 1-12. https://doi.org/10.1145/3173574.3174124 Parental Control vs. Teen Self-Regulation: Is There a Middle Ground for Mobile Online Safety?. Pamela Wisniewski, Arup Kumar Ghosh, Heng Xu, Mary Beth Rosson, John M Carroll, 10.1145/2998181.2998352Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17). the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17)Pamela Wisniewski, Arup Kumar Ghosh, Heng Xu, Mary Beth Rosson, and John M. Carroll. 2017. Parental Control vs. Teen Self-Regulation: Is There a Middle Ground for Mobile Online Safety? In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17), 51-69. https://doi.org/10.1145/2998181.2998352 Reactive": How Parental Mediation Influences Teens' Social Media Privacy Behaviors. Pamela Wisniewski, Haiyan Jia, Heng Xu, Mary Beth Rosson, John M Carroll, 10.1145/2675133.2675293Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '15). the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '15)Preventative" vsPamela Wisniewski, Haiyan Jia, Heng Xu, Mary Beth Rosson, and John M. Carroll. 2015. "Preventative" vs. "Reactive": How Parental Mediation Influences Teens' Social Media Privacy Behaviors. 
In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '15), 302-316. https://doi.org/10.1145/2675133.2675293 Adolescent Relations with Mothers, Fathers and Friends. By James Youniss and Jacqueline Smollar. R M Wrate, 10.1192/S0007125000122354British Journal of Psychiatry. 149The University of Chicago PressPp. 201. £21.25R. M. Wrate. 1986. Adolescent Relations with Mothers, Fathers and Friends. By James Youniss and Jacqueline Smollar. London: The University of Chicago Press. 1985. Pp. 201. £21.25. British Journal of Psychiatry 149, 6: 805-805. https://doi.org/10.1192/S0007125000122354 almost touching": parent-child remote communication using the sharetable system. Svetlana Yarosh, Anthony Tang, Sanika Mokashi, Gregory D Abowd, 10.1145/2441776.2441798Proceedings of the 2013 conference on Computer supported cooperative work (CSCW '13). the 2013 conference on Computer supported cooperative work (CSCW '13)Svetlana Yarosh, Anthony Tang, Sanika Mokashi, and Gregory D. Abowd. 2013. "almost touching": parent-child remote communication using the sharetable system. In Proceedings of the 2013 conference on Computer supported cooperative work (CSCW '13), 181-192. https://doi.org/10.1145/2441776.2441798 The Impact of Smartphones on the Family Vacation Experience. Xi Yu, Gerardo Joel Anaya, Li Miao, Xinran Lehto, Ipkin Anthony Wong, 10.1177/0047287517706263Journal of Travel Research. Xi Yu, Gerardo Joel Anaya, Li Miao, Xinran Lehto, and IpKin Anthony Wong. 2017. The Impact of Smartphones on the Family Vacation Experience: Journal of Travel Research. https://doi.org/10.1177/0047287517706263 Pass the iPad: collaborative creating and sharing in family groups. Nicola Yuill, Yvonne Rogers, Jochen Rick, 10.1145/2470654.2466120Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). the SIGCHI Conference on Human Factors in Computing Systems (CHI '13)Nicola Yuill, Yvonne Rogers, and Jochen Rick. 2013. 
Pass the iPad: collaborative creating and sharing in family groups. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), 941-950. https://doi.org/10.1145/2470654.2466120 Designing Outdoor Learning Spaces With iBeacons: Combining Place-Based Learning With the Internet of Learning Things. Susan M Heather Toomey Zimmerman, Chrystal Land, Robert W Maggiore, Chris Ashley, Millet, RetrievedHeather Toomey Zimmerman, Susan M. Land, Chrystal Maggiore, Robert W. Ashley, and Chris Millet. 2016. Designing Outdoor Learning Spaces With iBeacons: Combining Place-Based Learning With the Internet of Learning Things. Retrieved September 4, 2017 from https://repository.isls.org/handle/1/349 Using Augmented Reality to Support Observations About Trees During Summer Camp. Susan M Heather Toomey Zimmerman, Michael R Land, Gi Woong Mohney, Chrystal Choi, Soo Hyeon Maggiore, Yong Ju Kim, Jaclyn Jung, Dudek, 10.1145/2771839.2771925Proceedings of the 14th International Conference on Interaction Design and Children (IDC '15). the 14th International Conference on Interaction Design and Children (IDC '15)Heather Toomey Zimmerman, Susan M. Land, Michael R. Mohney, Gi Woong Choi, Chrystal Maggiore, Soo Hyeon Kim, Yong Ju Jung, and Jaclyn Dudek. 2015. Using Augmented Reality to Support Observations About Trees During Summer Camp. In Proceedings of the 14th International Conference on Interaction Design and Children (IDC '15), 395-398. https://doi.org/10.1145/2771839.2771925 The Great Indoors: Today's Screen-Hungry Kids Have Little Interest In Being Outside. Study Finds. Kids These Days: Why Is America's Youth Staying Indoors? Children & Nature Network. Retrieved67. 2011. Kids These Days: Why Is America's Youth Staying Indoors? Children & Nature Network. Retrieved January 13, 2020 from https://www.childrenandnature.org/2011/09/12/kids_these_days_why_is_americas_youth_staying_indoors/ 68. 2018. 
The Great Indoors: Today's Screen-Hungry Kids Have Little Interest In Being Outside. Study Finds. Retrieved September 16, 2020 from https://www.studyfinds.org/great-indoors-screen-hungry-kids-video-games-going-outside/ Kids & Tech: The Evolution of Today's Digital Natives | Influence Central. RetrievedKids & Tech: The Evolution of Today's Digital Natives | Influence Central. Retrieved October 22, 2019 from http://influence- central.com/kids-tech-the-evolution-of-todays-digital-natives/ New research reveals the nature of America's youth. The Nature Conservancy. RetrievedThe Nature Conservancy, New research reveals the nature of America's youth. Retrieved September 11, 2017 from https://www.nature.org/newsfeatures/kids-in-nature/kids-in-nature-poll.xml The Common Sense Census: Plugged-In Parents of Tweens and Teens. The Common Sense Census: Plugged-In Parents of Tweens and Teens, 2016 | Common Sense Media. Retrieved September 17, 2020 from https://www.commonsensemedia.org/research/the-common-sense-census-plugged-in-parents-of-tweens- and-teens-2016
[]
[ "Mechanical Lamb-shift analogue for the Cooper-pair box" ]
[ "A D Armour \nBlackett Laboratory\nImperial College\nSW7 2BWLondonU.K\n", "M P Blencowe [email protected] \nDepartment of Physics and Astronomy\nDartmouth College\n03755HanoverNew Hampshire\n", "K C Schwab \nLaboratory for Physical Sciences\n20740College Park, Maryland\n" ]
[ "Blackett Laboratory\nImperial College\nSW7 2BWLondonU.K", "Department of Physics and Astronomy\nDartmouth College\n03755HanoverNew Hampshire", "Laboratory for Physical Sciences\n20740College Park, Maryland" ]
[]
We estimate the correction to the Cooper-pair box energy level splitting due to the quantum motion of a coupled micromechanical gate electrode. While the correction due to zero-point motion is very small, it should be possible to observe thermal motion-induced corrections to the photon-assisted tunneling current. In a recent experiment [1], the coherent control of macroscopic quantum superposition states in a Cooper box was demonstrated. This represents an important advance towards the realization of a solid state quantum computer [2]. The ability to manipulate the Cooper box states also allows the possibility of producing entangled states between the Cooper box and any other dynamical system, possibly macroscopic, to which it can be coupled.
10.1016/s0921-4526(02)00527-6
[ "https://arxiv.org/pdf/cond-mat/0109207v1.pdf" ]
10,472,232
cond-mat/0109207
e38efc6f12d9f6487f86b57549ad900fc6db259e
Mechanical Lamb-shift analogue for the Cooper-pair box

12 Sep 2001

A. D. Armour, Blackett Laboratory, Imperial College, London SW7 2BW, U.K.
M. P. Blencowe, Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755
K. C. Schwab, Laboratory for Physical Sciences, College Park, Maryland 20740

Preprint submitted to Elsevier Science, 30 October 2018

Keywords: Cooper-pair box, micromechanical systems

We estimate the correction to the Cooper-pair box energy level splitting due to the quantum motion of a coupled micromechanical gate electrode. While the correction due to zero-point motion is very small, it should be possible to observe thermal motion-induced corrections to the photon-assisted tunneling current.

In a recent experiment [1], the coherent control of macroscopic quantum superposition states in a Cooper box was demonstrated. This represents an important advance towards the realization of a solid state quantum computer [2]. The ability to manipulate the Cooper box states also allows the possibility of producing entangled states between the Cooper box and any other dynamical system, possibly macroscopic, to which it can be coupled. Examples include coupling the Cooper box to another large superconducting island [3], a superconducting resonator [4] and to a micromechanical gate electrode [5], which could take the form of a cantilever or bridge-like structure. In the present work, we examine the effect of a mechanical gate electrode on the energy levels of a Cooper box. In particular, we consider what might be viewed as a mechanical analogue of the Lamb shift, in which the quantum zero-point motion of the mechanical oscillator modifies the level separation between the Cooper box states.
The Hamiltonian for the Cooper box-coupled mechanical gate electrode system is

H = 4E_C [n_g − (n + 1/2)] σ_z − (1/2) E_J σ_x + ħω â†â − 4E_C n_g^m (Δx_zp/d)(â + â†) σ_z,

where E_C = e²/2C_J is the single-electron charging energy; n_g = −(C_g^c V_g^c + C_g^m V_g^m)/2e is the dimensionless total gate charge, with control gate voltage V_g^c and mechanical gate electrode voltage V_g^m chosen such that n_g is close to n + 1/2 for some n (so that only the Cooper charge states |n⟩ ≡ (1, 0)ᵀ and |n + 1⟩ ≡ (0, 1)ᵀ play a role); n_g^m = −C_g^m V_g^m/2e; E_J is the Josephson coupling energy; Δx_zp is the zero-point displacement uncertainty of the mechanical gate electrode; d is the mechanical electrode-island gap; and ω is the frequency of the fundamental flexural mode of the mechanical electrode. We assume C_J ≫ C_g and d ≫ Δx_zp.

Neglecting the coupling between the Cooper box and the mechanical electrode, the energy eigenvalues are E⁽⁰⁾_{0,N} = −ΔE(η)/2 + Nħω and E⁽⁰⁾_{1,N} = +ΔE(η)/2 + Nħω, where ΔE(η) = E_J/sin η with the mixing angle η = tan⁻¹(E_J/[8E_C(n + 1/2 − n_g)]). To second order in the coupling, we have

E⁽²⁾_{1,N} − E⁽²⁾_{0,M} = ΔE(η) { 1 + 32(N + M + 1)(Δx_zp/d)² E_C² sin²η (n_g^m)² / [(ΔE(η))² − (ħω)²] } + (N − M)ħω.

Notice that we require η ≠ 0 in order for the coupling to modify the energy levels. If E_J = 0, then the only effect of the coupling of the Cooper box to the mechanical oscillator is a shift of the harmonic potential to the left or to the right, depending on the state of the Cooper box.

A proper analysis of the quasiparticle tunneling current is required which includes the corrections to the Cooper box energy levels due to the coupling to the mechanical oscillator. A suitable starting point is the analysis of the experiment of Nakamura et al. [1] given in Ref. [7]. This will be the subject of a future investigation.

This work was supported in part by the NSA and ARDA under ARO contract number DAAG190110696, and by the EPSRC under Grant No. GR/M42909/01.
With E_J ≠ 0 and ħω < ΔE(η), we see that the interaction with the mechanical oscillator increases the gap between the Cooper box levels. Let us now estimate the magnitude of the possible gap increase under realisable conditions, supposing the mechanical oscillator to be in its ground state and assuming also that ħω ≪ ΔE(η). Josephson energies for Cooper boxes are typically in the tens of micro-eV range, translating to tens of GHz, which exceeds by at least an order of magnitude the fundamental frequencies of realisable micromechanical oscillators. For the oscillator undergoing zero-point motion (N = M = 0), the gap increase is then approximately 32 (n_g^m)² (Δx_zp/d)² (E_C/E_J)² sin⁴η. Considering, for example, E_C = 100 µeV, E_J = 10 µeV, n + 1/2 − n_g = 0.01, Δx_zp = 10⁻² Å, d = 0.1 µm, and n_g^m = 10, the gap increase is about 10⁻⁵, likely too small to be detected. If, on the other hand, the mechanical oscillator is in a thermal state, then for the same parameter values the gap increase is approximately 10⁻⁵(2N̄ + 1), where N̄ is the thermal-averaged occupation number. Considering, for example, a fundamental frequency ν = 50 kHz and temperature T = 30 mK, we have N̄ ≈ 2.5 × 10⁴, giving a significant gap increase of about 0.25.

A possible way to probe the effect of the mechanical oscillator's thermal motion on the Cooper box levels is to measure the photon-assisted Josephson quasiparticle (PAJQP) tunneling current dependence on the total gate charge n_g [6]. As the mechanical gate voltage V_g^m is turned on, increasing n_g^m, we would expect the PAJQP current peak to the left (right) of the main Josephson quasiparticle (JQP) tunneling current peak to shift towards the right (left), signalling the increasing gap between the n and n + 1 Cooper box levels. At the same time, the PAJQP peaks should broaden due to the thermal motion of the mechanical oscillator.

References

[1] Y. Nakamura, Yu. A. Pashkin and J. S. Tsai, Nature 398 (1999) 786.
[2] Y. Makhlin, G. Schön and A. Shnirman, Rev. Mod. Phys. 73 (2001) 357.
[3] F. Marquardt and C. Bruder, Phys. Rev. B 63 (2001) 054514.
[4] O. Buisson and F. W. J. Hekking, cond-mat/0008275.
[5] A. D. Armour, M. P. Blencowe and K. C. Schwab, in preparation.
[6] Y. Nakamura, C. D. Chen and J. S. Tsai, Phys. Rev. Lett. 79 (1997) 2328.
[7] M.-S. Choi, R. Fazio, J. Siewert and C. Bruder, Europhys. Lett. 53 (2001) 251.
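The order-of-magnitude estimates quoted in the letter can be checked with a short script. Parameter values are the ones given in the text; note that for ν = 50 kHz and T = 30 mK the thermal enhancement factor 2N̄ + 1 (rather than N̄ alone) comes out near 2.5 × 10⁴, consistent with the quoted gap increase of roughly 0.25. This is a back-of-the-envelope sketch, not code from the paper:

```python
import math

# Parameter values quoted in the text
E_C = 100e-6            # single-electron charging energy [eV]
E_J = 10e-6             # Josephson coupling energy [eV]
delta_n = 0.01          # n + 1/2 - n_g
x_ratio = 1e-12 / 1e-7  # Delta x_zp / d = 10^-2 Angstrom over 0.1 micron
n_gm = 10.0             # mechanical gate charge n_g^m

# Mixing angle eta = arctan(E_J / [8 E_C (n + 1/2 - n_g)])
eta = math.atan(E_J / (8.0 * E_C * delta_n))

# Fractional gap increase for zero-point motion (N = M = 0, hbar*omega << Delta E):
#   32 (n_g^m)^2 (Dx_zp/d)^2 (E_C/E_J)^2 sin^4(eta)
zp_increase = 32.0 * n_gm**2 * x_ratio**2 * (E_C / E_J)**2 * math.sin(eta)**4
print(f"zero-point gap increase ~ {zp_increase:.2e}")  # ~ 1e-5, as in the text

# Thermal state: multiply by 2*Nbar + 1, with Nbar the Bose occupation number
h, k_B = 6.62607015e-34, 1.380649e-23  # SI units
nu, T = 50e3, 30e-3                    # 50 kHz oscillator at 30 mK
N_bar = 1.0 / math.expm1(h * nu / (k_B * T))
thermal_increase = zp_increase * (2.0 * N_bar + 1.0)
print(f"thermal gap increase ~ {thermal_increase:.2f}")
```

For these parameters the thermal gap increase evaluates to roughly 0.3, in line with the "about 0.25" quoted in the text.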
[]
[ "Transverse Single-Spin Asymmetries of Midrapidity Direct Photons and Neutral Mesons at PHENIX" ]
[ "\nUniversity of Michigan\n\n" ]
[ "University of Michigan\n" ]
[]
Results are presented for the transverse single-spin asymmetries of direct photons, neutral pions, and eta mesons for |η| < 0.35 from p↑ + p collisions with √s = 200 GeV at PHENIX. As hadrons, π0 and η mesons are sensitive to both initial- and final-state effects and at midrapidity probe the dynamics of gluons along with a mix of quark flavors. Because direct photon production does not include hadronization, the direct photon TSSA is only sensitive to initial-state effects and at midrapidity provides a clean probe of the gluon dynamics in transversely polarized protons. All three of these results will help constrain the collinear twist-3 trigluon correlation function as well as the gluon Sivers function, improving our knowledge of spin-dependent gluon dynamics in QCD.
10.21468/scipostphysproc.8.009
[ "https://arxiv.org/pdf/2107.10079v2.pdf" ]
236,155,196
2107.10079
f9827bb41722556babdc1bcadf0c666ce2ccc0a8
Transverse Single-Spin Asymmetries of Midrapidity Direct Photons and Neutral Mesons at PHENIX

Nicole Lewis, for the PHENIX Collaboration, University of Michigan

March 18, 2022

Proceedings for the XXVIII International Workshop on Deep-Inelastic Scattering and Related Subjects, Stony Brook University, New York, USA, 12-16 April 2021

Results are presented for the transverse single-spin asymmetries of direct photons, neutral pions, and eta mesons for |η| < 0.35 from p↑ + p collisions with √s = 200 GeV at PHENIX. As hadrons, π0 and η mesons are sensitive to both initial- and final-state effects and at midrapidity probe the dynamics of gluons along with a mix of quark flavors. Because direct photon production does not include hadronization, the direct photon TSSA is only sensitive to initial-state effects and at midrapidity provides a clean probe of the gluon dynamics in transversely polarized protons. All three of these results will help constrain the collinear twist-3 trigluon correlation function as well as the gluon Sivers function, improving our knowledge of spin-dependent gluon dynamics in QCD.

Introduction

Transverse single-spin asymmetries (TSSAs) are a type of spin-momentum correlation measurement in hadronic collisions. In the context of proton-proton collisions, one of the protons is transversely polarized while the other is unpolarized. The TSSA measures the asymmetry of particle yields which travel to the left versus the right of the polarized-proton-going direction. Theoretical calculations which only include effects from high-energy partonic scattering predict that spin-momentum correlations like these should be small, on the order of less than one percent [1], but in fact large nonzero asymmetries have been measured in a variety of collision systems.
This includes the forward π0 asymmetry for p + p and p + A collisions with √s_NN = 200 GeV and transverse momentum up to p_T ≈ 7 GeV/c [2]. Because the perturbative part of QCD calculations could not account for these large spin-momentum correlations, this led to the reexamination of the nonperturbative part. Two theoretical frameworks were developed: transverse momentum dependent functions and collinear twist-3 correlation functions, both of which describe spin-momentum correlations within the nucleon and in the process of hadronization. Traditionally, parton distribution functions (PDFs) and fragmentation functions (FFs) are collinear, meaning that they integrate over the nonperturbative dynamics of partons and only depend on longitudinal momentum fractions. Transverse momentum dependent (TMD) functions, as the name suggests, depend explicitly on the parton's relative transverse momentum k_T. The Sivers function is a TMD PDF that describes the spin-momentum correlation between the transversely polarized proton and the nonperturbative transverse momentum of the quark [3]. The quark Sivers functions have been extracted through polarized semi-inclusive deep inelastic scattering (SIDIS) measurements, while the gluon Sivers function has remained comparatively unconstrained because SIDIS is not sensitive to gluons at leading order [4]. The Collins function is an example of a TMD FF and describes the spin-momentum correlation between the transverse spin of a quark and the soft-scale relative transverse momentum of the unpolarized hadron it produces [5]. In order for TMD factorization to apply, this transverse momentum must be both nonperturbative and much smaller than the hard-scale energy of the scattering event. Thus, the most straightforward way to extract these TMD functions is with a two-momentum-scale scattering process, like SIDIS.
In SIDIS it is also possible to isolate the effects from particular TMD functions through angular moments and to measure both the Sivers [6,7] and Collins [8] functions directly. However, the measurements presented in these proceedings are single-scale processes and are measured as a function of transverse momentum, which is used as a proxy for the hard scale. In order to apply TMD functions to these measurements, one must take the k_T moment of these functions. The neutral meson TSSAs presented in this document are sensitive to both initial- and final-state effects. In the TMD picture these are divided into the Sivers effect, where the k_T moment of the Sivers function has been taken, and the Collins effect, where the Collins function has been convolved with the collinear transversity function, a PDF that describes the correlation between the transverse spin of a quark and the transverse spin of the proton. Theoretical calculations using TSSAs measured in proton-proton data have suggested that the Collins effect's contribution [9] is smaller than the Sivers effect's contribution [10], which is consistent with the small Collins asymmetry that was measured for forward-rapidity π0s in jets [11]. Because the Sivers function is parity-time odd, in order to be nonzero it must include a soft-gluon exchange with the proton fragment, which can happen before and/or after the hard-scattering event. In hadron production in proton-proton collisions, soft-gluon exchanges are possible in both the initial and final state simultaneously, leading to the prediction of TMD factorization breaking [12]. Another approach toward describing TSSAs is collinear twist-3 correlation functions. While traditional nonperturbative functions are twist-2 and only consider the interactions of a single parton in the proton and a single parton hadronizing at a time, twist-3 functions are multiparton correlation functions.
They describe the quantum mechanical interference between scattering off of one parton at a given x versus scattering off of a parton of the same flavor and same x plus an additional gluon. These functions are broken into two types: the quark-gluon-quark (qgq) correlation functions describe the quantum mechanical interference between scattering off of a single quark versus scattering off of a quark and a gluon, while the trigluon (ggg) correlation functions describe the interference between scattering off of one gluon versus scattering off of two. Collinear twist-3 correlation functions are used to describe spin-momentum correlations both from initial-state proton structure and from final-state hadronization. The Efremov-Teryaev-Qiu-Sterman (ETQS) function is a qgq correlation function for the polarized proton [13][14][15] that is related to the k_T moment of the Sivers TMD PDF [16]. Collinear qgq correlation functions have been used to describe forward π0 TSSA measurements by including contributions from both the ETQS function and final-state hadronization effects, which dominate [17,18]. Twist-3 collinear functions have the added benefit that they do not depend on a soft-scale momentum and are uniquely suited to describing TSSAs in proton-proton collisions where only one final-state particle is measured.

Experimental Setup

These measurements were taken at the Relativistic Heavy Ion Collider (RHIC), the only collider in the world that is able to collide polarized proton beams. Both beams are polarized, and the polarization direction changes bunch-to-bunch in order to avoid systematic effects. The polarization is maintained by a series of spin-rotating helical dipoles called Siberian snakes, which are placed at diametrically opposite points along both RHIC rings. They flip the polarization direction for each bunch by 180 degrees without distorting the trajectory of the beam, causing additive depolarization effects to cancel out.
PHENIX is one of the large multipurpose detectors around the RHIC ring. In 2015 PHENIX took a transversely polarized proton-proton data set with an integrated luminosity of 60 pb⁻¹. These measurements use photons that were measured in the electromagnetic calorimeter (EMCal), which has a pseudorapidity acceptance of |η| < 0.35 and two nearly back-to-back arms that each cover Δφ = π/2 in azimuth. The EMCal is comprised of eight sectors, six of which are made of lead-scintillator towers and two of which are made of lead glass. Events with high-p_T photons are selected through an EMCal-based high-energy-photon trigger. Traditionally, TSSAs are measured as a function of φ and then fit to a sinusoid to extract the amplitude. This becomes more difficult with the limited PHENIX acceptance, especially for midrapidity asymmetries that tend to be consistent with zero. So PHENIX midrapidity TSSA analyses integrate over the full φ range of the detector:

A_N = (σ_L − σ_R)/(σ_L + σ_R) = [1/(P⟨|cos φ|⟩)] · (N↑ − R N↓)/(N↑ + R N↓)    (1)

The acceptance correction, ⟨|cos φ|⟩, is used to correct for the dilution of the asymmetry over the EMCal φ range. ⟨|cos φ|⟩ is measured as a function of p_T for the π0 and η analyses, since the diphoton azimuthal acceptance changes with the photon decay angle. The asymmetry is also diluted by the beam not being 100% polarized, and so the asymmetry is divided by the beam polarization, P, which for this data set was 59% on average. This asymmetry formula takes advantage of the fact that the beam polarization direction changes bunch-to-bunch, where N↑ and N↓ are the particle yields for when the beam is polarized up and down, respectively. Because this formula takes the ratio of particle yields from the same detector, effects from detector acceptance and efficiencies cancel out. This asymmetry needs to be corrected for the relative luminosity of the different beam configurations: R = L↑/L↓. This is calculated by taking the ratio of the number of events that fired the minimum bias trigger when the beam was polarized up divided by the same for when the beam was polarized down.
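As a concrete illustration, the asymmetry formula above reduces to a few lines of code. The yields, luminosities, and acceptance factor below are toy numbers chosen for the example, not PHENIX data:

```python
def transverse_ssa(n_up, n_down, lum_up, lum_down, pol, mean_abs_cos_phi):
    """Left-right TSSA from spin-sorted yields, corrected for the relative
    luminosity R = L_up/L_down, the beam polarization P, and the
    phi-acceptance dilution factor <|cos(phi)|>."""
    R = lum_up / lum_down
    raw = (n_up - R * n_down) / (n_up + R * n_down)
    return raw / (pol * mean_abs_cos_phi)

# Toy example: a 1% raw yield asymmetry with P = 0.59 and <|cos(phi)|> = 0.9
a_n = transverse_ssa(10100, 9900, 1.0, 1.0, 0.59, 0.9)
print(f"A_N = {a_n:.4f}")  # ~ 0.0188
```

The relative-luminosity factor R is what lets yields from different bunch polarizations be compared directly; with equal luminosities it drops out, as in the toy example.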
This is calculated by taking the ratio of the number of events that fired the minimum bias trigger when the beam was polarized up divided by the same for when the beam was polarized down. [GeV/c] [20] plotted with the previously published PHENIX results [21]. An additional scale uncertainty of 3.4% due to the uncertainty in the polarization is not shown 3 π 0 and η Meson TSSA Forward-rapidity light hadron production corresponds to probing the proton at higher x, meaning that valence quark spin-momentum correlations dominate the forward rapidity π 0 and η TSSA measurements. Measurements of the forward π 0 TSSA have been used to constrain q gq correla-tion functions both in the transversely polarized proton and the process of hadronization [17,18]. In contrast, at midrapidity the proton is probed at comparatively more moderate x, and so light hadron production includes contributions from valance quarks, gluons, and sea quarks [19]. While this makes interpreting results a midrapidity more challenging, it does mean that these midrapidity TSSA measurements are sensitive to gluon dynamics at leading order. As hadrons, π 0 and η mesons are sensitive to both proton structure as well as in the process of hadronization and the η meson is also sensitive strange quark dynamics. The π 0 yields are comprised of photon pairs with invariant mass in the signal region ±25 MeV/c 2 from the π 0 mass peak and η meson yields are measured in the range ±70 MeV/c 2 around the η mass peak. Figure 1 shows the new midrapidity π 0 and η TSSA results [20] compared with the previously published PHENIX results [21]. The π 0 asymmetry is consistent with zero to within 10 −4 at low p T and the η asymmetry is consistent with zero to within 0.5% at low p T . Both new results have a statistical uncertainty that is three times smaller than the previous PHENIX results and have a higher reach in p T . Figure 2 shows this same π 0 result plotted with theoretical predictions. 
The qgq curves show the predicted contribution of the quark-gluon-quark correlations from both the transversely polarized proton and the process of hadronization. This calculation uses fits to data that were published in Ref. [18] and recalculated for the pseudorapidity range of this measurement. Midrapidity π0 production also includes a large fractional contribution from gluons, and so a full twist-3 collinear description of the midrapidity π0 TSSA also needs to include the contribution from the trigluon correlation function, such as those published in Ref. [22]. Given the small expected contributions from the qgq correlation functions, this measurement can constrain future calculations of the trigluon correlation function. The rest of the theory curves in Figure 2 show predictions for the midrapidity π0 TSSA calculated in the TMD picture. These curves show the predicted asymmetry as generated by the Sivers TMD PDF for both quarks and gluons. They are calculated using functions published in Ref. [23] which have been reevaluated at Feynman-x (x_F = 2p_L/√s) equal to zero, which approximates the measured kinematics. The red curve was calculated using the Generalized Parton Model (GPM), meaning that the first k_T moment of the Sivers function is taken and the calculation does not include next-to-leading-order interactions with the proton fragment. The color-gauge-invariant generalized parton model (CGI-GPM) somewhat relaxes these assumptions and includes both initial- and final-state interactions via the one-gluon exchange approximation. The CGI-GPM curves plotted in Figure 2 come from simultaneous fits to open heavy flavor [24] and midrapidity π0 [21] TSSA measurements from PHENIX. Scenario 1 maximizes the asymmetry, while Scenario 2 minimizes it. As shown in the zoomed-in top panel of Figure 2, this π0 TSSA measurement has the statistical precision to distinguish between the GPM and CGI-GPM models, preferring CGI-GPM Scenario 2.
Direct Photon TSSA

Direct photons are photons that come directly from the hard-scattering event. Because they do not undergo hadronization, they are sensitive only to initial-state effects from proton structure. At large transverse momentum they are produced by the 2-to-2 hard-scattering subprocesses quark-gluon Compton scattering (g + q → γ + q) and quark-antiquark annihilation (q + q̄ → γ + g). At midrapidity quark-gluon Compton scattering dominates [25] because the proton is probed at a moderate x where the gluon PDF dominates. As a result, midrapidity direct photons are a uniquely clean probe of gluon structure in the proton.

Figure 2: The π0 TSSA [20] with theoretical curves in both the collinear twist-3 [18] and TMD [23] frameworks. See text for details.

The vast number of photons present in an event are not direct photons, but instead come from decays and next-to-leading-order fragmentation processes. Many of these photons are eliminated by a tagging cut, which removes photons that have been matched into a pair with another photon in the same event which reconstructs either a π0 → γγ or η → γγ decay. An isolation cut further reduces the contribution of decay photons [26] as well as next-to-leading-order fragmentation photons to about 15% for direct photons with p_T > 5 GeV/c [25]. This cut adds up the energy of all of the surrounding EMCal clusters and the momenta of all of the surrounding tracks that are within a cone of 0.4 radians. The photon passes this isolation cut only if it has ten times the energy of the surrounding cone. The remaining background is dominated by decay photons where the second partner photon was missed because it was either out of acceptance or too low in energy. Such a decay photon would not have been eliminated by the tagging cut, so this background is estimated through single-particle Monte Carlo and found to be about 50% for the lowest p_T bin, dropping to about 16% in the highest p_T bin.
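The isolation requirement described above — the photon must carry at least ten times the energy summed inside a 0.4-radian cone — can be sketched as follows. This is a toy implementation, not PHENIX analysis code: the cone radius is taken in η-φ space and all inputs are illustrative:

```python
from math import hypot, pi

def delta_phi(phi1, phi2):
    """Wrap the azimuthal difference into [-pi, pi]."""
    return (phi1 - phi2 + pi) % (2 * pi) - pi

def is_isolated(photon, objects, cone=0.4, ratio=10.0):
    """photon: (E, eta, phi); objects: surrounding EMCal clusters and
    track momenta as (E_or_p, eta, phi) tuples.
    Passes if E_gamma > ratio * (energy summed inside the cone)."""
    e_cone = sum(e for e, eta, phi in objects
                 if hypot(eta - photon[1], delta_phi(phi, photon[2])) < cone)
    return photon[0] > ratio * e_cone

photon = (8.0, 0.1, 1.0)
nearby = [(0.3, 0.2, 1.1), (0.2, 0.0, 0.9)]   # 0.5 GeV inside the cone
far    = [(5.0, 0.1, 2.5)]                     # outside the cone, ignored
print(is_isolated(photon, nearby + far))  # -> True
```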
The final direct photon asymmetry is plotted in Figure 3 and is consistent with zero to within about 2% [27]. The only previously published direct photon TSSA result was measured at the E704 Fermilab experiment and was found to be consistent with zero to within about 20% for 2.5 < p_T^γ < 3.1 GeV/c at √s = 19.4 GeV [28]. This new result measured photons with p_T^γ > 5 GeV/c with total uncertainties a factor of 50 smaller than the E704 measurement. The green curve in Figure 3 shows the contribution from qgq correlation functions from both the polarized and unpolarized proton that were published in Ref. [29] and recalculated for |η| < 0.35. The error bars shown correspond to propagated uncertainties from fits to data and do not include uncertainties from the assumed functional forms. The ggg correlation function contributions use fits that were published in Ref. [30] and were reevaluated for η = 0. Models 1 and 2 correspond to different functional-form assumptions for the trigluon correlation function in terms of the collinear leading-twist gluon PDF. The trigluon correlation function is divided into symmetric and antisymmetric parts. Setting these parts to have the same sign maximizes the direct photon asymmetry, while setting them to have the opposite sign minimizes it. Given the small predicted qgq correlation function contribution, this direct photon asymmetry result will help constrain the trigluon correlation function.

Figure 3: The direct photon TSSA [27] plotted with the contributions from the qgq [29] and ggg [30] correlation functions. See text for details. An additional scale uncertainty of 3.4% due to the uncertainty in the polarization is not shown.

Conclusion

TSSAs are spin-momentum correlations that probe parton dynamics in the proton as well as in the process of hadronization.
They can be described by both the TMD and collinear twist-3 frameworks, where collinear twist-3 correlation functions only require a single hard energy scale to be measured directly. The midrapidity π0 and η TSSA measurements were presented for √s = 200 GeV. Both results are consistent with zero and have a factor of 3 increase in precision compared to the previous PHENIX results. Midrapidity π0 and η mesons are sensitive to both initial- and final-state effects for both quarks and gluons, and the η TSSA is in particular sensitive to strangeness in twist-3 functions. Midrapidity isolated direct photons offer a clean probe of gluon initial-state effects. The direct photon TSSA has been measured for the first time at RHIC and is also consistent with zero. These asymmetry results will help constrain the trigluon correlation function in the transversely polarized proton as well as the gluon Sivers function, both of which are steps towards creating a more complete, three-dimensional picture of proton structure.

Funding information

The funding for this project was provided by the Department of Energy, grant number DE-SC0013393.

Acknowledgements

Thank you to Daniel Pitonyak for providing the qgq correlation function curves that appear in both Figures 2 and 3. Thank you to Umberto D'Alesio, Cristian Pisano, and Francesco Murgia for providing the GPM and CGI-GPM curves that are plotted in Figure 2. Thank you to Shinsuke Yoshida for providing the ggg correlation function curves as shown in Figure 3.

References

[1] G. L. Kane, J. Pumplin, and W. Repko, Transverse Quark Polarization in Large-p_T Reactions, e+e− Jets, and Leptoproduction: A Test of Quantum Chromodynamics, Phys. Rev. Lett. 41, 1689 (1978), doi:10.1103/PhysRevLett.41.1689.
[2] J. Adam et al. (STAR Collaboration), Comparison of transverse single-spin asymmetries for forward π0 production in polarized pp, pAl and pAu collisions at nucleon pair c.m. energy √s_NN = 200 GeV, Phys. Rev. D 103, 072005 (2021), doi:10.1103/PhysRevD.103.072005.
[3] D. W. Sivers, Single Spin Production Asymmetries from the Hard Scattering of Point-Like Constituents, Phys. Rev. D 41, 83 (1990), doi:10.1103/PhysRevD.41.83.
[4] C. Adolph et al. (COMPASS Collaboration), First measurement of the Sivers asymmetry for gluons using SIDIS data, Phys. Lett. B 772, 854 (2017), doi:10.1016/j.physletb.2017.07.018.
[5] J. Collins, Fragmentation of transversely polarized quarks probed in transverse momentum distributions, Nucl. Phys. B 396, 161 (1993), doi:10.1016/0550-3213(93)90262-N.
[6] A. Airapetian et al. (HERMES Collaboration), Single-Spin Asymmetries in Semi-Inclusive Deep-Inelastic Scattering on a Transversely Polarized Hydrogen Target, Phys. Rev. Lett. 94, 012002 (2005), doi:10.1103/PhysRevLett.94.012002.
[7] C. Adolph et al. (COMPASS Collaboration), Experimental investigation of transverse spin asymmetries in µ-p SIDIS processes: Sivers asymmetries, Phys. Lett. B 717, 383 (2012), doi:10.1016/j.physletb.2012.09.056.
[8] C. Adolph et al. (COMPASS Collaboration), Measurement of azimuthal hadron asymmetries in semi-inclusive deep inelastic scattering off unpolarised nucleons, Nucl. Phys. B 886, 1046 (2014), doi:10.1016/j.nuclphysb.2014.07.019.
[9] M. Anselmino et al., Role of Collins effect in the single spin asymmetry A_N in p↑p → hX processes, Phys. Rev. D 86, 074032 (2012), doi:10.1103/PhysRevD.86.074032.
[10] M. Anselmino et al., Sivers effect and the single spin asymmetry A_N in p↑p → hX processes, Phys. Rev. D 88, 054023 (2013), doi:10.1103/PhysRevD.88.054023.
[11] J. Adam et al. (STAR Collaboration), Measurement of transverse single-spin asymmetries of π0 and electromagnetic jets at forward rapidity in 200 and 500 GeV transversely polarized proton-proton collisions, arXiv:2012.11428 [hep-ex].
[12] T. C. Rogers and P. J. Mulders, No Generalized TMD-Factorization in Hadro-Production of High Transverse Momentum Hadrons, Phys. Rev. D 81, 094006 (2010), doi:10.1103/PhysRevD.81.094006.
[13] A. V. Efremov and O. V. Teryaev, QCD Asymmetry and Polarized Hadron Structure Functions, Phys. Lett. B 150, 383 (1985), doi:10.1016/0370-2693(85)90999-2.
[14] J. Qiu and G. F. Sterman, Single transverse spin asymmetries in direct photon production, Nucl. Phys. B 378, 52 (1992), doi:10.1016/0550-3213(92)90003-T.
[15] J. Qiu and G. F. Sterman, Single transverse spin asymmetries in hadronic pion production, Phys. Rev. D 59, 014004 (1998), doi:10.1103/PhysRevD.59.014004.
[16] D. Boer, P. J. Mulders, and F. Pijlman, Universality of T-odd effects in single spin and azimuthal asymmetries, Nucl. Phys. B 667, 201 (2003), doi:10.1016/S0550-3213(03)00527-3.
[17] K. Kanazawa, Y. Koike, A. Metz, and D. Pitonyak, Towards an explanation of transverse single-spin asymmetries in proton-proton collisions: The role of fragmentation in collinear factorization, Phys. Rev. D 89, 111501(R) (2014), doi:10.1103/PhysRevD.89.111501.
[18] J. Cammarota et al., The origin of single transverse-spin asymmetries in high-energy collisions, Phys. Rev. D 102, 054002 (2020), doi:10.1103/PhysRevD.102.054002.
[19] A. Adare et al. (PHENIX Collaboration), Cross section and double helicity asymmetry for η mesons and their comparison to π0 production in p+p collisions at √s = 200 GeV, Phys. Rev. D 83, 032001 (2011), doi:10.1103/PhysRevD.83.032001.
[20] U. A. Acharya et al. (PHENIX Collaboration), Transverse single-spin asymmetries of midrapidity π0 and η mesons in polarized p+p collisions at √s = 200 GeV, Phys. Rev. D 103, 052009 (2021), doi:10.1103/PhysRevD.103.052009.
[21] A. Adare et al. (PHENIX Collaboration), Measurement of transverse-single-spin asymmetries for midrapidity and forward-rapidity production of hadrons in polarized p+p collisions at √s = 200 and 62.4 GeV, Phys. Rev. D 90, 012006 (2014), doi:10.1103/PhysRevD.90.012006.
[22] H. Beppu, K. Kanazawa, Y. Koike, and S. Yoshida, Three-gluon contribution to the single spin asymmetry for light hadron production in pp collision, Phys. Rev. D 89, 034029 (2014), doi:10.1103/PhysRevD.89.034029.
[23] U. D'Alesio et al., Unraveling the gluon Sivers function in hadronic collisions at RHIC, Phys. Rev. D 99, 036013 (2019), doi:10.1103/PhysRevD.99.036013.
[24] C. Aidala et al. (PHENIX Collaboration), Measurements of µµ pairs from open heavy flavor and Drell-Yan in p+p collisions at √s = 200 GeV, Phys. Rev. D 95, 112001 (2017), doi:10.1103/PhysRevD.95.112001.
[25] A. Adare et al. (PHENIX Collaboration), High p_T direct photon and π0 triggered azimuthal jet correlations and measurement of k_T for isolated direct photons in p+p collisions at √s = 200 GeV, Phys. Rev. D 82, 072001 (2010), doi:10.1103/PhysRevD.82.072001.
[26] A. Adare et al. (PHENIX Collaboration), Direct-Photon Production in p+p Collisions at √s = 200 GeV at Midrapidity, Phys. Rev. D 86, 072008 (2012), doi:10.1103/PhysRevD.86.072008.
[27] U. A. Acharya et al. (PHENIX Collaboration), Probing gluon spin-momentum correlations in transversely polarized protons through midrapidity isolated direct photons in p↑+p collisions at √s = 200 GeV, arXiv:2102.13585 [hep-ex].
[28] D. L. Adams et al. (E704 Collaboration), Measurement of single spin asymmetry for direct photon production in pp collisions at 200 GeV/c, Phys. Lett. B 345, 569 (1995), doi:10.1016/0370-2693(94)01695-9.
[29] K. Kanazawa, Y. Koike, A. Metz, and D. Pitonyak, Transverse single-spin asymmetries in p↑p → γX from quark-gluon-quark correlations in the proton, Phys. Rev. D 91, 014013 (2015), doi:10.1103/PhysRevD.91.014013.
[30] Y. Koike and S. Yoshida, Three-gluon contribution to the single spin asymmetry in Drell-Yan and direct-photon processes, Phys. Rev. D 85, 034030 (2012), doi:10.1103/PhysRevD.85.034030.
[]
[ "Time Delay of PKS1830-211 Using Molecular Absorption Lines", "Time Delay of PKS1830-211 Using Molecular Absorption Lines" ]
[ "Tommy Wiklind \nOnsala Space Observatory\nSE-43992OnsalaSweden\n\nFrançoise Combes Observatoire de Paris\n61 Av. de l'ObservatoireF-75014ParisFrance\n" ]
[ "Onsala Space Observatory\nSE-43992OnsalaSweden", "Françoise Combes Observatoire de Paris\n61 Av. de l'ObservatoireF-75014ParisFrance" ]
[]
The use of molecular absorption lines in deriving the time delay in PKS1830-211 is described, as well as results from a three year monitoring campaign. The time delay and the implied value for the Hubble constant are presented.
null
[ "https://arxiv.org/pdf/astro-ph/9909314v1.pdf" ]
14,582,896
astro-ph/9909314
d27fd84ddedd64b94cc138a92f92f36ab0c2c371
Time Delay of PKS1830-211 Using Molecular Absorption Lines

17 Sep 1999

Tommy Wiklind, Onsala Space Observatory, SE-43992 Onsala, Sweden
Françoise Combes, Observatoire de Paris, 61 Av. de l'Observatoire, F-75014 Paris, France

The use of molecular absorption lines in deriving the time delay in PKS1830-211 is described, as well as results from a three year monitoring campaign. The time delay and the implied value for the Hubble constant are presented.

The gravitationally lensed radio source PKS1830-211 contains two components: the NE and SW components, each consisting of a core and a jet-like feature (cf. Subrahmanyan et al. 1990; Jauncey et al. 1991). The separation of the core components is 0.98″, which corresponds to an expected time delay of a few weeks. The background source is highly variable, and PKS1830-211 is thus a good candidate for a measurement of the differential time delay between the cores. Due to heavy extinction, the system can only be detected at wavelengths longer than a few microns. At low radio frequencies, however, the observed continuum gets a considerable contribution from the jet features. This contribution must be subtracted in order to get the flux associated with the (variable) cores. This requires the use of interferometers as well as modelling of the emissivity distribution. The jet-like features have a steep radio spectrum, improving the situation at short wavelengths. At millimeter wavelengths the flux from the jets is completely negligible and only the cores contribute to the flux. The problem at millimeter wavelengths is that angular separation of the cores can only be achieved using interferometric techniques, which is both time consuming and difficult given the northern location of millimeter interferometers.
Due to a fortunate configuration of obscuring molecular gas in the lensing galaxy, the flux contributions from the NE and SW cores can, however, easily be estimated using molecular absorption lines and a single-dish telescope with low angular resolution. The lensing galaxy was actually detected through the presence of millimetric molecular absorption lines, at a redshift z = 0.886 (Wiklind & Combes 1996). The redshift of the background source has recently been derived using IR spectroscopy and found to be z = 2.507 (Lidman et al. 1999). Several molecular absorption lines were found to be saturated, among them the HCO+(2-1) line, yet did not reach zero intensity level. Subsequent interferometer observations showed that the absorption occurs only in front of the SW core component (Wiklind & Combes 1998). The high opacity of the HCO+ absorption line has been inferred through the detection of isotopic variants, such as H13CO+ and H18CO+ (Wiklind & Combes 1996, 1998). The depth of the absorption thus measures directly the flux from the SW component, while the total continuum is the sum of the fluxes from the SW and NE components (see Fig. 1a).

We have used the SEST 15m telescope to monitor the total continuum flux and the depth of the HCO+(2-1) line towards PKS1830-211 since April 1996 (Fig. 1a). During this period the source has changed its flux by a factor of ~2.5. Using the dispersion technique introduced by Pelt et al. (1994) we derive a differential time delay of 24 +5/−4 days, with the NE component leading (Fig. 1b). The result of Lovell et al. (1998), based on low-frequency interferometric techniques and model subtraction of the jet components, is consistent with this value. These two methods of deriving the time delay use different techniques and their agreement gives additional confidence to the results. In the lens model of Nair et al. (1993), our time delay measurement corresponds to a Hubble constant H0 = 69 +12/−11 km s^-1 Mpc^-1 (q0 = 0.5).

Figure 1. a) The total continuum flux (top) and the depth of the HCO+(2-1) absorption line (bottom). b) The combined NE and SW flux components, where the NE flux has been shifted by −24 days and corrected for a linear trend of the magnification.

Acknowledgments. We are grateful to L.-Å. Nyman, L. Haikala, F. Mac-Auliffe, F. Azagra, A. Tieftrunk, F. Rantakyrö and K. Brooks for expert help with performing the observations.

References

Jauncey, D. L., et al. 1991, Nature, 352, 132
Lidman, C., Courbin, F., Meylan, G., Broadhurst, T., Frye, B., & Welch, W. J. W. 1999, ApJ, 514, L57
Lovell, J. E. J., et al. 1998, ApJ, 508, L51
Nair, S., Narasimha, D., & Rao, A. P. 1993, ApJ, 407, 46
Pelt, J., Kayser, R., Refsdal, S., & Schramm, T. 1996, A&A, 305, 97
Subrahmanyan, R., Narasimha, D., Pramesh-Rao, A., & Swarup, G. 1990, MNRAS, 246, 263
Wiklind, T., & Combes, F. 1998, ApJ, 500, 129
Wiklind, T., & Combes, F. 1996, Nature, 379, 139
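The delay estimate described above rests on two steps: the NE light curve follows by subtracting the absorption-derived SW flux from the total continuum, and the delay is then the shift that minimizes a dispersion statistic between the two curves, in the spirit of Pelt et al. A toy sketch with synthetic, evenly sampled light curves follows; the real analysis handles irregular sampling and a slowly varying magnification ratio, which this sketch ignores:

```python
# Toy delay estimate: build a common intrinsic light curve, form the
# SW (delayed) and NE components, and scan trial delays for the shift
# minimizing the summed squared difference (a simple dispersion).
def intrinsic(t):
    return 1.0 + 0.5 * ((t // 30) % 2)   # step-like variability

n = 200
total = [intrinsic(t) + intrinsic(t - 24) for t in range(n)]  # NE + SW, SW lags 24 d
sw = [intrinsic(t - 24) for t in range(n)]                    # from absorption depth
ne = [tot - s for tot, s in zip(total, sw)]                   # NE = total - SW

def dispersion(lag):
    pairs = [(ne[t - lag], sw[t]) for t in range(lag, n)]
    return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

best = min(range(0, 50), key=dispersion)
print(best)  # -> 24
```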
[]
[ "Job Shop Scheduling Solver based on Quantum Annealing" ]
[ "Davide Venturelli \nQuantum Artificial Intelligence Laboratory (QuAIL)\nNASA Ames\n\nU.S.R.A. Research Institute for Advanced Computer Science (RIACS)\n", "Dominic J J Marchand ", "Galo Rojo " ]
[ "Quantum Artificial Intelligence Laboratory (QuAIL)\nNASA Ames\n", "U.S.R.A. Research Institute for Advanced Computer Science (RIACS)\n" ]
[]
1QB Information Technologies (1QBit)

Quantum annealing is emerging as a promising near-term quantum computing approach to solving combinatorial optimization problems. A solver for the job-shop scheduling problem that makes use of a quantum annealer is presented in detail. Inspired by methods used for constraint satisfaction problem (CSP) formulations, we first define the makespan-minimization problem as a series of decision instances before casting each instance into a time-indexed quadratic unconstrained binary optimization. Several pre-processing and graph-embedding strategies are employed to compile optimally parametrized families of problems for scheduling instances on the D-Wave Systems' Vesuvius quantum annealer (D-Wave Two). Problem simplifications and partitioning algorithms, including variable pruning, are discussed and the results from the processor are compared against classical global-optimum solvers.

I. INTRODUCTION

The commercialization and independent benchmarking [1-4] of quantum annealers based on superconducting qubits has sparked a surge of interest in near-term practical applications of quantum analog computation in the optimization research community. Many of the early proposals for running useful problems arising in space science [5] have been adapted and have seen small-scale testing on the D-Wave Two processor [6]. The best procedure for comparison of quantum analog performance with traditional digital methods is still under debate [3,7,8] and remains mostly speculative due to the limited number of qubits on the currently available hardware. While waiting for the technology to scale up to more significant sizes, there is an increasing interest in the identification of small problems which are nevertheless computationally challenging and useful.
One approach in this direction has been pursued in [9], and consisted in identifying parametrized ensembles of random instances of operational planning problems of increasing sizes that can be shown to be on the verge of a solvable-unsolvable phase transition. This condition should be sufficient to observe an asymptotic exponential scaling of runtimes, even for instances of relatively small size, potentially testable on current- or next-generation D-Wave hardware. An empirical takeaway from [6] (validated also by experimental results in [10,11]) was that the established programming and program-running techniques for quantum annealers seem to be particularly amenable to scheduling problems, allowing for an efficient mapping and good performance compared to other applied problem classes like automated navigation and Bayesian-network structure learning [12]. Motivated by these first results, and with the intention to challenge current technologies on hard problems of practical value, we herein formulate a quantum annealing version of the job-shop scheduling problem (JSP). The JSP is essentially a general paradigmatic constraint satisfaction problem (CSP) framework for the problem of optimizing the allocation of resources required for the execution of sequences of operations with constraints on location and time. We provide compilation and running strategies for this problem using original and traditional techniques for parametrizing ensembles of instances. Results from the D-Wave Two are compared with classical exact solvers. The JSP has earned a reputation for being especially intractable, a claim supported by the fact that the best general-purpose solvers (CPLEX, Gurobi Optimizer, SCIP) struggle with instances as small as 10 machines and 10 jobs (10 x 10) [13]. Indeed, some known 20 x 15 instances often used for benchmarking still have not been solved to optimality even by the best special-purpose solvers [14], and 20 x 20 instances are typically completely intractable.
We note that this early work constitutes a wide-ranging survey of possible techniques and research directions and leave a more in-depth exploration of these topics for future work.

A. Problem definition and conventions

Typically the JSP consists of a set of jobs J = {j_1, . . . , j_N} that must be scheduled on a set of machines M = {m_1, . . . , m_M}. Each job consists of a sequence of operations that must be performed in a predefined order:

O_{n1} → O_{n2} → · · · → O_{nL_n}.

Job j_n is assumed to have L_n operations. Each operation O_{nj} has an integer execution time p_{nj} (a value of zero is allowed) and has to be executed by an assigned machine m_{q_{nj}} ∈ M, where q_{nj} is the index of the assigned machine. There can only be one operation running on any given machine at any given point in time, and each operation of a job needs to complete before the following one can start. The usual objective is to schedule all operations in a valid sequence while minimizing the makespan (i.e., the completion time of the last running job), although other objective functions can be used. In what follows, we will denote with T the minimum possible makespan associated with a given JSP instance.

arXiv:1506.08479v2 [quant-ph]
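The conventions just introduced — jobs as ordered lists of (machine, duration) operations, job-order and machine-exclusivity constraints, and the makespan as objective — can be made concrete with a small schedule checker. This is a sketch of the definitions above, not code from the paper; the data layout is an assumption of this example:

```python
def makespan(jobs, starts):
    """jobs: list of jobs, each a list of (machine, duration) operations.
    starts: parallel structure of integer start times.
    Validates job order and machine exclusivity, then returns the
    completion time of the last operation (the makespan)."""
    busy = {}  # machine -> list of (start, end) intervals already booked
    end_time = 0
    for job, job_starts in zip(jobs, starts):
        prev_end = 0
        for (machine, dur), s in zip(job, job_starts):
            assert s >= prev_end, "operation order violated within a job"
            for a, b in busy.get(machine, []):
                assert s >= b or s + dur <= a, "machine double-booked"
            busy.setdefault(machine, []).append((s, s + dur))
            prev_end = s + dur
            end_time = max(end_time, prev_end)
    return end_time

# Two jobs on two machines: j1 = (m0,2) -> (m1,2), j2 = (m1,1) -> (m0,2)
jobs = [[(0, 2), (1, 2)], [(1, 1), (0, 2)]]
starts = [[0, 2], [0, 2]]
print(makespan(jobs, starts))  # -> 4
```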
null
[ "https://arxiv.org/pdf/1506.08479v2.pdf" ]
118,422,155
1506.08479
25b9aad999e7d67a9fa4c0b9f081678843c637b8
Job Shop Scheduling Solver based on Quantum Annealing

17 Oct 2016

Davide Venturelli
Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames
U.S.R.A. Research Institute for Advanced Computer Science (RIACS)

Dominic J. J. Marchand, Galo Rojo
1QB Information Technologies (1QBit)

Quantum annealing is emerging as a promising near-term quantum computing approach to solving combinatorial optimization problems. A solver for the job-shop scheduling problem that makes use of a quantum annealer is presented in detail. Inspired by methods used for constraint satisfaction problem (CSP) formulations, we first define the makespan-minimization problem as a series of decision instances before casting each instance into a time-indexed quadratic unconstrained binary optimization. Several pre-processing and graph-embedding strategies are employed to compile optimally parametrized families of problems for scheduling instances on the D-Wave Systems' Vesuvius quantum annealer (D-Wave Two). Problem simplifications and partitioning algorithms, including variable pruning, are discussed and the results from the processor are compared against classical global-optimum solvers.

I. INTRODUCTION

The commercialization and independent benchmarking [1-4] of quantum annealers based on superconducting qubits has sparked a surge of interest in near-term practical applications of quantum analog computation in the optimization research community. Many of the early proposals for running useful problems arising in space science [5] have been adapted and have seen small-scale testing on the D-Wave Two processor [6]. The best procedure for comparison of quantum analog performance with traditional digital methods is still under debate [3,7,8] and remains mostly speculative due to the limited number of qubits on the currently available hardware.
While waiting for the technology to scale up to more significant sizes, there is an increasing interest in the identification of small problems which are nevertheless computationally challenging and useful. One approach in this direction has been pursued in [9], and consisted of identifying parametrized ensembles of random instances of operational planning problems of increasing sizes that can be shown to be on the verge of a solvable-unsolvable phase transition. This condition should be sufficient to observe an asymptotic exponential scaling of runtimes, even for instances of relatively small size, potentially testable on current- or next-generation D-Wave hardware. An empirical takeaway from [6] (validated also by experimental results in [10,11]) was that the established programming and program running techniques for quantum annealers seem to be particularly amenable to scheduling problems, allowing for an efficient mapping and good performance compared to other applied problem classes like automated navigation and Bayesian-network structure learning [12].

Motivated by these first results, and with the intention to challenge current technologies on hard problems of practical value, we herein formulate a quantum annealing version of the job-shop scheduling problem (JSP). The JSP is essentially a general paradigmatic constraint satisfaction problem (CSP) framework for the problem of optimizing the allocation of resources required for the execution of sequences of operations with constraints on location and time. We provide compilation and running strategies for this problem using original and traditional techniques for parametrizing ensembles of instances. Results from the D-Wave Two are compared with classical exact solvers. The JSP has earned a reputation for being especially intractable, a claim supported by the fact that the best general-purpose solvers (CPLEX, Gurobi Optimizer, SCIP) struggle with instances as small as 10 machines and 10 jobs (10 x 10) [13].
Indeed, some known 20 x 15 instances often used for benchmarking still have not been solved to optimality even by the best special-purpose solvers [14], and 20 x 20 instances are typically completely intractable. We note that this early work constitutes a wide-ranging survey of possible techniques and research directions and leave a more in-depth exploration of these topics for future work.

A. Problem definition and conventions

Typically the JSP consists of a set of jobs J = {j 1 , . . . , j N } that must be scheduled on a set of machines M = {m 1 , . . . , m M }. Each job consists of a sequence of operations that must be performed in a predefined order

j n = {O n1 → O n2 → · · · → O nLn }.

Job j n is assumed to have L n operations. Each operation O nj has an integer execution time p nj (a value of zero is allowed) and has to be executed by an assigned machine m qnj ∈ M, where q nj is the index of the assigned machine. There can only be one operation running on any given machine at any given point in time and each operation of a job needs to complete before the following one can start. The usual objective is to schedule all operations in a valid sequence while minimizing the makespan (i.e., the completion time of the last running job), although other objective functions can be used. In what follows, we will denote with T the minimum possible makespan associated with a given JSP instance.
As defined above, the JSP variant we consider is denoted JM | p nj ∈ [p min , . . . , p max ] | C max in the well-known α|β|γ notation, where p min and p max are the smallest and largest execution times allowed, respectively. In this notation, JM stands for job-shop type on M machines, and C max means we are optimizing the makespan. For notational convenience, we enumerate the operations in a lexicographical order in such a way that

j 1 = {O 1 → · · · → O k1 },
j 2 = {O k1+1 → · · · → O k2 }, . . .
j N = {O k N−1 +1 → · · · → O k N }.    (1)

Given the running index over all operations i ∈ {1, . . . , k N }, we let q i be the index of the machine m qi responsible for executing operation O i . We define I m to be the set of indices of all of the operations that have to be executed on machine m m , i.e., I m = {i : q i = m}. The execution time of operation O i is now simply denoted p i . A priori, a job can use the same machine more than once, or use only a fraction of the M available machines. For benchmarking purposes, it is customary to restrict a study to the problems of a specific family.
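The indexing conventions above (the lexicographic enumeration of Eq. 1, the machine assignments q i , and the sets I m ) can be sketched in a few lines. This is an illustrative example, not code from the paper; the 3 x 3 instance and its durations are made up for the sketch.

```python
# Hypothetical 3 x 3 instance: each job is an ordered list of
# (machine index, integer execution time) pairs.
jobs = [
    [(0, 2), (1, 1), (2, 2)],   # j_1
    [(1, 1), (0, 1), (2, 1)],   # j_2
    [(2, 2), (1, 1), (0, 2)],   # j_3
]

# Lexicographic enumeration of all operations O_1 ... O_{k_N} (Eq. 1):
# q[i] is the machine of operation O_i, p[i] its execution time, and
# k[n] marks the index right after the last operation of job n.
q, p, k = [], [], []
for job in jobs:
    for machine, runtime in job:
        q.append(machine)
        p.append(runtime)
    k.append(len(q))

# I_m: indices of all operations that must run on machine m.
M = 1 + max(q)
I = {m: [i for i in range(len(q)) if q[i] == m] for m in range(M)}
```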
In this work, we define a ratio θ that specifies the fraction of the total number of machines that is used by each job, assuming no repetition when θ ≤ 1. For example, a ratio of 0.5 means that each job uses only 0.5M distinct machines.

B. Quantum annealing formulation

In this work, we seek a suitable formulation of the JSP for a quantum annealing optimizer (such as the D-Wave Two). The optimizer is best described as an oracle that solves an Ising problem with a given probability [15]. This Ising problem is equivalent to a quadratic unconstrained binary optimization (QUBO) problem [10]. The binary polynomial associated with a QUBO problem can be depicted as a graph, with nodes representing variables and values attached to nodes and edges representing linear and quadratic terms, respectively. The QUBO solver can similarly be represented as a graph where nodes represent qubits and edges represent the allowed connectivity. The optimizer is expected to find the global minimum with some probability which itself depends on the problem and the device's parameters. The device is not an ideal oracle: its limitations, with regard to precision, connectivity, and number of variables, must be considered to achieve the best possible results. As is customary, we rely on the classical procedure known as embedding to adapt the connectivity of the solver to the problem at hand. This procedure is described in a number of quantum annealing papers [6,11]. During this procedure, two or more variables can be forced to take on the same value by including additional constraints in the model. In the underlying Ising model, this is achieved by introducing a large ferromagnetic (negative) coupling J F between two spins.
The embedding process modifies the QUBO problem accordingly and one should not confuse the logical QUBO problem value, which depends on the QUBO problem and the state considered, with the Ising problem energy seen by the optimizer (which additionally depends on the extra constraints and the solver's parameters, such as J F ). We distinguish between the optimization version of the JSP, in which we seek a valid schedule with a minimal makespan, and the decision version, which is limited to validating whether or not a solution exists with a makespan smaller than or equal to a user-specified timespan T. We focus exclusively on the decision version and later describe how to implement a full optimization version based on a binary search. We note that the decision formulation where jobs are constrained to fixed time windows is sometimes referred to in the literature as the job-shop CSP formulation [16,17], and our study will refer to those instances where the jobs share a common deadline T.

II. QUBO PROBLEM FORMULATION

While there are several ways the JSP can be formulated, such as the rank-based formulation [18] or the disjunctive formulation [19], our formulation is based on a straightforward time-indexed representation particularly amenable to quantum annealers (a comparative study of mappings for planning and scheduling problems can be found in [10]). We assign a set of binary variables for each operation, corresponding to the various possible discrete starting times the operation can have:

x i,t = 1 if operation O i starts at time t, and 0 otherwise.    (2)

Here t is bounded from above by the timespan T, which represents the maximum time we allow for the jobs to complete. The timespan itself is bounded from above by the total work of the problem, that is, the sum of the execution times of all operations.

A. Constraints

We account for the various constraints by adding penalty terms to the QUBO problem.
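The time-indexed encoding of Eq. (2) can be sketched as follows; the operation count and timespan are arbitrary illustrative values, and the flat index map is our own bookkeeping device, not part of the paper's formulation.

```python
# One binary variable x_{i,t} per operation i and candidate start time t
# (Eq. 2). Before pruning this gives (number of operations) * (T + 1)
# QUBO variables.
T = 6          # illustrative timespan
num_ops = 9    # e.g., a 3 x 3 instance with 3 operations per job

# Flat index for each (operation, start time) pair.
var_index = {}
for i in range(num_ops):
    for t in range(T + 1):
        var_index[(i, t)] = len(var_index)

num_vars = len(var_index)
```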
For example, an operation must start once and only once, leading to the constraint and associated penalty function

Σ_t x i,t = 1 for each i  →  Σ_i ( Σ_t x i,t − 1 )².    (3)

There can only be one job running on each machine at any given point in time, which expressed as quadratic constraints yields

Σ_{(i,t,k,t') ∈ R m} x i,t x k,t' = 0 for each m,    (4)

where R m = A m ∪ B m and

A m = {(i, t, k, t') : (i, k) ∈ I m × I m , i ≠ k, 0 ≤ t, t' ≤ T, 0 < t' − t < p i },
B m = {(i, t, k, t') : (i, k) ∈ I m × I m , i < k, t' = t, p i > 0, p k > 0}.

The set A m is defined so that the constraint forbids operation O k from starting at t' if there is another operation O i still running, which happens if O i started at time t and t' − t is less than p i . The set B m is defined so that two jobs cannot start at the same time, unless at least one of them has an execution time equal to zero.

FIG. 1: a) Table representation of an example 3 x 3 instance whose execution times have been randomly selected to be either 1 or 2 time units. b) Pictorial view of the QUBO mapping of the above example for H T=6 . Green, purple, and cyan edges refer respectively to h 1 , h 2 , and h 3 quadratic coupling terms (Eqs. 7-9). Each circle represents a bit with its i, t index as in Eq. 2. c) The same QUBO problem as in (b) after the variable pruning procedure detailed in the section on QUBO formulation refinements. Isolated qubits are bits with fixed assignments that can be eliminated from the final QUBO problem. d) The same QUBO problem as in (b) for H T=7 . Previously displayed edges in the above figure are omitted. Red edges/circles represent the variations with respect to H T=6 . Yellow stars indicate the bits which are penalized with local fields for timespan discrimination.
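A direct, unoptimized construction of the conflict set R m = A m ∪ B m of Eq. (4) might look as follows; this is a sketch with made-up durations, not the paper's implementation.

```python
# Build the machine-conflict set R_m = A_m U B_m for one machine.
# I_m: operations assigned to the machine; p: durations; T: timespan.
def conflict_pairs(I_m, p, T):
    # A_m: O_k would start at t' while O_i (started at t) is still running.
    A = {(i, t, k, tp)
         for i in I_m for k in I_m if i != k
         for t in range(T + 1) for tp in range(T + 1)
         if 0 < tp - t < p[i]}
    # B_m: two operations with nonzero duration start simultaneously.
    B = {(i, t, k, t)
         for i in I_m for k in I_m if i < k
         for t in range(T + 1)
         if p[i] > 0 and p[k] > 0}
    return A | B

# Each element contributes a quadratic penalty term x_{i,t} x_{k,t'}:
durations = {0: 2, 3: 1}              # hypothetical p_0 = 2, p_3 = 1
R = conflict_pairs([0, 3], durations, T=2)
penalty_terms = {((i, t), (k, tp)): 1.0 for (i, t, k, tp) in R}
```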
Finally, the order of the operations within a job is enforced with

Σ_{k n−1 < i < k n} Σ_{t + p i > t'} x i,t x i+1,t' for each n,    (5)

which counts the number of precedence violations between consecutive operations only. The resulting classical objective function (Hamiltonian) is given by

H T (x) = η h 1 (x) + α h 2 (x) + β h 3 (x),    (6)

where

h 1 (x) = Σ_n Σ_{k n−1 < i < k n} Σ_{t + p i > t'} x i,t x i+1,t' ,    (7)

h 2 (x) = Σ_m Σ_{(i,t,k,t') ∈ R m} x i,t x k,t' ,    (8)

h 3 (x) = Σ_i ( Σ_t x i,t − 1 )²,    (9)

and the penalty constants η, α, and β are required to be larger than 0 to ensure that unfeasible solutions do not have a lower energy than the ground state(s). As expected for a decision problem, we note that the minimum of H T is 0 and it is only reached if a schedule satisfies all of the constraints. The index of H T explicitly shows the dependence of the Hamiltonian on the timespan T, which affects the number of variables involved. Figure 1-b illustrates the QUBO problem mapping for H T=6 for a particular 3 x 3 example (Figure 1-a).

B. Simple variable pruning

Figure 1-b also reveals that a significant number of the N M T binary variables required for the mapping can be pruned by applying simple restrictions on the time index t (whose computation is polynomial as the system size increases and therefore trivial here). Namely, we can define an effective release time for each operation corresponding to the sum of the execution times of the preceding operations in the same job. A similar upper bound corresponding to the timespan minus all of the execution times of the subsequent operations of the same job can be set. The bits corresponding to these invalid starting times can be eliminated from the QUBO problem altogether since any valid solution would require them to be strictly zero. This simplification eliminates an estimated number of variables equal to N M (M p̄ − 1), where p̄ represents the average execution time of the operations.
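The simple pruning rule (release time and latest feasible start) can be sketched as below; the job durations and timespan are illustrative values.

```python
# Window of admissible start times for operation j of a job, given the
# durations of the job's operations and the timespan T: it cannot start
# before its predecessors' total work, nor so late that the remaining
# operations cannot finish by T.
def start_window(job_times, j, T):
    release = sum(job_times[:j])        # earliest feasible start
    deadline = T - sum(job_times[j:])   # latest feasible start
    return release, deadline

# Hypothetical job with durations 2, 1, 2 and timespan T = 6:
windows = [start_window([2, 1, 2], j, T=6) for j in range(3)]
# Variables x_{i,t} with t outside [release, deadline] can be fixed to 0.
```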
This result can be generalized to consider the previously defined ratio θ, such that the total number of variables required after this simple QUBO problem pre-processing is θN M [T − θM p̄ + 1].

III. QUBO FORMULATION REFINEMENTS

Although the above formulation proves sufficient for running JSPs on the D-Wave machine, we explore a few potential refinements. The first pushes the limit of simple variable pruning by considering more advanced criteria for reducing the possible execution window of each task. The second refinement proposes a compromise between the decision version of the JSP and a full optimization version.

A. Window shaving

In the time-indexed formalism, reducing the execution windows of operations (i.e., shaving) [20], or in the disjunctive approach, adjusting the heads and tails of operations [21,22], or more generally, applying constraint propagation techniques (e.g., [23]), together constitute the basis for a number of classical approaches to solving the JSP. Shaving is sometimes used as a pre-processing step or as a way to obtain a lower bound on the makespan before applying other methods. The interest from our perspective is to showcase how such classical techniques remain relevant, without straying from our quantum annealing approach, when applied to the problem of pruning as many variables as possible. This enables larger problems to be considered and improves the success rate of embeddability in general (see Figure 3), without significantly affecting the order of magnitude of the overall time to solution in the asymptotic regime. Further immediate advantages of reducing the required number of qubits become apparent during the compilation of JSP instances for the D-Wave device due to the associated embedding overhead that depends directly on the number of variables. The shaving process is typically handled by a classical algorithm whose worst-case complexity remains polynomial.
While this does not negatively impact the fundamental complexity of solving JSP instances, for pragmatic benchmarking the execution time needs to be taken into account and added to the quantum annealing runtime to properly report the time to solution of the whole algorithm. Different elimination rules can be applied. We focus herein on the iterated Carlier and Pinson (ICP) procedure [21], reviewed in the appendix, with worst-case complexity given by O(N²M²T log N). Instead of looking at the one-job sub-problems and their constraints to eliminate variables, as we did for the simple pruning, we look at the one-machine sub-problems and their associated constraints to further prune variables. An example of the resulting QUBO problem is presented in Figure 1-c.

B. Timespan discrimination

We explore a method of extracting more information regarding the actual optimal makespan of a problem within a single call to the solver by breaking the degeneracy of the ground states and spreading them over some finite energy scale, distinguishing the energy of valid schedules on the basis of their makespan. Taken to the extreme, this approach would amount to solving the full optimization problem. We find that the resulting QUBO problem is poorly suited to a solver with limited precision, so a balance must be struck between extra information and the precision requirement. A systematic study of how best to balance the amount of information obtained versus the extra cost will be the subject of future work. We propose to add a number of linear terms, or local fields, to the QUBO problem to slightly penalize valid solutions with larger makespans. We do this by adding a cost to the last operation of each job, that is, the set {O k1 , . . . , O k N }. At the same time, we require that the new range of energy over which the feasible solutions are spread stays within the minimum logical QUBO problem's gap given by ∆E = min{η, α, β}.
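One way to realize the local-field penalty described above is to charge the last operation of each job in proportion to its completion time; the penalty scale epsilon and this particular cost profile are our illustrative choices, constrained only by keeping the total energy spread below the logical gap ∆E = min{η, α, β}.

```python
# Local fields penalizing late completion of the last operation of each
# job. last_ops: the indices {k_1, ..., k_N}; p: durations; epsilon: a
# small penalty scale (must keep the spread below the logical gap).
def makespan_fields(last_ops, p, T, epsilon):
    fields = {}
    for i in last_ops:
        for t in range(T + 1):
            fields[(i, t)] = epsilon * (t + p[i])   # completion-time cost
    return fields

# Hypothetical: jobs end with operations 2 and 5, durations 2 and 1, T = 4.
h = makespan_fields([2, 5], {2: 2, 5: 1}, T=4, epsilon=0.01)
```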
If the solver's precision can accommodate K distinguishable energy bins, then makespans within [T − K, T] can be immediately identified from their energy values. The procedure is illustrated in Figure 1-d and some implications are discussed in the appendix.

IV. ENSEMBLE PRE-CHARACTERIZATION AND COMPILATION

We now turn to a few important elements of our computational strategy for solving JSP instances. We first show how a careful pre-characterization of classes of random JSP instances, representative of the problems to be run on the quantum optimizer, provides very useful information regarding the shape of the search space for T. We then describe how instances are compiled to run on the actual hardware.

A. Makespan Estimation

In Figure 2, we show the distributions of the optimal makespans T for different ensembles of instances parametrized by their size N = M, by the possible values of task durations P p = {p min , . . . , p max }, and by the ratio θ ≤ 1 of the number of machines used by each job. Instances are generated randomly by selecting θM distinct machines for each job and assigning an execution time to each operation randomly. For each set of parameters, we can compute solutions with a classical exhaustive solver in order to identify the median of the distribution T(N, P p , θ) as well as the other quantiles. These could also be inferred from previously solved instances with the proposed annealing solver. The resulting information can be used to guide the binary search required to solve the optimization problem. Figure 2 indicates that a normal distribution is an adequate approximation, so we need only to estimate its average T̄ and variance σ². Interestingly, from the characterization of the families of instances up to N = 10 we find that, at least in the region explored, the average minimum makespan T̄ is proportional to the average execution time of a job, p̄θN, where p̄ is the mean of P p .
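The instance generation and the linear makespan ansatz described above can be sketched as follows; the parameter choices are illustrative, not from the paper's benchmark set.

```python
import random

# Random JSP instance: each job uses theta*M distinct machines, with
# execution times drawn uniformly from the allowed set P_p.
def random_instance(N, M, theta, p_values, seed=0):
    rng = random.Random(seed)
    per_job = int(theta * M)
    return [[(m, rng.choice(p_values))
             for m in rng.sample(range(M), per_job)]
            for _ in range(N)]

# Linear ansatz for the average optimal makespan: T_bar ~ p_bar * theta * N.
def makespan_estimate(N, theta, p_values):
    p_bar = sum(p_values) / len(p_values)
    return p_bar * theta * N

instance = random_instance(N=4, M=4, theta=0.5, p_values=[1, 2])
```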
This linear ansatz allows for the extrapolation of approximate resource requirements for classes of problems which have not yet been pre-characterized, and it constitutes an educated guess for classes of problems which cannot be pre-characterized due to their difficulty or size. The accuracy of these functional forms was verified by computing the relative error of the prediction versus the fit of the makespan distribution of each parametrized family up to N = M = 9 and p max = 20, using 200 instances to compute the makespan histogram. The predictions for T̄ are consistently at least 95% accurate, while those for σ have at worst a 30% error margin, a very approximate but sufficient model for the current purpose of guiding the binary search.

B. Compilation

The graph-minor embedding technique (abbreviated simply "embedding") represents the de facto method of recasting Ising problems to a form compatible with the layout of the annealer's architecture [24,25], which for the D-Wave Two is a Chimera graph [1]. Formally, we seek an isomorphism between the problem's QUBO graph and a graph minor of the solver. This procedure has become a standard in solving applied problems using quantum annealing [6,11] and can be thought of as the analogue of compilation in a digital computer programming framework, during which variables are assigned to hardware registers and memory locations. This process is covered in more detail in the appendix. An example of embedding for a 5 x 5 JSP instance with θ = 1 and T = 7 is shown in Figure 3-a. Finding the optimal tiling that uses the fewest qubits is NP-hard [26], and the standard approach is to employ heuristic algorithms [27].
In general, for the embedding of time-indexed mixed-integer programming QUBO problems of size N into a graph of degree k, one should expect a quadratic overhead in the number of binary variables on the order of aN², with a ≤ (k − 2)⁻¹ depending on the embedding algorithm and the hardware connectivity [11]. This quadratic scaling is apparent in Figure 3-b, where we report on the compilation attempts using the algorithm in [27]. Results are presented for the D-Wave chip installed at NASA Ames at the time of this study, for a larger chip with the same size of Chimera block and connectivity pattern (like the latest chip currently being manufactured by D-Wave Systems), and for a speculative yet-larger chip where the Chimera block is twice as large. We deem a JSP instance embeddable when the respective decision Hamiltonian H T , with the timespan set to the optimal makespan, is embeddable, so the decrease in probability of embedding with increasing system size is closely related to the shift and spreading of the optimal makespan distributions for ensembles of increasing size (see Figure 2). What we observe is that, with the available algorithms, the current architecture admits embedded JSP instances whose total execution time N M θ p̄ is around 20 time units, while near-future (we estimate 2 years) D-Wave chip architectures could potentially double that. As noted in similar studies (e.g., [6]), graph connectivity has a much more dramatic impact on embeddability than qubit count.

FIG. 3: b) 1000 random instances have been generated for each point, and a cutoff of 2 minutes has been set for the heuristic algorithm to find a valid topological embedding. Results for different sizes of Chimera are shown. c) Optimal parameter-setting analysis for the ensembles of JSP instances we studied.
Each point corresponds to the number of qubits and the optimal J F (see main text) of a random instance, and each color represents a parametrized ensemble (green: 3 x 3, purple: 4 x 4, yellow: 5 x 5, blue: 6 x 6; darker colors represent ensembles with P p = [1, 1] as opposed to lighter colors which indicate P p = [0, 2]). Distributions on the right of the scatter plots represent Gaussian fits of the histogram of the optimal J F for each ensemble. Runtime results are averaged over an ungauged run and 4 additional runs with random gauges [28].

Once the topological aspect of embedding has been solved, we set the ferromagnetic interactions needed to adapt the connectivity of the solver to the problem at hand. For the purpose of this work, this should be regarded as a technicality necessary to tune the performance of the experimental analog device and we include the results for completeness. Introductory details about the procedure can be found in [6,11]. In Figure 3-c we show a characterization of the ensemble of JSP instances (parametrized by N, M, θ, and P p , as described at the beginning of this section). We present the best ferromagnetic couplings found by runs on the D-Wave machine under the simplification of a uniform ferromagnetic coupling by solving the embedded problems with values of J F from 0.4 to 1.8 in relative energy units of the largest coupling of the original Ising problem.
The run parameters used to determine the best J F are the same as we report in the following sections, and the problem sets tested correspond to Hamiltonians whose timespan is equal to the sought optimal makespan. This parameter-setting approach is similar to the one followed in [6] for operational planning problems, where the instance ensembles were classified by problem size before compilation. What emerges from this preliminary analysis is that each parametrized ensemble can be associated to a distribution of optimal J F that can be quite wide, especially for the ensembles with p min = 0 and large p max . This spread might discourage the use of the mean value of such a distribution as a predictor of the best J F to use for the embedding of new untested instances. However, the results from this predictor appear to be better than the more intuitive prediction obtained by correlating the number of qubits after compilation with the optimal J F . This means that for the D-Wave machine to achieve optimal performance on structured problems, it seems to be beneficial to use the information contained in the structure of the logical problem to determine the best parameters. We note that this "offline" parameter-setting could be used in combination with "online" performance estimation methods such as the ones described in [28] in order to reach the best possible instance-specific J F with a series of quick experimental runs. The application of these techniques, together with the testing of alternative offline predictors, will be the subject of future work.

V. RESULTS OF TEST RUNS AND DISCUSSION

A complete quantum annealing JSP solver designed to solve an instance to optimality using our proposed formulation will require the independent solution of several embedded instances {H T }, each corresponding to a different timespan T.
Assuming that the embedding time, the machine setup time, and the latency between subsequent operations can all be neglected, due to their being non-fundamental, the running time of the approach for a specific JSP instance reduces to the expected total annealing time necessary to find the optimal solution of each embedded instance with a specified minimum target probability. The probability of ending the annealing cycle in a desired ground state depends, in an essentially unknown way, on the embedded Ising Hamiltonian spectrum, the relaxation properties of the environment, the effect of noise, and the annealing profile. Understanding through an ab initio approach what is the best computational strategy appears to be a formidable undertaking that would require theoretical breakthroughs in the understanding of open-system quantum annealing [29,30], as well as a tailored algorithmic analysis that could take advantage of the problem structure that the annealer needs to solve. For the time being, and for the purposes of this work, it seems much more practical to limit these early investigations to the most relevant instances, and to lay out empirical procedures that work under some general assumptions. More specifically, we focus on solving the CSP version of the JSP, not the full optimization problem, and we therefore only benchmark with the D-Wave machine the Hamiltonians whose timespan equals the minimal makespan. We note however that a full optimization solver can be realized by leveraging data analysis of past results on parametrized ensembles and by implementing an adaptive binary search. Full details can be found in a longer version of this work [31]. On the quantum annealer installed at NASA Ames (it has 509 working qubits; details are presented in [32]), we run hundreds of instances, sampling the ensembles N = M ∈ {3, 4, 5, 6}, θ ∈ {0.5, 1}, and P p ∈ {[1, 1], [0, 2]}.
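The adaptive binary search mentioned above can be sketched as follows; here `oracle` is a stand-in for a full annealing run on the decision Hamiltonian H T , returning whether a valid schedule with makespan at most T was found. The quantile-guided choice of starting bounds from Section IV is omitted for brevity.

```python
# Binary search over the timespan T, treating the (quantum or classical)
# decision solver as a feasibility oracle. Assumes the oracle is reliable:
# a false negative from the annealer would misguide the search.
def minimal_makespan(oracle, t_low, t_high):
    while t_low < t_high:
        t_mid = (t_low + t_high) // 2
        if oracle(t_mid):          # feasible: tighten the upper bound
            t_high = t_mid
        else:                      # infeasible: raise the lower bound
            t_low = t_mid + 1
    return t_low

# Toy oracle whose true optimal makespan is 7:
best = minimal_makespan(lambda T: T >= 7, t_low=1, t_high=20)
```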
For each instance, we report results, such as runtimes, at the most optimal J F among those tested, assuming the application of an optimized parameter-setting procedure along the lines of that described in the previous section. Figure 4-a displays the total annealing repetitions required to achieve a 99% probability of success on the ground state of H T , with each repetition lasting t A = 20 µs, as a function of the number of qubits in the embedded (and pruned) Hamiltonian. We observe an exponential increase in complexity with increasing Hamiltonian size, for both classes of problems studied. This likely means that while the problems tested are small, the analog optimization procedure intrinsic to the D-Wave device's operation is already subject to the fundamental complexity bottlenecks of the JSP. It is, however, premature to draw conclusions about performance scaling of the technology given the current constraints on calibration procedures, annealing time, etc. Many of these problems are expected to be either overcome or nearly so with the next generation of D-Wave chip, at which point more extensive experimentation will be warranted. In Figure 4-b, we compare the performance of the D-Wave device to two exhaustive classical algorithms in order to gain insight on how current quantum annealing technology compares with paradigmatic classical optimization methods. Leaving the performance of approximate solutions for future work, we chose not to explore the plethora of possible heuristic methods as we operate the D-Wave machine, seeking the global optimum.

FIG. 4: a) Annealing repetitions needed for a 99% success probability, at the optimal J F (see Figure 3-c). The number of qubits on the x-axis represents the qubits used after embedding. b) Correlation plot between classical solvers and the D-Wave optimizer. Gray and violet points represent runtimes compared with algorithm B, and cyan and red are compared to the MS algorithm, respectively, with θ = 1 and θ = 0.5. All results presented correspond to the best out of 5 gauges selected randomly for every instance. In case the machine returns embedding components whose values are discordant, we apply a majority voting rule to recover a solution within the logical subspace [6,11,28,33,34]. We observe a deviation of about an order of magnitude on the annealing time if we average over 5 gauges instead of picking the best one, indicating that there is considerable room for improvement if we were to apply more-advanced calibration techniques [32].

The first algorithm, B, detailed in [35], exploits the disjunctive graph representation and a branch-and-bound strategy that very effectively combines a branching scheme based on selecting the direction of a single disjunctive edge (according to some single-machine constraints), and a technique introduced in [36] for fixing additional disjunctions (based on a preemptive relaxation). It has publicly available code and is considered a well-performing complete solver for the small instances currently accessible to us, while remaining competitive for larger ones even if other classical approaches become more favorable [37]. B has been used in [38] to discuss the possibility of a phase transition in the JSP, demonstrating that the random instances with N = M are particularly hard families of problems, not unlike what is observed for the quantum annealing implementation of planning problems based on graph vertex coloring [9]. The second algorithm, MS, introduced in [20], proposes a time-based branching scheme where a decision is made at each node to either schedule or delay one of the available operations at the current time. The authors then rely on a series of shaving procedures such as those proposed by [21] to determine the new bound and whether the choice leads to valid schedules.
This algorithm provides a natural comparison with the present quantum annealing approach, as it solves the decision version of the JSP in a fashion very similar to the time-indexed formulation we have implemented on the D-Wave machine, and it makes use of the same shaving technique that we adapted as a pre-processing step for variable pruning. We should mention, however, that the variable pruning we implemented to simplify H_T is employed at each node of the classical branch-and-bound algorithm, so the overall computational time of MS is usually much greater than that of our one-pass pre-processing step, and in general its runtime does not scale polynomially with the problem size. What is apparent from the correlation plot in Figure 4-b is that the D-Wave machine is easily outperformed by a classical algorithm run on a modern single-core processor, and that the problem sizes tested in this study are still too small for the asymptotic behavior of the classical algorithms to be clearly demonstrated and measured. The comparison between the D-Wave machine's solution time for H_T and the full optimization provided by B pits two very different algorithms against each other, and shows that B solves all of the full optimization problems tested within milliseconds, whereas the D-Wave machine can sometimes take tenths of a second (before applying the multiplier factor 2 due to the binary search; see the appendix). When larger chips become available, however, it will be interesting to compare B to a quantum annealing solver at sizes considered B-intractable due to increasing memory and time requirements. The comparison with the MS method has a promising signature even now, with roughly half of the instances being solved by the D-Wave hardware faster than by the MS algorithm (with the caveat that our straightforward implementation is not fully optimized).
Interestingly, the different parametrized ensembles of problems have distinctly different computational complexity, characterized by well-recognizable average computational times to solution for MS (i.e., the points are "stacked around horizontal lines" in Figure 4-b), whereas the D-Wave machine's complexity seems to be sensitive mostly to the total qubit count (see Figure 4-a), irrespective of the problem class. We emphasize again that conclusions on speedup and asymptotic advantage still cannot be confirmed until improved hardware with more qubits and less noise becomes available for empirical testing.

VI. CONCLUSIONS

Although it is probable that the quantum annealing-based JSP solver proposed herein will not prove competitive until the arrival of an annealer a few generations away, the implementation of a provably tough application from top to bottom was missing in the literature, and our work has led to noteworthy outcomes that we expect will pave the way for more advanced applications of quantum annealing. Whereas part of the attraction of quantum annealing is the possibility of applying the method irrespective of the structure of the QUBO problem, we have shown how to design a quantum annealing solver, mindful of many of the peculiarities of the annealing hardware and the problem at hand, for improved performance.

FIG. 5 (caption, continued): Mapping to QUBO problems is discussed in Sections II and III. V-VI) Pre-characterization for parameter setting is described in Section VI. VII) Structured run strategies adapted to specific problems have not, to our knowledge, been discussed before; we discuss a prescription in the appendix. VIII) The only decoding required in our work is majority voting within embedding components to recover error-free logical solutions. The time-indexed formulation then provides QUBO problem solutions that can straightforwardly be represented as Gantt charts of the schedules.
Figure 5 shows a schematic view of the streamlined solving process describing a full JSP optimization solver. The pictured scheme is not intended to be complete; for example, the solving framework can benefit from other concepts such as performance-tuning techniques [28] and error-correction repetition lattices [39]. The use of the decision version of the problem can be combined with a properly designed search strategy (the simplest being a binary search) in order to seek the minimum value of the common deadline of feasible schedules. The proposed timespan discrimination further provides an adjustable compromise between the full optimization and decision formulations of the problem, allowing for instant benefits from future improvements in precision without the need for a new formulation or additional binary variables to implement the makespan minimization as a term in the objective function. As will be explored further in future work, we found that the instance pre-characterization performed to fine-tune the solver parameters can also be used to improve the search strategy, and that it constitutes a tool whose use we expect to become common practice in problems amenable to CSP formulations such as the ones proposed for the JSP. Additionally, we have shown that there is great potential in adapting classical algorithms with favorable polynomial scaling as pre-processing techniques to either prune variables or reduce the search space. Hybrid approaches and metaheuristics are already fruitful areas of research, and ones that are likely to see promising developments with the advent of these new quantum heuristic algorithms.

Acknowledgements

The authors would like to thank J. Frank, M. Do, E. G. Rieffel, B. O'Gorman, M. Bucyk, P. Haghnegahdar, and other researchers at QuAIL and 1QBit for useful input and discussions.

Appendix A: JSP and QUBO formulation

In this appendix we expand on the penalty form used for the constraints and the alternative reward-based formulation, as well as the timespan discrimination scheme.
Penalties versus rewards formulation

The encoding of constraints as terms in a QUBO problem can either reward the satisfaction of these constraints or penalize their violation. Although the distinction may at first seem artificial, the actual QUBO problems generated differ and can lead to different performance on an imperfect annealer. We present one such alternative formulation where the precedence constraint (7) is instead encoded as a reward for correct ordering, by replacing +η h_1(x) with −η' h'_1(x), where

h'_1(x) = Σ_n Σ_{k_{n−1} < i < k_n} Σ_{t, t' : t + p_i ≤ t'} x_{i,t} x_{i+1,t'}.  (A1)

The new Hamiltonian is

H'_T(x) = −η' h'_1(x) + α h_2(x) + β h_3(x).  (A2)

The reward attributed to a solution is equal to η' times the number of satisfied precedence constraints. A feasible solution, where all constraints are satisfied, will have energy equal to −η'(k_N − N). The functions h_1 and h'_1 differ only in the range of t': in the rewards version we have t' − t ≥ p_i, and in the penalties version we have t' − t < p_i. The fact that we allow equality in the rewards version means that h'_1 will always have more quadratic terms than h_1, regardless of variable pruning, leading to a more connected QUBO graph and therefore a harder problem to embed. Another important disadvantage is revealed when choosing the coefficients η', α, and β in H'_T to guarantee that no infeasible solution has energy less than −η'(k_N − N). This can happen if the penalty incurred by breaking constraints h_2 or h_3 is less than the potential reward gained from h'_1. The penalty-based formulation simply requires that η, α, and β be larger than 0. The following lemma summarizes the equivalent condition for the reward-based case.

Lemma 1. If β/η' ≥ 3 and α > 0, then

H'_T(x) ≥ −η'(k_N − N),  (A3)

for all x, with equality if and only if x represents a feasible schedule.

We also found examples showing that these bounds on the coefficients are tight.
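To make the coupling structure concrete, the toy sketch below (our illustration, not the authors' code) enumerates, for one pair of consecutive operations O_i and O_{i+1} on an unpruned time grid of size T, the pairs of start times coupled by the penalty term h_1 (invalid orderings, t' − t < p_i) and by the reward term h'_1 (valid orderings, t' − t ≥ p_i); together the two sets cover every pair of start times.

```python
# Sketch: enumerate the quadratic precedence terms for one pair of
# consecutive operations over start times 0..T-1 (illustrative only).
# Penalty version couples x_{i,t} x_{i+1,t'} with t' - t < p_i;
# reward version couples the complementary range t' - t >= p_i.

def precedence_terms(p_i, T, reward=False):
    terms = []
    for t in range(T):          # start time of O_i
        for t2 in range(T):     # start time of O_{i+1}
            invalid = t2 - t < p_i
            if (not invalid) if reward else invalid:
                terms.append((t, t2))
    return terms

penalty = precedence_terms(p_i=2, T=4)               # orderings to penalize
reward = precedence_terms(p_i=2, T=4, reward=True)   # orderings to reward
print(len(penalty), len(reward))  # partitions the 16 pairs: 13 + 3
```

On a full, unpruned grid the two sets simply partition the T² pairs; the relative term counts discussed in the text refer to the formulation after variable pruning.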
The fact that β/η' must be greater than or equal to 3 is a clear disadvantage because of the precision issues of the current hardware. In H_T we can set all of the penalty coefficients (and hence all non-zero couplers) to be equal, which is the best possible case from the point of view of precision.

Timespan discrimination

The timespan discrimination that we propose is a scheme that strikes a compromise between the information obtained from each solver call and the precision required for this information to be accurate and obtained efficiently. Specifically, we want this extra information to help identify the optimal makespan by looking at the energy of the solutions. This means breaking the degeneracy of the ground states (i.e., the valid solutions) and assigning different energy sectors to different makespans. To prevent collisions with invalid solutions, these energy sectors have to fit within the logical QUBO problem's gap, given by ∆E = min{η, α, β}. We note that this will affect the actual gap (as seen by the hardware) of the embedded Ising model. Since the binary variables we have defined in the proposed formulation are not sufficient to write a simple expression for the makespan of a given solution, additional auxiliary variables and associated constraints would need to be introduced. Instead, a simple way to implement this feature in our QUBO formulation is to add a number of local fields to the binary variables corresponding to the last operation of each job, {O_{k_1}, . . . , O_{k_N}}. Since the makespan depends on the completion time of the last operation, the precedence constraint guarantees that the makespan of a valid solution will be equal to the completion time of one of those operations. We can then select the local field appropriately, as a function of the time index t, to penalize a fixed number K of the larger makespans, ranging from T − K + 1 to T.
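The sector construction can be sketched numerically. Assuming, as this appendix describes, that the earliest penalized sector T − K + 1 receives the minimum resolvable field ε and that each later sector must exceed M_final times the previous one (so that energy ranges of distinct makespans cannot overlap), the fields grow geometrically; all numeric values below are illustrative.

```python
# Sketch: local fields for timespan discrimination over K makespan sectors.
# fields[0] is h_{T-K+1} = eps; each subsequent sector follows
# h_{T'} = h_{T'-1} * M_final + eps, so the maximum total penalty of a
# sector (M_final completions) stays below the next sector's field.

def sector_fields(K, M_final, eps=1.0):
    fields = [eps]                                   # h_{T-K+1}
    for _ in range(K - 1):
        fields.append(fields[-1] * M_final + eps)    # next sector's field
    return fields                                    # [h_{T-K+1}, ..., h_T]

h = sector_fields(K=3, M_final=2, eps=1.0)
print(h)  # [1.0, 3.0, 7.0]
```

The geometric growth is what produces the ∆E/(M_final^K) scaling of the energy resolution quoted in the text.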
Within the sector assigned to the time step T, we need to further divide ∆E_T by the maximum number of operations that can complete at T to obtain the largest value we can use as the local field h_T, i.e., the number of distinct machines used by at least one operation in the set {O_{k_1}, . . . , O_{k_N}}, denoted by M_final. If K is larger than 1, we also need to ensure that contributions from the various sectors can be differentiated. The objective is to assign a distinct T-dependent energy range to all valid schedules with makespans within [T − K, T]. More precisely, we relate the local fields of consecutive sectors through the recursive relation

h_T = h_{T−1} M_final + ε,  (A4)

where ε is the minimum logical energy resolvable by the annealer. Considering that ε is also the minimum local field we can use for h_{T−K+1}, and that the maximum total penalty we can assign through this time-discrimination procedure is ∆E − ε, it is easy to see that the energy resolution should scale as ∆E/(M_final^K). An example of the use of local fields for timespan discrimination is shown in Figure 1-d of the main text for the case K = 1.

The embedded Ising Hamiltonian takes the form

H_Ising = Σ_i Σ_{k∈V(i)} (h_i / N_V(i)) σ^z_k + Σ_{(i,j)} J_ij σ^z_{α_i} σ^z_{β_j} − Σ_i Σ_{(k,k')∈E(i,i)} J^F_{i,k,k'} σ^z_k σ^z_{k'},  (B3)

and the annealing Hamiltonian interpolates between it and a transverse-field driver,

H(t) = B(t) H_Ising − A(t) Σ_k σ^x_k,  (B4)

where for each logical variable index i we have a corresponding ensemble of qubits given by the set of vertices V(i) in the hardware graph, with |V(i)| = N_V(i). Edges between logical variables are denoted E(i, j), and edges within the subgraph of V(i) are denoted E(i, i). The couplings J_ij and local fields h_i represent the logical terms obtained after applying the linear QUBO-Ising transformation to Eq. (6). The J^F_{i,k,k'} are embedding parameters for vertex set V(i) and (k, k') ∈ E(i, i) (see the discussion below on the ferromagnetic coupling). The equations above assume that a local field h_i is distributed uniformly among the vertices V(i) and that the coupling J_ij is attributed to a single randomly selected edge (α_i, β_j) among the available couplers E(i, j), but other distributions can be chosen.
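A minimal sketch of this parameter distribution, with hypothetical chains and edge lists (the helper names and data structures are ours, not D-Wave API calls): each logical field h_i is split uniformly over its chain, each logical coupling J_ij is placed on one (here: the first) available physical edge, and every intra-chain edge receives the ferromagnetic bond −J_F.

```python
# Sketch: spread logical Ising parameters onto an embedded chain layout.
# chains[i] lists the physical qubits of logical variable i;
# chain_edges[i] the couplers inside that chain;
# inter_edges[(i, j)] the physical edges available between chains i and j.
# All graphs and values below are illustrative assumptions.

def embed_parameters(h, J, chains, chain_edges, inter_edges, J_F):
    phys_h, phys_J = {}, {}
    for i, verts in chains.items():
        for q in verts:
            phys_h[q] = h[i] / len(verts)   # uniform distribution of h_i
    for (i, j), val in J.items():
        q1, q2 = inter_edges[(i, j)][0]     # one chosen physical edge
        phys_J[(q1, q2)] = val
    for i, edges in chain_edges.items():
        for e in edges:
            phys_J[e] = -J_F                # ferromagnetic chain bond
    return phys_h, phys_J

h = {0: 1.0, 1: -0.5}
J = {(0, 1): 0.8}
chains = {0: [10, 11], 1: [12]}
chain_edges = {0: [(10, 11)], 1: []}
inter_edges = {(0, 1): [(11, 12)]}
ph, pj = embed_parameters(h, J, chains, chain_edges, inter_edges, J_F=2.0)
print(ph, pj)
```

In practice the whole dictionary is then rescaled by J_F to fit the machine's parameter range, as described next.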
In the actual hardware implementation, we rescale the Hamiltonian by dividing it by J_F, the value assigned to all of the J^F_{i,k,k'}, as explained below. This is due to the limited range of J_ij and h_i allowed by the machine [11]. Once a valid embedding is chosen, the ferromagnetic interactions J^F_{i,k,k'} in Eq. (B3) need to be set appropriately. While the purpose of these couplings is to penalize states for which σ^z_k ≠ σ^z_{k'} for k, k' ∈ V(i), setting them to a large value negatively affects the performance of the annealer, due to the finite energy resolution of the machine (given that all parameters must later be rescaled to the actual limited parameter range of the solver) and to the slowing down of the dynamics of the quantum system associated with the introduction of small energy gaps. There is guidance from research in physics [11,40] and mathematics [41] on which values could represent the optimal J^F_{i,k,k'} settings, but for application problems it is customary to employ empirical prescriptions based on pre-characterization [6] or on estimation techniques of performance [28]. Despite embedding being a time-consuming classical computational procedure, it is usually not considered part of the computation, and its runtime is not measured in determining algorithmic complexity. This is because we can assume that for parametrized families of problems one could create and make available a portfolio of embeddings compatible with all instances belonging to a given family. The existence of such a library would reduce the computational cost to a simple query in a lookup table, although this could come at the price of the available embedding not being fully optimized for the particular problem instance.

Quantum annealing optimization solver

We now detail our approach to solving individual JSP instances. We shall assume that the instance at hand can be identified as belonging to a pre-characterized family of instances at minimal computational cost.
This can involve identifying N, M, and θ, as well as the approximate distribution of execution times of the operations. The pre-characterization is assumed to include a statistical distribution of optimal makespans as well as the appropriate solver parameters (J_F, optimal annealing time, etc.). Using this information, we need to build an ensemble of queries Q = {q} to be submitted to the D-Wave quantum annealer to solve a problem H. Each element of Q is a triple (t_A, R, T) indicating that the query consists of R identical annealings of the embedded Hamiltonian H_T, each with annealing time t_A. To determine the elements of Q we first make some assumptions, namely: i) sufficient statistics: for each query, R is sufficiently large to sample appropriately the ensembles defined in Eqs. (B7)-(B9); ii) generalized adiabaticity: t_A is optimized (over the window of available annealing times) for the best annealing performance in finding a ground state of H_T (i.e., the annealing procedure is such that the total expected annealing time t_A R required to evolve to a ground state is as small as possible compared to the time required to evolve to an excited state with the same probability). Both of these conditions can in principle be achieved by measuring the appropriate optimal parameters R(q) and t_A(q) through extensive test runs over random ensembles of instances. However, we note that verifying these assumptions experimentally is currently beyond the operational limits of the D-Wave Two device, since the optimal t_A for generalized adiabaticity is expected to be smaller than the minimum programmable value [3]. Furthermore, we deemed the considerable machine time required for such a large-scale study too onerous in the context of this initial foray. Fortunately, the first limitation is expected to be lifted with the next generation of chip, at which point nothing would prevent the proper determination of a family-specific choice of R and t_A.
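The repetition count per query follows directly from the target confidence r_0 and the measured per-anneal ground-state rate r_q (Eq. (B5) of this appendix); a direct transcription, with illustrative rates:

```python
# Sketch of Eq. (B5): number of repetitions R needed for a query to
# succeed with probability at least r0, given per-anneal success rate rq,
# and the resulting contribution t_A * R to the total runtime (Eq. (B6)).

import math

def repetitions(r0, rq):
    return math.ceil(math.log(1 - r0) / math.log(1 - rq))

def query_time(t_A, r0, rq):
    return t_A * repetitions(r0, rq)

print(repetitions(r0=0.99, rq=0.5))    # easy ground state: few repetitions
print(repetitions(r0=0.99, rq=0.01))   # rare ground state: many repetitions
print(query_time(20e-6, 0.99, 0.01))   # total annealing time for one query
```

Summing `query_time` over the ensemble Q gives the estimated time to solution of Eq. (B6).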
Given a specified annealing time, the number of anneals is determined by specifying r_0, the target probability of success (or confidence level) for a query, and measuring r_q, the rate of occurrence of the ground state per repetition for that query:

R = log[1 − r_0] / log[1 − r_q].  (B5)

The rate r_q depends on the family, on T, and on the other parameters. The minimum observed during pre-characterization should be used to guarantee that the ground state is found with at least the specified confidence r_0. Formally, the estimated time to solution of a problem is then given by

T = Σ_{q∈Q} t_A log[1 − r_0] / log[1 − r_q].  (B6)

The total probability of success of solving the problem in time T will consequently be Π_q r_q. For the results presented in this paper, we used R = 500 000 and t_A = min(t_A) = 20 µs. We can define three different sets of qubit configurations that can be returned when the annealer is queried with q. E is the set of configurations whose energy is larger than ∆E, as defined in Section III of the paper; these configurations represent invalid schedules. V is the set of solutions with zero energy, i.e., the solutions whose makespan T' is small enough (T' ≤ T − K) that they have not been discriminated by the procedure described in the subsection on timespan discrimination. Finally, S is the set of valid solutions that can be discriminated (T' ∈ (T − K, T]). Depending on the timespan T of the problem Hamiltonian H_T and the optimal makespan T*, the quantum annealer samples the following configuration spaces (reporting R samples per query):

T < T*  →  V, S = ∅  →  E_0 > ∆E,  (B7)
T* ∈ (T − K, T]  →  V = ∅, S ≠ ∅  →  E_0 ∈ (0, ∆E],  (B8)
T* ≤ T − K  →  E, V, S ≠ ∅  →  E_0 = 0.  (B9)

Condition (B8) is the desired case, where the ground state of H_T with energy E_0 corresponds to a valid schedule with the optimal makespan we seek.
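The three cases (B7)-(B9) amount to a simple decision rule on the lowest returned energy E_0 relative to the logical gap ∆E; a sketch, with the case outcomes paraphrased as search actions:

```python
# Sketch: interpret the lowest energy E0 returned for H_T according to
# cases (B7)-(B9). Delta_E is the logical gap min{eta, alpha, beta};
# the numeric values used below are illustrative.

def interpret(E0, Delta_E):
    if E0 == 0:
        # (B9): valid schedule with makespan <= T - K
        return "valid schedule below discriminated sectors: lower T_max"
    if E0 <= Delta_E:
        # (B8): valid schedule inside a discriminated sector
        return "valid schedule in discriminated sector: optimum found"
    # (B7): every sampled configuration is an invalid schedule
    return "no valid schedule for this timespan: raise T_min"

print(interpret(0.0, 1.0))
print(interpret(0.4, 1.0))
print(interpret(2.5, 1.0))
```

This rule is what lets the binary search of the next subsection update its bounds after each query.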
The ground states corresponding to conditions (B7) and (B9) are instead, respectively, invalid schedules and valid schedules whose makespan could correspond to a global minimum (to be determined by subsequent calls). The above-described assumption ii) is essential to justify aborting the search when case (B8) is encountered. If R and t_A are appropriately chosen, the ground state will be found preferentially over all other solutions, so that one can stop annealing reasonably soon (i.e., after a number of reads on the order of R) in the absence of the appearance of a zero-energy solution. We can then declare this minimum-energy configuration, found within (0, ∆E], to be the ground state, and the associated makespan and schedule to be the optimal solution of the optimization problem. On the other hand, we note that if K = 0, a minimum of two calls are required to solve the problem to optimality: one to show that no valid solution is found for T = T* − 1, and one to show that a zero-energy configuration is found for T = T*. While for cases (B8) and (B9) the appearance of an energy less than or equal to ∆E heuristically determines the trigger that stops the annealing of H_T, for case (B7) we need a prescription, based on pre-characterization, for how long to anneal in order to be confident that T < T*. While optimizing these times is a research program on its own that requires extensive testing, we expect the characteristic time for achieving condition (B8) when T = T* to be of the same order of magnitude as this unknown runtime.

Search strategy

The final important component of the computational strategy consists in determining the sequence of timespans of the calls (i.e., the ensemble Q). Here we propose to select the queries based on an optimized binary search that makes informed dichotomic decisions based on the pre-characterization of the distribution of optimal makespans of the parametrized ensembles, as described in the previous sections.
More specifically, the search is designed under the assumption that the JSP instance at hand belongs to a family whose makespan distribution has a normal form with average makespan T̄ and variance σ². This fitted distribution is the same one shown for each P_p in Figure 2-a of the main text, with the tails cut off at locations corresponding to an instance-dependent upper bound T_max and strict lower bound T_min (see the following section on bounds). Once the initial T_min and T_max are set, the binary search proceeds as follows. To ensure a logarithmic scaling for the search, we need to take into account the normal distribution of makespans by attempting to bisect the range (T_min, T_max] such that the probability of finding the optimal makespan on either side is roughly equal. In other words, T should be selected by solving the following equation and rounding to the nearest integer:

erf[(T_max + 1/2 − T̄)/(σ√2)] + erf[(T_min + 1/2 − T̄)/(σ√2)] = erf[(T + 1/2 − T̄)/(σ√2)] + erf[(T − max(1, K) + 1/2 − T̄)/(σ√2)],  (B10)

where erf(x) is the error function. For our current purpose, an inexpensive approximation of the error function is sufficient. In most cases this condition means initializing the search at T = T̄. We produce a query q_0 for the annealing of H_T. If no schedule is found (condition (B7)), we simply let T_min = T. If condition (B9) is verified, on the other hand, we update T_max to be the makespan of the valid solution found (which is equal to T − max(1, K) + 1 in the worst case) for the determination of the next query q_1. The third condition, (B8), only reachable if K > 0, indicates both that the search can stop and that the problem has been solved to optimality. The search proceeds in this manner, updating the bounds and bisecting the new range at each step, and stops either with condition (B8) or when T = T_max = T_min + 1. Figure 6-a shows an example of such a binary search in practice.
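A sketch of the bisection rule of Eq. (B10), scanning integer timespans for the best balance between the two error-function sums; T̄ and σ are assumed to come from pre-characterization, and the instance values below are illustrative.

```python
# Sketch: pick the next query timespan T per Eq. (B10), i.e., so that the
# Gaussian-weighted probability mass of the optimal makespan is split
# roughly evenly between the two sides of the current range.

import math

def next_timespan(T_min, T_max, T_bar, sigma, K=1):
    def g(x):
        # Gaussian CDF-like term used on both sides of Eq. (B10)
        return math.erf((x + 0.5 - T_bar) / (sigma * math.sqrt(2)))
    best, best_gap = None, float("inf")
    for T in range(T_min + 1, T_max + 1):
        lhs = g(T_max) + g(T_min)
        rhs = g(T) + g(T - max(1, K))
        if abs(lhs - rhs) < best_gap:
            best, best_gap = T, abs(lhs - rhs)
    return best

print(next_timespan(20, 40, T_bar=30.0, sigma=3.0))  # starts near the mean
```

For a range symmetric around the mean, the rule reduces to querying T = T̄ first, as the text notes.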
The reason for using this guided search is that the average number of calls needed to find the optimal makespan is dramatically reduced with respect to a linear search on the range (T_min, T_max]. For a small variance, this optimized search is equivalent to a linear search that starts near T = T̄. A more spread-out distribution, on the other hand, will see a clear advantage due to the logarithmic, instead of linear, scaling of the search. In Figure 6-b, we compute this average number of calls as a function of N, θ, and K for N = M instances generated such that an operation's average execution time also scales with N. This last condition ensures that the variance of the makespan grows linearly with N as well, ensuring that the logarithmic behavior becomes evident for larger instances. For this calculation we use the worst case when updating T_max upon condition (B9) being met. We find that for the instances experimentally testable with the D-Wave Two device (see Figure 3-b of the main text), the expected number of calls to solve the problem is less than three (in the absence of pre-characterization it would be twice that), while for larger instances the size of Q scales logarithmically, as expected.

JSP bounds

The described binary search assumes that a lower bound T_min and an upper bound T_max are readily available. We cover their calculation for the sake of completeness. The simplest lower bounds are the job bound and the machine bound. The job bound is calculated by finding the total execution time of each job and keeping the largest of them, put simply

max_{n∈{1, ..., N}} Σ_{i=k_{n−1}}^{k_n} p_i,  (B11)

where we use the lexicographic index i for operations and where k_0 = 1. Similarly, we can define the machine bound as

max_{m∈{1, ..., M}} Σ_{i∈I_m} p_i,  (B12)

where I_m is the set of indices of all operations that need to run on machine m. Since these bounds are inexpensive to calculate, we can take the larger of the two.
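Both bounds are simple aggregations over the instance data; a sketch on a small hypothetical instance, where each job is an ordered list of (machine, execution time) pairs:

```python
# Sketch of the job bound (B11) and machine bound (B12).
# jobs[n] lists job n's operations as (machine, execution time) pairs;
# the instance below is illustrative.

def job_bound(jobs):
    # Longest total execution time over all jobs
    return max(sum(p for _, p in ops) for ops in jobs)

def machine_bound(jobs, M):
    # Heaviest total load over all machines
    load = [0] * M
    for ops in jobs:
        for m, p in ops:
            load[m] += p
    return max(load)

jobs = [[(0, 2), (1, 3)],
        [(1, 2), (2, 4)],
        [(0, 1), (2, 1)]]
print(job_bound(jobs), machine_bound(jobs, M=3))  # take the larger as T_min
```

Here the job bound (6) dominates the machine bound (5), so it would supply the trivial lower bound.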
An even better lower bound can be obtained using the iterated Carlier and Pinson (ICP) procedure described in the window-shaving subsection of Section III of the main text. We mentioned that the shaving procedure can show that a timespan does not admit a solution if a window closes completely. Using shaving for different timespans and performing a binary search, we can obtain the ICP lower bound in O(N² log(N) M² T_max log₂(T_max − T_min)), where T_min and T_max are some trivial lower and upper bounds, respectively, such as the ones described in this section. Given that the search assumes a strict bound, we need to decrease whichever lower bound we choose here by one. As for the upper bound, an excellent choice is provided, at some finite computational effort, by another classical algorithm, developed by Applegate and Cook [42]. The straightforward alternative is given by the total work of the problem,

Σ_{i∈{1, ..., k_N}} p_i.  (B13)

The solver's limitations can also serve to establish practical bounds for the search. For a given family of problems, if instances of a specific size can only be embedded with some acceptable probability for timespans smaller than T_max^embed, the search can be set with this limit; failure to find a solution will then end the search at T_max^embed, at which point the solver will need to report a failure or switch to another classical approach.

After these updates have been carried out on a per-machine basis, we propagate the new heads and tails using the precedence between the operations, by setting

r_{i+1} = max{r_{i+1}, r_i + p_i},  (C1)
q_i = max{q_i, q_{i+1} + p_{i+1}},  (C2)

for every pair of operations O_i and O_{i+1} that belong to the same job. After propagating the updates, we check again whether any ascendant-set updates can be made, and repeat the cycle until no more updates are found. In our tests, we use an implementation similar to the one described in [21] to perform the ascendant-set updates.
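The propagation step, Eqs. (C1)-(C2), is a forward pass over heads and a backward pass over tails within each job; a sketch for a single job's operation chain:

```python
# Sketch of Eqs. (C1)-(C2): propagate updated heads forward and tails
# backward along one job's ordered operations. heads[i], tails[i], p[i]
# are the head, tail, and execution time of the job's i-th operation.

def propagate(heads, tails, p):
    n = len(p)
    for i in range(n - 1):                        # (C1): forward pass
        heads[i + 1] = max(heads[i + 1], heads[i] + p[i])
    for i in range(n - 2, -1, -1):                # (C2): backward pass
        tails[i] = max(tails[i], tails[i + 1] + p[i + 1])
    return heads, tails

h, t = propagate([0, 1, 2], [5, 2, 0], [3, 2, 1])
print(h, t)  # heads tightened to [0, 3, 5]; tails already consistent
```

Running this after every round of per-machine updates keeps the processing windows mutually consistent before the next shaving pass.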
Algorithm 1 is pseudocode describing the shaving procedure. Here, the procedure UpdateMachine(i) updates the heads and tails for machine i in O(N log(N)), as described by Carlier and Pinson in [21]. It returns True if any updates were made, and False otherwise. PropagateWindows() is a procedure that iterates over the tasks and checks that Eqs. (C1) and (C2) hold.

Algorithm 1 (shaving procedure, fragment):
  for i ∈ machines do
    updated ← UpdateMachine(i) ∨ updated
  if updated then PropagateWindows()

For each repetition of the outermost loop of Algorithm 1, we know that there is an update on the windows, which means that we have removed at least one variable. Since there are at most N M T variables, the loop will run at most this many times. The internal for loop runs exactly M times and does work in O(N log(N)). Putting all of this together, the final complexity of the shaving procedure is O(N² M² T log(N)).

Classical algorithm implementation

Brucker et al.'s branch-and-bound method [35] remains widely used due to its state-of-the-art status on smaller JSP instances and its competitive performance on larger ones [43]. Furthermore, the original code is freely available through ORSEP [44]. No attempt was made at optimizing this code, and changes were made only to properly interface with our own code and to time the results. Martin and Shmoys' time-based approach [20] is less clearly defined, in the sense that no publicly available standard code could be found and a number of variants for both the shaving and the branch-and-bound strategy are described in the paper. As covered in the section on shaving, we have chosen the O(N log(N)) variants of the head and tail adjustments, the most efficient choice available. On the other hand, we have restricted our branch-and-bound implementation to the simplest strategy proposed, where each node branches between scheduling the next available operation (an operation that has not yet been assigned a starting time) immediately or delaying it.
Although technically correct, the same schedule can sometimes appear in both branches, because the search is not restricted to active schedules, and unwarranted idle times are sometimes considered. According to Martin and Shmoys, the search strategy can be modified to prevent such occurrences, but these changes are only summarily described, and we did not attempt to implement them. Other branching schemes are also proposed, which we did not consider for this work. One should be careful when surveying the literature for runtimes of a full-optimization version based on this decision-version solver: what is usually reported assumes the use of a good upper bound, such as the one provided by Applegate and Cook [42], and the runtime needed to obtain such bounds must be taken into account as well. It would be interesting to benchmark this decision solver in combination with our proposed optimized search, but this benchmarking we also leave for future work. Benchmarking of the classical methods was performed on an off-the-shelf Intel Core i7-930 processor clocked at 2.8 GHz.

FIG. 2 (caption; axis labels: number of machines M and number of jobs N, operation execution time distribution, optimal makespan): Histograms of optimal makespans T* for parametrized families of JSP instances with N = M, P_p on the y-axis, θ = 1 (yellow), and θ = 0.5 (purple). The distributions are histograms of occurrences for 1000 random instances, fitted with a Gaussian function of mean T̄. We note that the width of the distributions increases as the range of the execution times P_p increases, for fixed p̄. The mean and the variance are well fitted, respectively, by T̄ = A_T N p_min + B_T N p_max and σ = σ_0 + C_σ T̄ + A_σ p_min + B_σ p_max, with A_T = 0.67, B_T = 0.82, σ_0 = 0.7, A_σ = −0.03, B_σ = 0.43, and C_σ = 0.003.

FIG. 3 (caption fragment): a) View of an embedded JSP instance on NASA's D-Wave Two chip, where the 72 logical variables of the QUBO problem are embedded using 257 qubits. Each chain of qubits is colored to represent a logical binary variable determined by the embedding.
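Using the fitted coefficients quoted in the Fig. 2 caption, one can estimate the mean and spread of optimal makespans for a family before querying the annealer; we transcribe the fit formulas as printed in the caption, so the exact functional form should be treated as an assumption.

```python
# Sketch: pre-characterized mean and spread of optimal makespans for an
# N = M family, using the Fig. 2 fit:
#   T_mean = A_T * N * p_min + B_T * N * p_max
#   sigma  = sigma0 + C_s * T_mean + A_s * p_min + B_s * p_max
# Coefficient values are those reported in the caption.

A_T, B_T = 0.67, 0.82
SIGMA0, A_S, B_S, C_S = 0.7, -0.03, 0.43, 0.003

def makespan_stats(N, p_min, p_max):
    T_mean = A_T * N * p_min + B_T * N * p_max
    sigma = SIGMA0 + C_S * T_mean + A_S * p_min + B_S * p_max
    return T_mean, sigma

print(makespan_stats(N=6, p_min=0, p_max=2))  # e.g., the P_p = [0, 2] family
```

These two numbers are exactly what the guided binary search of Appendix B consumes as T̄ and σ.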
For clarity, active connections between the qubits are not shown. b) Embedding probability as a function of N = M for θ = 1 (similar results are observed for θ = 0.5). Solid lines refer to P_p = [1, 1] and dashed lines refer to P_p = [0, 2].

FIG. 4 (caption fragment): a) Number of repetitions required to solve H_T with the D-Wave Two with a 99% probability of success (see the appendix). The blue points indicate instances with θ = 1 and the yellow points correspond to θ = 0.5 (they are the same instances and runtimes used for Figure 3-c).

FIG. 5 (caption fragment; flowchart stages: I-II) ensemble pre-characterization (hardware), III) choice of mapping, IV) pre-processing, VI) embedding strategy, VII) running strategy, VIII) decoding and analysis): I-II) Appropriate choice of benchmarking and classical simulations is discussed in Section IV. III-IV)

This research was supported by 1QBit, Mitacs, NASA (Sponsor Award Number NNX12AK33A), and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IAA 145483, and by the AFRL Information Directorate under grant F4HBKC4162G001.

FIG. 6: a) View of a guided binary search required to identify the minimum makespan over a distribution. The fitted example distribution corresponds to N = M = 8, fitted to a Gaussian distribution as described in the main text. We assume K = 1. The first attempt queries H_26, the second H_29, and the third H_30 (the final call), following Eq. (B10). b) Average number of calls to the quantum annealer required by the binary search assuming Eq. (B10) (left panel) or assuming a uniform distribution of minimum makespans between trivial upper and lower bounds (right panel). Thick and dashed lines correspond to θ = 1 and θ = 0.5, respectively, and the numeric values associated with each color in the figure correspond to different values of K.
The operations' execution times are distributed uniformly with P_p = {0, . . . , N/2}.

Appendix B: Computational strategy

This appendix elaborates on the compilation methods and the quantum annealing implementation of a full optimization solver based on the decision version and a binary search, as outlined in the main text.

Compilation

The process of compiling, or embedding, an instance for a specific target architecture is a crucial step, given the locality of the programmable interactions on current quantum annealer architectures. During the graph-minor topological embedding, each vertex of the problem graph is mapped to a subset of connected vertices, or subgraph, of the hardware graph. These assignments must be such that the edges in the problem graph have at least one corresponding edge between the associated subgraphs in the hardware graph. Formally, the classical Hamiltonian of Eq. (6) is mapped to a quantum annealing Ising Hamiltonian on the hardware graph using the set of equations that follows. The spin operators s^σ_i are defined by setting s = 1 and using the Pauli matrices. The resulting spin variables σ^z_i = ±1, our qubits, are easily converted to the usual binary variables x_i ∈ {0, 1}, and the Ising Hamiltonian takes the standard form H_Ising = Σ_i h_i σ^z_i + Σ_{i<j} J_{ij} σ^z_i σ^z_j.

Appendix C: Classical algorithms

When designing a quantum annealing solver, a survey of classical methods provides much more than a benchmark for comparison and performance. Classical algorithms can sometimes be repurposed as useful pre-processing techniques, as demonstrated with variable pruning. We provide a quick review of the classical methods we use for this work, as well as some details on the classical solvers to which we compare.

Variable pruning

Eliminating superfluous variables can greatly help mitigate the constraints on the number of qubits available.
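The binary-search wrapper over the decision version, outlined at the start of Appendix B, can be sketched as follows. This is our own illustration, not the paper's code: `feasible` is a hypothetical stand-in for a call to the annealer on the decision Hamiltonian H_T, and the plain bisection shown here ignores the distribution-guided query ordering of Eq. (B10).

```python
# Minimal full-optimization wrapper around a decision-version solver:
# find the smallest makespan T for which "is there a schedule with
# makespan <= T?" is answered yes.  `feasible` is a stand-in oracle.

def minimum_makespan(feasible, lower, upper):
    """Smallest T in [lower, upper] with feasible(T) True.

    Assumes monotonicity: feasible(T) implies feasible(T + 1).
    """
    while lower < upper:
        mid = (lower + upper) // 2
        if feasible(mid):
            upper = mid        # a schedule fits; tighten from above
        else:
            lower = mid + 1    # infeasible; the optimum is larger
    return lower

# Toy oracle whose optimum is 30, echoing the H_26/H_29/H_30 example:
print(minimum_makespan(lambda T: T >= 30, 0, 64))  # -> 30
```

With trivial bounds the number of oracle calls grows as log2(upper − lower); the guided search of Eq. (B10) aims to reduce the expected number of calls by querying near the mode of the makespan distribution first.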
Several elimination rules are available, and we explain below in more detail the procedure we used for our tests.

The first step in reducing the processing windows is to eliminate unneeded variables by considering the precedence constraints between the operations in a job, something we refer to as simple variable pruning. We define r_i as the sum of the execution times of all operations preceding operation O_i. Similarly, we define q_i as the sum of the execution times of all operations following O_i. The numbers r_i and q_i are referred to as the head and tail of operation O_i, respectively. An operation cannot start before its head and must leave enough time after finishing to fit its tail, so the window of possible start times, the processing window, is [r_i, T − p_i − q_i].

If we consider the one-machine subproblems induced on each machine separately, we can update the heads and tails of each operation and reduce the processing windows further. For example, recalling that I_j is the set of indices of operations that have to run on machine M_j, we suppose that a, b ∈ I_j are such that r_a + p_a + p_b + q_b > T. Then O_a must be run after O_b. This means that we can update r_a with r_a = max{r_a, r_b + p_b}. We can apply similar updates to the tails because of the symmetry between heads and tails. These updates are known in the literature as immediate selections.

Better updates can be performed by using ascendant sets, introduced by Carlier and Pinson in [36]. A subset X ⊂ I_j is an ascendant set of c ∈ I_j if c ∉ X and

min_{a ∈ X∪{c}} r_a + Σ_{a ∈ X∪{c}} p_a + min_{a ∈ X} q_a > T.

If X is an ascendant set of c, then we can update r_c with

r_c = max{ r_c, max_{X′ ⊆ X} ( min_{a ∈ X′} r_a + Σ_{a ∈ X′} p_a ) }.

Once again, the symmetry implies that similar updates can be applied to the tails. Carlier and Pinson in [21] provide an algorithm that performs all of the ascendant-set updates on M_j.
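A minimal sketch of the simple pruning and immediate-selection updates above (our own code, not the paper's implementation; the instance is a toy one and the names are illustrative):

```python
# Heads, tails, and processing windows for one job, plus the
# immediate-selection update r_a = max(r_a, r_b + p_b) when operation
# O_a cannot precede O_b on their shared machine within makespan T.

def heads_tails(p):
    """Head r[i] (work before O_i) and tail q[i] (work after O_i)."""
    n = len(p)
    r = [sum(p[:i]) for i in range(n)]
    q = [sum(p[i + 1:]) for i in range(n)]
    return r, q

def processing_window(p_i, r_i, q_i, T):
    """Start times allowed for O_i: [r_i, T - p_i - q_i], or None."""
    hi = T - p_i - q_i
    return (r_i, hi) if r_i <= hi else None

def immediate_selection(r, q, p, a, b, T):
    """If scheduling O_a before O_b overruns T, force O_a after O_b."""
    if r[a] + p[a] + p[b] + q[b] > T:
        r[a] = max(r[a], r[b] + p[b])
    return r

r, q = heads_tails([3, 2, 4])        # one job, three operations in order
print(r, q)                          # -> [0, 3, 5] [6, 4, 0]
print(processing_window(2, r[1], q[1], T=10))  # -> (3, 4)
```

The same routines applied with heads and tails swapped give the symmetric tail updates mentioned in the text.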
References

[1] M. W. Johnson, P. Bunyk, F. Maibaum, E. Tolkacheva, A. J. Berkley, E. M. Chapple, R. Harris, J. Johansson, T. Lanting, I. Perminov, et al., Superconductor Science and Technology 23, 065004 (2010).
[2] S. Boixo, T. F. Rønnow, S. V. Isakov, Z. Wang, D. Wecker, D. A. Lidar, J. M. Martinis, and M. Troyer, Nature Physics 10, 218 (2014).
[3] T. F. Rønnow, Z. Wang, J. Job, S. Boixo, S. V. Isakov, D. Wecker, J. M. Martinis, D. A. Lidar, and M. Troyer, Science 345, 420 (2014).
[4] C. C. McGeoch and C. Wang, in Proceedings of the ACM International Conference on Computing Frontiers, CF '13 (ACM, New York, NY, USA, 2013), pp. 23:1–23:11.
[5] V. N. Smelyanskiy, E. G. Rieffel, S. I. Knysh, C. P. Williams, M. W. Johnson, M. C. Thom, W. G. Macready, and K. L. Pudenz, arXiv:1204.2821 [quant-ph] (2012).
[6] E. G. Rieffel, D. Venturelli, B. O'Gorman, M. B. Do, E. M. Prystay, and V. N. Smelyanskiy, Quantum Information Processing 14, 1 (2015).
[7] I. Hen, J. Job, T. Albash, T. F. Rønnow, M. Troyer, and D. Lidar, arXiv:1502.01663 [quant-ph] (2015).
[8] H. G. Katzgraber, F. Hamze, Z. Zhu, A. J. Ochoa, and H. Munoz-Bauza, Physical Review X 5, 031026 (2015).
[9] E. Rieffel, D. Venturelli, M. Do, I. Hen, and F. J., Parametrized families of hard planning problems from phase transitions (2014).
[10] B. O'Gorman, E. G. Rieffel, D. Minh, and D. Venturelli, in ICAPS 2015, Research Workshop on Constraint Satisfaction Techniques for Planning and Scheduling Problems (2015).
[11] D. Venturelli, S. Mandrà, S. Knysh, B. O'Gorman, R. Biswas, and V. Smelyanskiy, Physical Review X 5, 031040 (2015).
[12] B. O'Gorman, R. Babbush, A. Perdomo-Ortiz, A. Aspuru-Guzik, and V. Smelyanskiy, The European Physical Journal Special Topics 224, 163 (2015).
[13] W.-Y. Ku and J. C. Beck (2014).
[14] A. Jain and S. Meeran, European Journal of Operational Research 113, 390 (1999).
[15] E. Boros and P. L. Hammer, Discrete Applied Mathematics 123, 155 (2002).
[16] C.-C. Cheng and S. F. Smith, Annals of Operations Research 70, 327 (1997).
[17] A. Garrido, M. A. Salido, F. Barber, and M. López, in Proc. ECAI-2000 Workshop on New Results in Planning, Scheduling and Design (PuK2000) (2000), pp. 44–49.
[18] H. M. Wagner, Naval Research Logistics Quarterly 6, 131 (1959).
[19] A. S. Manne, Operations Research 8, 219 (1960).
[20] P. Martin and D. B. Shmoys, in Integer Programming and Combinatorial Optimization, edited by W. H. Cunningham, S. McCormick, and M. Queyranne, Lecture Notes in Computer Science, vol. 1084 (Springer Berlin Heidelberg, 1996), pp. 389–403.
[21] J. Carlier and E. Pinson, European Journal of Operational Research 78, 146 (1994).
[22] L. Péridy and D. Rivreau, European Journal of Operational Research 164, 24 (2005).
[23] Y. Caseau and F. Laburthe, in Proc. of the 11th International Conference on Logic Programming (1994).
[24] W. M. Kaminsky and S. Lloyd, in Quantum Computing and Quantum Bits in Mesoscopic Systems, edited by A. J. Leggett, B. Ruggiero, and P. Silvestrini (Springer US, 2004), pp. 229–236.
[25] V. Choi, Quantum Information Processing 10, 343 (2011).
[26] I. Adler, F. Dorn, F. V. Fomin, I. Sau, and D. M. Thilikos, in Algorithm Theory – SWAT 2010, edited by H. Kaplan, Lecture Notes in Computer Science, vol. 6139 (Springer Berlin Heidelberg, 2010), pp. 322–333.
[27] J. Cai, W. G. Macready, and A. Roy, arXiv:1406.2741 [quant-ph] (2014).
[28] A. Perdomo-Ortiz, J. Fluegemann, R. Biswas, and V. N. Smelyanskiy, arXiv:1503.01083 [quant-ph] (2015).
[29] S. Boixo, V. N. Smelyanskiy, A. Shabani, S. V. Isakov, M. Dykman, V. S. Denchev, M. Amin, A. Smirnov, M. Mohseni, and H. Neven, arXiv:1411.4036 [quant-ph] (2014).
[30] V. N. Smelyanskiy, D. Venturelli, A. Perdomo-Ortiz, S. Knysh, and M. I. Dykman, arXiv:1511.02581 [quant-ph] (2015).
[31] D. Venturelli, D. J. J. Marchand, and G. Rojo, arXiv:1506.08479 [quant-ph] (2015).
[32] A. Perdomo-Ortiz, B. O'Gorman, J. Fluegemann, R. Biswas, and V. N. Smelyanskiy, arXiv:1503.05679 [quant-ph] (2015).
[33] A. D. King and C. C. McGeoch, arXiv:1410.2628 [cs.DS] (2014).
[34] K. L. Pudenz, T. Albash, and D. A. Lidar, Nature Communications 5 (2014).
[35] P. Brucker, B. Jurisch, and B. Sievers, Discrete Applied Mathematics 49, 107 (1994).
[36] J. Carlier and E. Pinson, Annals of Operations Research 26, 269 (1991).
[37] J. C. Beck, T. K. Feng, and J.-P. Watson, INFORMS Journal on Computing 23, 1 (2011).
[38] M. J. Streeter and S. F. Smith, Journal of Artificial Intelligence Research 26, 247 (2006).
[39] W. Vinci, T. Albash, G. Paz-Silva, I. Hen, and D. A. Lidar, Physical Review A 92, 042310 (2015).
[40] A. D. King, T. Lanting, and R. Harris, arXiv:1502.02098 [quant-ph] (2015).
[41] V. Choi, Quantum Information Processing 7, 193 (2008).
[42] D. Applegate and W. Cook, ORSA J. Comput. 3, 149 (1991).
[43] W. Brinkkötter and P. Brucker, Journal of Scheduling 4, 53 (2001).
[44] B. P., J. B., and S. B., European Journal of Operational Research 57, 132 (1992).
arXiv:1508.03778 · doi:10.7146/math.scand.a-23688
EXISTENCE OF CONTINUOUS FUNCTIONS THAT ARE ONE-TO-ONE ALMOST EVERYWHERE

Alexander J. Izzo

15 Aug 2015

Dedicated to the memory of Mary Ellen Rudin

Abstract. It is shown that given a metric space X and a σ-finite positive regular Borel measure µ on X, there exists a bounded continuous real-valued function on X that is one-to-one on the complement of a set of µ measure zero.

2000 Mathematics Subject Classification. Primary 54C30; Secondary 26E99, 46E30, 54E40.

Introduction

In [2] the author and Bo Li studied the question of how many functions are needed to generate an algebra dense in various L^p-spaces. In connection with this, they proved [2, Theorem 1.10] that on every smooth manifold-with-boundary there exists a bounded continuous real-valued function that is one-to-one on the complement of a set of measure zero. It was suggested by Lee Stout that this result would generalize to a metric space context. In this paper we show that this is indeed the case. The author would like to thank Stout for sharing his insight.

We state the result using the following terminology introduced in [2].

Definition 1.1. We call a map F defined on a measure space X one-to-one almost everywhere if there is a subset E of X of measure zero such that the restriction of F to X \ E is one-to-one.

Theorem 1.2. Let X be a metric space and µ be a σ-finite positive regular Borel measure on X. Then there exists a bounded continuous real-valued function on X that is one-to-one almost everywhere.

The boundedness of the function is not really important; given an unbounded function with the other properties, we can obtain a bounded one by post-composing with a homeomorphism of R onto the interval (−1, 1). The point of the theorem is that the function is continuous everywhere and one-to-one almost everywhere.
Note that the metric space X can be of arbitrarily large cardinality, but the set of full measure on which the function is one-to-one can have cardinality at most that of the continuum. Note also that the theorem becomes false if the σ-finiteness condition is dropped, as is exemplified by the case of counting measure on a discrete space with cardinality greater than that of the continuum.

The result about continuous one-to-one almost everywhere functions in [2] was used there to show that on every Riemannian manifold-with-boundary M of finite volume there exists a bounded continuous real-valued function f such that the set of polynomials in f is dense in L^p(M) for all 1 ≤ p < ∞ [2, Theorem 1.2]. The argument given there can now be repeated using Theorem 1.2 above in place of [2, Theorem 1.10] to establish the following more general result. This result also strengthens [2, Theorem 1.1].

Theorem 1.3. Let X be a metric space and µ be a finite positive regular Borel measure on X. Then there exists a bounded continuous real-valued function f on X such that the set of polynomials in f is dense in L^p(µ) for all 1 ≤ p < ∞.

Proof of Theorem 1.2

We begin with several lemmas. The first of these is probably well-known, and it appears with proof as [2, Lemma 3.1]. Throughout the paper, by "a Cantor set" we mean any space that is homeomorphic to the standard middle-thirds Cantor set.

Lemma 2.1. If C is a Cantor set and U is an open cover of C, then C can be written as a finite union C = C_1 ∪ . . . ∪ C_N of disjoint Cantor sets C_1, . . . , C_N each of which lies in some member of U.

Lemma 2.2. Let X be a topological space and µ be a σ-finite positive regular Borel measure on X. Then there exists a countable collection {K_n} of disjoint compact sets in X such that µ(X \ ∪_n K_n) = 0.

Proof. By hypothesis X = ∪_{n=1}^∞ X_n with µ(X_n) < ∞ for each n, and without loss of generality the X_n can be taken to be disjoint.
For each fixed n, the regularity of µ enables us to inductively choose disjoint compact sets X_n^1, X_n^2, . . . contained in X_n such that µ(X_n \ (X_n^1 ∪ . . . ∪ X_n^j)) < 1/j for each j = 1, 2, . . . . Then µ(X_n \ ∪_{j=1}^∞ X_n^j) = 0. Hence {X_n^j}_{n,j} is a countable collection of disjoint compact sets in X such that µ(X \ ∪_{n,j} X_n^j) = 0.

Lemma 2.3. Let X be a (nonempty) compact metric space without isolated points, and let µ be a positive regular Borel measure on X. Fix ε > 0 and δ > 0. Then for every sufficiently large positive integer r, there exists a collection {U_1, . . . , U_r} of nonempty open sets in X with disjoint closures such that µ(X \ (U_1 ∪ . . . ∪ U_r)) < ε and diameter(U_j) < δ for every j = 1, . . . , r.

Proof. Since X is a compact metric space, X is totally bounded. Thus X can be covered by finitely many balls A_1, . . . , A_s of diameters less than δ. Set E_1 = A_1 and E_j = A_j \ (A_1 ∪ . . . ∪ A_{j−1}) for each j = 2, . . . , s. Then the E_j are disjoint and ∪_{j=1}^s E_j = X. By the regularity of µ, for each j = 1, . . . , s, we can choose a compact set K_j contained in E_j such that µ(E_j \ K_j) < ε/s. Then the sets K_1, . . . , K_s are disjoint and have diameters less than δ. Hence we can choose open neighborhoods U_1, . . . , U_s of K_1, . . . , K_s, respectively, so that the closures of the U_j are disjoint and diameter(U_j) < δ for every j = 1, . . . , s. Then also

µ(X \ (U_1 ∪ . . . ∪ U_s)) ≤ µ(X \ (K_1 ∪ . . . ∪ K_s)) = Σ_{j=1}^s µ(E_j \ K_j) < ε.

The above argument establishes that the desired nonempty open sets can be obtained for some positive integer r ≤ s. To show that r can be taken arbitrarily large, it suffices, by induction, to show that r can be increased by 1. To this end, suppose that U_1, . . . , U_r are as in the statement of the lemma. Let γ = ε − µ(X \ (U_1 ∪ . . . ∪ U_r)) > 0, and choose a point p ∈ U_r.
Because X has no isolated points and µ is regular, there is a nonempty compact set K in U_r \ {p} such that µ((U_r \ {p}) \ K) < γ. Choose open neighborhoods U′_r and U′_{r+1} of {p} and K, respectively, contained in U_r with disjoint closures. Then U_1, . . . , U_{r−1}, U′_r, U′_{r+1} is a collection of r + 1 nonempty open sets with the required properties.

Lemma 2.4. Given a (nonempty) compact metric space X without isolated points, a positive regular Borel measure µ on X, and ε > 0, there exists a Cantor set C in X such that µ(X \ C) < ε.

A result close to Lemma 2.4 appears in the paper [1] by Bernard Gelbaum. (The author would like to thank Bo Li for pointing this out.) Lemma 2.4 is more general than the result in [1], since in [1] the measure is required to be nonatomic and there is no such requirement in Lemma 2.4. The author was surprised to find that the proof in [1] is very different from the one given here.

Proof. By the preceding lemma, there are nonempty open sets U_1, . . . , U_{r_1} (for some r_1) with disjoint closures such that µ(X \ (U_1 ∪ . . . ∪ U_{r_1})) < ε/2 and diameter(U_{j_1}) < 1 for every j_1 = 1, . . . , r_1. Each closure Ū_{j_1} is a compact set without isolated points, so we can apply the preceding lemma to each Ū_{j_1} to obtain nonempty relatively open subsets V_{j_1,j_2} for j_1 = 1, . . . , r_1 and j_2 = 1, . . . , r_2 (for some r_2) with disjoint closures such that (i) V̄_{j_1,j_2} ⊂ Ū_{j_1}, (ii) µ(Ū_{j_1} \ (V_{j_1,1} ∪ . . . ∪ V_{j_1,r_2})) < ε/(2^2 r_1), and (iii) diameter(V_{j_1,j_2}) < 1/2. Setting U_{j_1,j_2} = V_{j_1,j_2} ∩ U_{j_1}, we obtain nonempty open subsets of X such that (i′) Ū_{j_1,j_2} ⊂ U_{j_1}, (ii′) µ(U_{j_1} \ (U_{j_1,1} ∪ . . . ∪ U_{j_1,r_2})) < ε/(2^2 r_1), and (iii′) diameter(U_{j_1,j_2}) < 1/2.

In general, assume that we have chosen, for each s = 1, . . . , k, nonempty open subsets U_{j_1,...,j_s} of X for each j_1 = 1, . . . , r_1; . . . ; j_s = 1, . . . , r_s (for some r_1, . . . , r_s) with disjoint closures such that (i″) Ū_{j_1,...,j_s} ⊂ U_{j_1,...,j_{s−1}}, (ii″) µ(U_{j_1,...,j_{s−1}} \ (U_{j_1,...,j_{s−1},1} ∪ . . . ∪ U_{j_1,...,j_{s−1},r_s})) < ε/(2^s r_{s−1}), and (iii″) diameter(U_{j_1,...,j_s}) < 1/s. Each Ū_{j_1,...,j_k} is a compact set without isolated points to which we can apply the procedure above to obtain open sets U_{j_1,...,j_{k+1}} for each j_1 = 1, . . . , r_1; . . . ; j_{k+1} = 1, . . . , r_{k+1} (for some r_{k+1}) with disjoint closures such that conditions (i″)–(iii″) hold with s replaced by k + 1. Thus by induction the construction can be continued.

Now consider the sets K_s = ∪_{j_1=1}^{r_1} · · · ∪_{j_s=1}^{r_s} Ū_{j_1,...,j_s}. These are nonempty compact sets such that K_1 ⊃ K_2 ⊃ · · ·, so their intersection C = ∩_{s=1}^∞ K_s is nonempty. Moreover, one easily verifies that µ(X \ C) < ε.

Finally we claim that C is a Cantor set. To verify this, note that for each sequence (j_1, j_2, . . .) ∈ Π_{k=1}^∞ {1, . . . , r_k} we have U_{j_1} ⊃ U_{j_1,j_2} ⊃ U_{j_1,j_2,j_3} ⊃ · · ·, so the intersection of these sets is nonempty, and because the diameters of these sets go to zero, the intersection consists of a single point. Thus there is a well-defined map F : Π_{k=1}^∞ {1, . . . , r_k} → C sending the sequence (j_1, j_2, . . .) to the point in the intersection. One easily verifies that F is a bijection by using that, for each fixed s, the sets U_{j_1,...,j_s} (as j_1, . . . , j_s vary) are disjoint. One easily verifies that F is continuous using that the diameters of the sets U_{j_1,...,j_s} go to zero as s → ∞. Hence, by compactness, F is a homeomorphism. Thus since Π_{k=1}^∞ {1, . . . , r_k} is a Cantor set, so is C.

Lemma 2.5. Given a (nonempty) compact metric space X without isolated points and a positive regular Borel measure µ on X, there exists an at most countable collection {C_n} of disjoint Cantor sets in X such that µ(X \ ∪_n C_n) = 0.

Proof. We construct the sets C_n inductively.
By the preceding lemma, there exists a Cantor set C_1 in X such that µ(X \ C_1) < 1. In general, assume that disjoint Cantor sets C_1, . . . , C_k have been chosen such that µ(X \ (C_1 ∪ . . . ∪ C_k)) < 1/2^k. If in fact µ(X \ (C_1 ∪ . . . ∪ C_k)) = 0, then we are done. Otherwise, by the regularity of µ, there is an open neighborhood U ⊊ X of C_1 ∪ . . . ∪ C_k such that µ(U \ (C_1 ∪ . . . ∪ C_k)) < 1/2^{k+2}. Now choose an open neighborhood V of C_1 ∪ . . . ∪ C_k such that V̄ ⊂ U. Let Y = X \ V. Then Y is a nonempty compact set disjoint from C_1 ∪ . . . ∪ C_k and X = U ∪ Y. Because Y is the closure of the open set X \ V̄, we see that Y has no isolated points. Therefore, the preceding lemma gives that there is a Cantor set C_{k+1} in Y such that µ(Y \ C_{k+1}) < 1/2^{k+2}. Since C_{k+1} ⊂ Y, we know that C_{k+1} is disjoint from the sets C_1, . . . , C_k. Since X = U ∪ Y we have

µ(X \ (C_1 ∪ . . . ∪ C_{k+1})) ≤ µ(U \ (C_1 ∪ . . . ∪ C_k)) + µ(Y \ C_{k+1}) < 1/2^{k+2} + 1/2^{k+2} = 1/2^{k+1}.

Thus by induction we obtain a sequence of disjoint Cantor sets C_1, C_2, . . . such that µ(X \ (C_1 ∪ . . . ∪ C_j)) < 1/2^j for every j. Hence µ(X \ ∪_{n=1}^∞ C_n) = 0.

Lemma 2.6. Given a metric space X and a σ-finite positive regular Borel measure µ on X, there exist an at most countable collection {C_n} of disjoint Cantor sets in X and an at most countable set S in X disjoint from each C_n such that µ(X \ ((∪_n C_n) ∪ S)) = 0.

Proof. By Lemma 2.2 there exists a countable collection {K_n} of disjoint compact sets in X such that µ(X \ ∪_n K_n) = 0. By the Cantor–Bendixson theorem [3, Theorem 2A.1], each of the compact sets K_n is a disjoint union of a perfect set P_n and an at most countable set S_n. By Lemma 2.5 each nonempty perfect set P_n contains an at most countable collection {K_n^j}_j of disjoint Cantor sets such that µ(P_n \ ∪_j K_n^j) = 0.
Now {K_n^j}_{n,j} is an at most countable collection of disjoint Cantor sets, the set S = ∪_n S_n is at most countable and disjoint from each K_n^j, and µ(X \ ((∪_{n,j} K_n^j) ∪ S)) = 0.

With these preliminaries, we can now prove Theorem 1.2 by essentially repeating the proof of [2, Theorem 1.10]. Minor changes are required on account of the (possible) presence of the at most countable set S in Lemma 2.6. The proof will be carried out as if the collection {C_n} and the set S in Lemma 2.6 are both countably infinite. If either is actually finite, then in the inductive procedure below one simply ceases to carry out the part of the construction that no longer makes sense once the collection {C_n}, or the set S, has been exhausted. If both the collection {C_n} and the set S are finite, then the procedure terminates, but in that case the result is rather trivial, so the construction below is not really needed then.
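Before the main proof, a concrete instance of Lemma 2.4 may be helpful: for X = [0, 1] with Lebesgue measure, the lemma is realized by the classical "fat" Cantor set. The sketch below is our own illustration, not part of the original argument; at stage s it deletes an open middle gap of length ε/4^s from each of the 2^(s−1) surviving intervals, so the total length removed is Σ_s 2^(s−1) ε/4^s = ε/2 < ε, while the diameters of the pieces shrink to zero.

```python
# Numerical sketch of a "fat" Cantor set C in [0, 1]:
# Lebesgue measure(C) >= 1 - eps/2 > 1 - eps, as in Lemma 2.4 with X = [0, 1].

def fat_cantor_intervals(eps, stages):
    """Closed intervals whose intersection over all stages is C."""
    intervals = [(0.0, 1.0)]
    for s in range(1, stages + 1):
        gap = eps / 4 ** s                  # middle gap removed this stage
        nxt = []
        for a, b in intervals:
            mid = (a + b) / 2
            nxt.append((a, mid - gap / 2))  # left remnant
            nxt.append((mid + gap / 2, b))  # right remnant
        intervals = nxt
    return intervals

ivs = fat_cantor_intervals(eps=0.5, stages=12)
measure = sum(b - a for a, b in ivs)
print(round(measure, 6))  # -> 0.750061, above 1 - eps/2 = 0.75
```

After k stages the total length removed is (ε/2)(1 − 2^(−k)), so the remaining measure never drops to 1 − ε, matching the bound µ(X \ C) < ε of the lemma.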
Hence f is one-to-one on ( C j ) ∪ S. Thus it suffices to construct a sequence of functions satisfying conditions (i)-(iii). We will construct the sequence of functions f n by induction. For the purpose of carrying out the induction we will also require the additional condition that for each n (iv) { f n (C 1 ), . . . , f n (C n ) } is a collection of disjoint Cantor sets in [0, 1]. We begin by defining f 1 . Choose a Cantor set C 1 in [0, 1] and a point y 1 in [0, 1] \ C 1 . Choose a homeomorphism g 1 of C 1 onto C 1 . By the Tietze extension theorem, there is an extension of g 1 to a continuous function of X into [0, 1] that maps x 1 to y 1 . Let f 1 be the extension. Now to carry out the induction, assume that functions f 1 , . . . , f k have been defined so that conditions (i)-(iv) hold for those values of n for which they are meaningful. We wish to define f k+1 . By the continuity of f k , there is an open cover U of C k+1 such that for each member U of U we have that f k (U) is contained in an interval of length 1/2 k . By Lemma 2.1 we can write C k+1 as a finite union C k+1 = C 1 k+1 ∪ . . . ∪ C N k+1 of disjoint Cantor sets C 1 k+1 , . . . , C N k+1 each of which is contained in some member of U . Then for each j = 1, . . . , N, the set f k (C j k+1 ) is contained in an interval I j k+1 ⊂ [0, 1] of length 1/2 k . Since f k (C 1 ), . . . , f k (C k ) are disjoint Cantor sets, their union is also a Cantor set and in particular has empty interior in [0, 1]. Consequently, we can choose disjoint Cantor sets C 1 k+1 , . . . , C N k+1 with C j k+1 contained in I j k+1 \ f k (C 1 )∪. . .∪f k (C k )∪{f k (x 1 ), . . . , f k (x k )} for each j, and we can choose a point y k+1 in [0, 1] \ f k (C 1 ) ∪ . . . ∪ f k (C k ) ∪ {f k (x 1 ), . . . , f k (x k )} ∪ C 1 k+1 ∪ . . . ∪ C N k+1 with |f k (x k+1 ) − y k+1 | < 1/2 k . Choose a homeomorphism g j k+1 of C j k+1 onto C j k+1 for each j, and then define g k+1 on C 1 ∪ . . . ∪ C k+1 ∪ {x 1 , . . . 
, x k+1 } by g k+1 (x) =      f k (x) if x ∈ C 1 ∪ . . . ∪ C k ∪ {x 1 , . . . , x k } g j k+1 (x) if x ∈ C j k+1 (j = 1, . . . , N) y k+1 if x = x k+1 Then g k+1 is a homeomorphism of C 1 ∪ . . . ∪ C k+1 ∪ {x 1 , . . . , x k+1 } onto f (C 1 ) ∪ . . . ∪ f (C k ) ∪ C 1 k+1 ∪ . . . ∪ C N k+1 ∪ {y 1 , . . . , y k+1 } taking C k+1 onto C 1 k+1 ∪ . . . ∪ C N k+1 . Note that sup |f k (x) − g k+1 (x)| : x ∈ C 1 ∪ . . . ∪ C k+1 ∪ {x 1 , . . . , x k+1 } ≤ 1/2 k since for each j both f k (C j k+1 ) and g k+1 (C j k+1 ) are contained in the interval I j k+1 of length 1/2 k and |f k (x k+1 ) − y k+1 | < 1/2 k . By the Tietze extension theorem, there is a continuous function h k+1 on X that agrees with f k − g k+1 on C 1 ∪ . . . ∪ C k+1 ∪ {x 1 , . . . , x k+1 } and satisfies h k+1 ∞ ≤ 1/2 k . Define a function f k+1 on X by f k+1 (x) =      f k (x) − h k+1 (x) if 0 ≤ f k (x) − h k+1 (x) ≤ 1 0 if f k (x) − h k+1 (x) ≤ 0 1 if f k (x) − h k+1 (x) ≥ 1 Then f k+1 is a continuous functions from X into [0, 1] such that f k+1 = g k+1 on C 1 ∪ . . . ∪ C k+1 ∪ {x 1 , . . . , x k+1 } and f k+1 − f k ∞ ≤ 1/2 k . It follows that f 1 , . . . , f k+1 satisfy the required conditions (i)-(iv) for those values of n for which the conditions are meaningful. Therefore, by induction we obtain the desired sequence (f n ), and the proof is complete. Acknowledgement:The author thanks the referee for reading the paper especially carefully and making several helpful comments that led to improvements. Cantor sets in metric measure spaces. B R Gelbaum, Proc. Amer. Math. Soc. 24B. R. Gelbaum, Cantor sets in metric measure spaces, Proc. Amer. Math. Soc. 24 (1970), 341-343. Generators for algebras dense in L p -spaces. A J Izzo, B Li, Studia Math. 217A. J. Izzo and B. Li, Generators for algebras dense in L p -spaces, Studia Math. 217 (2013), 243-263. Descriptive Set Theory. Y N Moschovakis, Studies in Logic and the Foundations of Mathematics. Amsterdam-New YorkNorth-Holland Publishing Co100Y. N. 
Moschovakis, Descriptive Set Theory, Studies in Logic and the Foun- dations of Mathematics, 100, North-Holland Publishing Co., Amsterdam- New York, 1980.
First submission: PRE 26Oct2016 (arXiv:1707.05988)

On the Dynamical Foundation of Multifractality

Korosh Mahmoodi (Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, Texas 76203-1427, USA), Bruce J. West (Information Sciences Directorate, Army Research Office, Research Triangle Park, NC), Paolo Grigolini (Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, Texas 76203-1427, USA)

Keywords: Transfer of Multifractality; Complexity matching; Crucial events; Ergodicity breaking; Multifractal metronome; Multifractal Decision Making model

The crucial aspect of this demonstration is the discovery of renewal events, hidden in the computed dynamics of a multifractal metronome, which enables the replacement of the phenomenon of strong anticipation with a time-delayed cross-correlation between the driven and the driving metronome. We establish that the phenomenon of complexity matching, which is the theme of an increasing number of research groups, has two distinct measures. One measure is the sensitivity of a complex system to environmental multi-fractality; another is the level of information transfer between two complex networks at criticality. The cross-correlation function is evaluated in the ergodic long-time limit, but its delayed maximal value is the signature of information transfer occurring in the non-ergodic short-time regime. It is shown that a more complex system transfers its multifractality to a less complex system while the reverse case is not possible.

I. INTRODUCTION

The central role of complexity in understanding nonlinear dynamic phenomena has become increasingly evident over the last quarter century, whether the scientific focus is on ice melting across the car windshield, stock prices plummeting in a market crash, or the swarming of insects [1].
It is remarkable, given its importance in modulating the behavior of dynamic phenomena, from the cooperative behavior observed in herding, schooling, and consensus building, to the behavior of the lone individual responding to motor control response tasks, that defining complexity has been so elusive. As Delignières and Marmelat [2] point out, a complex system consists of a large number of infinitely entangled elements, which cannot be decomposed into elementary components. They go on to provide an elegant, if truncated, historical review of complexity, along with its modern connection to fluctuations and randomness. They provide a working definition for complexity, as done earlier by West et al. [1], as a balance between regularity and randomness.

A. Complexity management

In order to sidestep the impasse of providing an absolute definition of complexity, West et al. [1] introduced the complexity matching effect (CME). This effect details how one complex network responds to an excitation by a second complex network, as a function of the mismatch in the measures of complexity of the two networks. An erratic time series generated by a complex network hosts crucial events, namely events characterized by the following properties. The time distance τ between two consecutive events has a probability density function (PDF), ψ(τ), that is inverse power law (IPL) with IPL index µ < 3. Different pairs of consecutive times correspond to time distances with no correlation (the renewal property). The measure of complexity is taken to be the IPL index µ. Aquino et al. [3] show that a complex network S with IPL index µ < 2 has no characteristic time scale, and in the long-time limit is insensitive to a perturbation by a complex network with a finite time scale, including one having oscillatory dynamics.

* Corresponding author: [email protected]
It is important to notice that when µ < 2, the first moment of ψ(τ) is divergent, thereby generating a condition of perennial aging, while the condition µ < 3, making the second moment of ψ(τ) divergent, generates non-stationary fluctuations that become stationary in the long-time limit. The spectrum of these fluctuations in the region 2 < µ < 3 can be evaluated using the ordinary Wiener-Khintchine theorem, whereas the region of perennial aging, µ < 2, requires the adoption of a generalized version of this theorem [4]. The network S is expected to be sensitive to perturbations having the same IPL index. This observation generated the plausible conjecture that a complex network, with a given temporal complexity, is especially sensitive to the influence of a network with the same temporal complexity, this being, in fact, a manifestation of CME. The crucial events are generated by complex systems at criticality [5]. On the basis of this property, Turalska et al. [6] afforded strong numerical support to the CME, showing that a network at criticality is maximally sensitive to the influence of an identical network also at criticality. In this case, the IPL index of both the perturbed network, µ_S, and the IPL index of the perturbing network, µ_P, must be identical, because the perturbing and the perturbed network are identical systems in the same condition, that being the criticality condition. It is known [7] that at criticality µ_S = µ_P = 1.5, thereby implying that the two systems share the same temporal complexity, with the same lack of a finite time scale. The CME was subsequently generalized to the principle of complexity management (PCM), where the network response was determined when both the perturbing and responding networks have indices in the interval 1 < µ < 3.
This latter condition was studied by means of ensemble averages [3,8] and time averages [9], leading to the discovery that in the region of perennial aging 1 < µ < 2 the evaluation of complexity management requires special treatment. This conclusion is based on the observation that renewal events are responsible for perennial aging, with the occurrence of the renewal events in the perturbed network being affected by the occurrence of the renewal events in the perturbing network. A careless treatment, ignoring this condition, may lead to misleading observations characterized by erratic behavior, making it impossible to detect the correlation between the perturbed and the perturbing signal [9]. This is where the multi-fractal perspective adopted by the authors of [10] to study CME may turn out to be more convenient than the method of cross-correlation functions. There is a growing literature devoted to the interpretation that 1/f-noise is the signature of complexity, where the spectra of complex phenomena are given by 1/f^ν. The complexity in time series is generically called 1/f-noise or 1/f-fluctuations, even though empirically the IPL index lies in the interval 0.5 < ν < 1.5. The 1/f-behavior can be detected by converting the underlying time series into a diffusion process. This conversion of data allows us to determine the corresponding Hurst exponent H for the diffusion process, which is well known to be related to the dimension of fractal fluctuations [11]. Consequently, we obtain for the scaling index of the spectrum

ν = 2H − 1. (1)

It is important to remark that this approach rests on the Gaussian assumption, requiring some caution when dynamical complexity is incompatible with the Gaussian condition. It has been observed [12] that the condition µ > 2 generates a diffusion process that, interpreted as Gaussian, yields the Hurst scaling

H = (4 − µ)/2, (2)

which plugged into Eq. (1) yields

ν = 3 − µ. (3)

Eq.
(3) is valid also for µ < 2, but in this case it requires a theoretical derivation taking into explicit account the condition of perennial aging [4]. It is important to stress that ν > 1 is consequently a sign of the action of crucial events. The spectrum of a complex network at criticality with crucial events with IPL index µ < 2, expressed in terms of the frequency ω = 2πf, is [4]:

S(ω) ∝ (1/L^{2−µ}) (1/ω^{3−µ}), (4)

where L is the length of the observed time series. In this paper we shall use this expression to prove that the multi-fractal metronome used by the authors of Ref. [10] is driven by crucial events responsible for CME [1] and complexity management (PCM) [3]. Fractal statistics appear to be ubiquitous in time series characterizing complex phenomena. Some empirical evidence for the existence of 1/f-noise within the brain, and how it relates to the transfer of information, helps set the stage for the theoretical arguments given below. The brain has been shown to be more sensitive to 1/f-noise than to white noise [13]; neurons in the primary visual cortex exhibit higher coding efficiency and information transmission rates for 1/f-signals than for white noise [14]; human EEG activity is characterized by changing patterns and these fluctuations generate renewal events [15]; reaction time to stimuli reveals that the more challenging the task, the weaker the cognitive 1/f-noise produced [16]. Of course, we could extend this list of brain-related experiments, or shift our attention to other complex systems, but the point has been made. Here we stress that according to the analysis of [17] the brain dynamics is a source of ideal 1/f-noise, being characterized by crucial events with µ ≈ 2, in accordance with an independent observation made by Buiatti et al. [18] of IPL index µ ranging from 1.7 to 2.3.
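As a quick sanity check, the chain of relations in Eqs. (1)-(3) can be verified numerically; the helper-function names below are ours, not the paper's:

```python
# Check that Eq. (2), H = (4 - mu)/2, combined with Eq. (1), nu = 2H - 1,
# reproduces Eq. (3), nu = 3 - mu, for representative values of mu.

def hurst_from_mu(mu):
    """Eq. (2): Hurst exponent of the diffusion generated by crucial events."""
    return (4.0 - mu) / 2.0

def spectral_index_from_hurst(h):
    """Eq. (1): IPL index nu of the 1/f^nu spectrum."""
    return 2.0 * h - 1.0

for mu in (1.5, 2.0, 2.5):
    nu = spectral_index_from_hurst(hurst_from_mu(mu))
    assert abs(nu - (3.0 - mu)) < 1e-12  # Eq. (3)
```

For µ = 2, the ideal 1/f case, this gives H = 1 and ν = 1, consistent with the remark that ν > 1 signals crucial events with µ < 2.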
It is interesting to notice that heartbeat dynamics of healthy patients were proved to host crucial events with µ close to the ideal condition µ = 2 [19], and that the recent work of Bohara et al. [20], in addition to confirming this observation, lends support to interpreting the meditation-activated brain-heartbeat synchronicity [21] as a form of CME of the same kind as that observed by Delignières and co-workers [10], namely, as a transfer of multi-fractality.

B. Experiments, multi-fractality and ergodicity breaking

With the increase in geopolitical tensions between states and the enhanced social media connectivity between individuals, along with the rapid progress of neurophysiology, societal behavior has become one of today's more important scientific topics, forcing researchers to develop and adopt new interdisciplinary approaches to understanding. As pointed out by Pentland [22], even the most fundamental social interaction, that being the dialogue between two individuals, is a societal behavior involving psychology, sociology, information science and neurophysiology. Consequently, we try to keep the theoretical discussion outside any one particular discipline and focus our remarks on what may apply across disciplines. The same issue of dyadic interaction can thus be studied from the theoretical perspective of the Science of Complexity, which we interpret as an attempt to establish a fruitful interdisciplinary perspective. This view recognizes that the interaction between phenomena from different disciplines requires the transfer of information from one complex system to another. The adoption of this interdisciplinary perspective naturally leads us to use the previously introduced notion of complexity matching [1,2,23].
On the other hand, the transition in modeling from physics to biology, ecology, or sociology gives rise to doubts concerning the adoption of the usual reductionism strategy and establishes a periodicity constraint that is often ignored by physical theories [24]. The observation of the dynamics of single molecules [25] yields the surprising result that in biological systems the ergodic assumption is violated [26]. On the basis of real psychological experiments, for instance the remarkable report on the response of the brain to the influence of a multi-fractal metronome [27], complexity matching has been interpreted as the transfer of a scaling PDF from a stimulus to the brain of a stimulated subject. More recent experimental results [10] confirm this interpretation, based on the transfer of global properties from one complex network to another, termed genuine complexity matching, while affording suggestions on how to distinguish it from more conventional local discrete coupling. We notice that the PCM [1] relies on the crucial role of criticality and ergodicity breaking, in full agreement with the concept of the transport of global properties from one complex network to another. This agreement suggests, however, that a connection exists, but has not yet been established, between multi-fractality and ergodicity breaking, in spite of the fact that current approaches to detecting multi-fractality are based on the ergodicity assumption [28]. Note that multi-fractality is defined by a time series having a spectrum of Hurst exponents (fractal dimensions), which is to say the scaling index changes over time [29], resulting in no single fractal dimension, or scaling parameter, characterizing the process. Instead there is a unimodal distribution of scale indices centered on the average Hurst exponent.
In psycho-physical experiments, for example, a person is asked to synchronize a tapping finger in response to a chaotic auditory stimulus, and complexity matching is interpreted as the transfer of scaling of the fractal statistical behavior of the stimulus to the fractal statistical response of the stimulated subject's brain. This response of the brain, in such a motor control task, when the stimulus is a multifractal metronome, has been established [27]. Experimental results [10] confirm this interpretation, based on the transfer of global properties from one complex network to another. In these latter experiments the multifractal metronome generates a spectrum of fractal dimensions f(α) as a function of the average singularity strength of the excitatory signal, and it is this dimensional spectrum that is captured by the brain response simulation, as depicted in Figure 1 (cf. Ref. [10]). The multi-fractal behavior manifested by the unimodal distribution provides a unique measure of complexity of the underlying network. It is worth noting that the same displacement of the metronome spectrum from the body response spectrum is observed for walking in response to a multifractal metronome. The main purpose of the present article is to establish that the multi-fractal arguments advanced by many advocates of complexity matching must be compatible with data displaying ergodicity breaking. The dynamical model that we use to establish this connection is the multi-fractal metronome [27], described by the periodically driven rate equation with delay:

ẋ(t) = −γ x(t) − β sin(x(t − τ_m)). (5)

This model, originally introduced by Ikeda and co-workers [30,31], was adopted by Voss [32] to illustrate the phenomenon of anticipating synchronization. In the present article, we adopt Eq. (5) to mimic the output of a complex network, generating both temporal complexity [33] and periodicity [34]. More precisely, setting γ = 1, we use Eq.
(5) with only two adjustable parameters, the amplitude of the sinusoidal driver β and the delay time τ_m, to determine the nonlinear dynamics of a metronome with a multi-fractal time series. One could use this simple equation to study the joint action of renewal events and periodicity, which may have the effect of either annihilating or reducing the renewal nature of events. This is, however, a challenging issue that we plan to discuss in future work. In this paper we deliberately establish a time separation between temporal complexity and periodicity, and we establish the accuracy of this separation by means of the aging experiment. We show that this simple equation, with a careful choice of the parameters β and τ_m, generates the same temporal complexity as that produced at criticality by a large number of interacting units in a complex network [7], yielding ergodicity breaking. This earlier work shows that in the long-time regime the nonlinear Langevin equation, yielding ergodicity breaking in the short-time regime, becomes equivalent to an ordinary linear Langevin equation. Herein we use this theory to establish the cross-correlation between a perturbed network S and a perturbing network P, identical to the network S, when both networks are at criticality. Using the important property that in the long-time limit the non-linear Langevin equation becomes identical to an ordinary Langevin equation, we find an exact analytical expression for the cross-correlation between S and P. We prove that the multi-fractal metronome, with a suitable choice of the parameters γ, β and τ_m, generates crucial events and that the temporal complexity of these events is identical to that of a complex network at criticality, more precisely the complex network of Section II.
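A minimal Euler integration of the delayed equation (5) can be sketched as follows; the step size, constant history, and parameter values are illustrative choices of ours, not those used in the paper:

```python
import math

def integrate_metronome(gamma=1.0, beta=1.4, tau_m=10.0, dt=0.01,
                        n_steps=20000, x0=0.1):
    """Euler scheme for dx/dt = -gamma*x(t) - beta*sin(x(t - tau_m)),
    with constant history x(t) = x0 for t <= 0."""
    delay_steps = int(round(tau_m / dt))
    x = [x0] * (delay_steps + 1)  # history buffer
    for _ in range(n_steps):
        x_now = x[-1]
        x_delayed = x[-1 - delay_steps]
        x.append(x_now + dt * (-gamma * x_now - beta * math.sin(x_delayed)))
    return x

trajectory = integrate_metronome()
```

Zero-crossings of the resulting trajectory supply the waiting times studied in Section III; for β large enough and τ_m long enough the fixed point x = 0 is unstable and the trajectory keeps re-crossing the origin.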
On the basis of this equivalence we predict that the cross-correlation function between the metronome equivalent to the network S and the metronome equivalent to the network P should be identical to the analytical cross-correlation function between network S and network P. This prediction is supported with surprising accuracy by the numerical results of this paper. The strategy of replacing the output of a complex network with the time series solution to a multi-fractal metronome equation yields the additional benefit of avoiding numerically integrating the equations of motion for a complex dynamic network, involving the interactions among a large number of particles in a physical model, people in a social model, or neurons in a model of the brain. Although the interplay between complexity and periodicity is an issue of fundamental importance [34], herein we focus on the IPL complexity hidden within the dynamics of Eq. (5); complexity that was overlooked in earlier research on this subject. As pointed out earlier, with our arguments about the generalization of the Wiener-Khintchine theorem in the case of perennial aging, we prove numerically that the spectrum S(ω) of the metronome fits very well the prediction of Eq. (4). On the other hand, we prove directly that a network at criticality is the source of the broad multi-fractal spectrum used in Ref. [10] to discuss CME.

C. Outline of paper

The outline of the paper is as follows. Section II shows that a complex network at criticality generates a distinct multi-fractal spectrum. We devote Section III to detecting the renewal events hidden within the dynamics of Eq. (5), and we establish the equivalence between the multi-fractal metronome and a complex network at criticality. In Section IV we find the analytical expression for the cross-correlation between two complex networks at criticality and we prove numerically that the two equivalent multi-fractal metronomes generate the same cross-correlation.
Section V illustrates the transfer of the multi-fractal spectrum from a complex to a deterministic metronome, and Section VI is devoted to concluding remarks.

II. CRITICALITY, DECISION MAKING MODEL AND MULTIFRACTALITY

A. Multifractality of the Decision Making Model

As an example of criticality we adopt the Decision Making Model (DMM) widely illustrated in the earlier work of Refs. [6,7,35-39]. For a review please consult the book of Ref. [40]. To clarify the connection between criticality and multi-fractality, we use the DMM. This model rests on a network of N units that have to make a choice between two states, called C and D. The state C corresponds to the value ξ = 1 and the state D corresponds to the value ξ = −1. The transition rate from C to D, g_CD, is given by

g_CD = g_0 exp[−K (M_C − M_D)/M] (6)

and the transition rate from D to C, g_DC, is given by

g_DC = g_0 exp[K (M_C − M_D)/M], (7)

where M_C and M_D are the numbers of a unit's nearest neighbors in the states C and D, respectively. The meaning of this prescription is as follows. The parameter 1/g_0 defines a dynamic time scale and we set g_0 = 0.1 throughout. Each individual has M neighbors (four in the case of the regular two-dimensional lattice used in this article). The cooperation state is indicated by C and the defection state by D. If an individual is in C, and the majority of its neighbors are in the same state, then the transition rate becomes smaller and the individual sojourns in the cooperation state for a longer time. If the majority of its neighbors are in D, then the individual sojourns in the cooperator state for a shorter time. An analogous prescription is used if the individual is in the defection state. We run the DMM for a time t and we evaluate the mean field x(t) defined in Eq. (8). For values of the control parameter K smaller than the critical value K_c, which depends on the network topology, the mean field x fluctuates around a vanishing mean value. For values of K significantly larger than K_c the mean field x(t) fluctuates around either 1 or −1.
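A bare-bones realization of these transition rates on a periodic two-dimensional lattice might look as follows; the time step, lattice size, and run length are our own illustrative choices, and the flip probability g·dt is a first-order approximation of the rates in Eqs. (6)-(7):

```python
import math
import random

DT = 0.1  # time step; small enough that g * DT < 1 for the K used here

def dmm_sweep(state, K, g0=0.1, L=10):
    """One update sweep of the Decision Making Model on an L x L periodic
    lattice; state[i][j] = +1 for C, -1 for D.  Each unit leaves its current
    state with rate g0 * exp(-K * s * (M_C - M_D)/M), which reproduces
    Eq. (6) for s = +1 (state C) and Eq. (7) for s = -1 (state D)."""
    for i in range(L):
        for j in range(L):
            s = state[i][j]
            neighbor_sum = (state[(i - 1) % L][j] + state[(i + 1) % L][j] +
                            state[i][(j - 1) % L] + state[i][(j + 1) % L])
            imbalance = neighbor_sum / 4.0  # (M_C - M_D)/M with M = 4
            g = g0 * math.exp(-K * s * imbalance)
            if random.random() < g * DT:
                state[i][j] = -s

def mean_field(state):
    """Eq. (8): average of the two-valued unit variables."""
    flat = [s for row in state for s in row]
    return sum(flat) / len(flat)

random.seed(1)
L = 10
lattice = [[random.choice((1, -1)) for _ in range(L)] for _ in range(L)]
x_series = []
for _ in range(2000):
    dmm_sweep(lattice, K=1.5, L=L)
    x_series.append(mean_field(lattice))
```

Zero-crossings of x_series then provide the waiting-time statistics discussed below; near K_c ≈ 1.5 the fluctuations of the mean field are large.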
It is important to stress that for values of K in the vicinity of the critical value K_c the fluctuations of the mean field have large intensity, and the largest intensity corresponds to the critical value K_c. The exact value of K_c depends also on the number of units [38] and its evaluation is outside the scope of this paper. Here we limit ourselves to noticing that in the case of a regular two-dimensional network with N = 100, K_c is around 1.5. The adoption of irregular networks, with a distribution of links departing from the condition of an equal number of links for each unit, may have the effect of significantly reducing the value of K_c. A scale-free distribution of links was found [39] to make K_c very close to K_c = 1, which corresponds to the ideal case where each unit is linked to all the other N − 1 units. The PDF of the time intervals between two consecutive re-crossings of the origin in the sub-critical condition K < K_c is exponential. At K = K_c the PDF becomes an IPL with power index µ = 1.5. In the supercritical state, the mean field fluctuates around a non-vanishing mean field. For K ≫ K_c the PDF of the time intervals between two consecutive re-crossings of this non-vanishing mean field again becomes exponential. The purpose of this section is to give the reader a better understanding of the role of crucial events, namely non-Poisson renewal events with power index µ fitting the condition 1 < µ < 3. Here we limit ourselves to illustrating the connection between criticality and the multi-fractal spectrum. The mean field is defined by

x(t) ≡ (1/N) Σ_{i=1}^{N} ξ_i. (8)

Following Delignières and co-workers [10] we apply to the time series x(t) the multifractal method of analysis proposed in 2002 by Kantelhardt et al. [41]. The method of Ref. [41] is an extension of the popular technique called Detrended Fluctuation Analysis (DFA) of Ref. [42], originally proposed to determine the Hurst coefficient H. The results depicted in Fig. 2 show that the inverted parabola f(α) becomes broadest at criticality.
It is interesting to observe that in the sub-critical regime, where the fluctuation of the mean field of Eq. (8) generates ordinary diffusion with Hurst coefficient H = 0.5, the spectrum is much sharper and is centered around α = 0.5. Increasing the value of the control parameter has the effect of shifting the barycenter of the inverted parabola towards larger values of α with no significant effect on the parabola's width. The dependence of f(α) on K is dramatically non-linear. In fact, with K going closer to K_c the barycenter of the inverted parabola jumps to the vicinity of α = 1.2 and the parabola, as stated earlier, reaches its maximal width. Moving towards higher values of K, supercritical values, has the effect of further shifting the parabola's barycenter to the right. However, the parabola's width becomes much smaller, in line with earlier arguments about the super-critical condition being less complex than the critical condition. It is impressive that with K = 2 the parabola's barycenter jumps back to the left, suggesting that for even larger values of K complexity is lost, in full agreement with [43]. Unfortunately, at the moment of writing this paper, no quantitative theory exists connecting criticality-induced complexity and the emergence of a multi-fractal spectrum at criticality. The work of [20] suggests that the spectrum f(α) is made broader by the action of crucial events activated by the criticality of the processes of self-organization [44]. The numerical results of this section confirm this property. In fact, as remarked earlier, Fig. 2 shows that the multi-fractal spectrum becomes broadest at criticality, while it becomes sharper in both the supercritical and sub-critical conditions, where according to [43] the time interval between two consecutive events has an exponential PDF.
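The MFDFA procedure of Kantelhardt et al. [41] used to produce spectra like those in Fig. 2 can be sketched as follows; this is a simplified order-1 version written by us, returning only the generalized Hurst exponents h(q), from which f(α) is obtained by a Legendre transform:

```python
import numpy as np

def mfdfa_h(signal, q_list, scales):
    """Generalized Hurst exponents h(q) via MFDFA with linear detrending."""
    profile = np.cumsum(signal - np.mean(signal))  # step 1: the profile
    h = {}
    for q in q_list:
        log_F, log_s = [], []
        for s in scales:
            n_seg = len(profile) // s
            t = np.arange(s)
            ms = []  # mean-square fluctuation of each segment
            for v in range(n_seg):
                seg = profile[v * s:(v + 1) * s]
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                ms.append(np.mean((seg - trend) ** 2))
            ms = np.asarray(ms)
            if q == 0:  # logarithmic average for q = 0
                F = np.exp(0.5 * np.mean(np.log(ms)))
            else:
                F = np.mean(ms ** (q / 2.0)) ** (1.0 / q)
            log_F.append(np.log(F))
            log_s.append(np.log(s))
        h[q] = np.polyfit(log_s, log_F, 1)[0]  # slope of log F vs log s
    return h

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)
h = mfdfa_h(white, q_list=(-2, 0, 2), scales=(16, 32, 64, 128, 256))
```

For uncorrelated Gaussian noise all h(q) cluster near 0.5, i.e., a narrow f(α); a broad spread of h(q) over q is the multifractal signature seen at criticality.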
This result suggests that, as done by Delignières and co-workers [10], it may be convenient to measure CME by observing the correlation between the multi-fractal spectrum of the perturbed network S and the multi-fractal spectrum of the perturbing network P, rather than the cross-correlation between the crucial events of S and the crucial events of P. The latter method, although more closely related to the occurrence of crucial events, is made difficult by ergodicity breaking [9], even when the crucial events are visible, to say nothing of the fact that crucial events are usually hidden in a cloud of non-crucial events [19].

B. Transfer of multi-fractality from one DMM network to another DMM network

In this section we want to discuss an experiment similar to that of Ref. [10] shown in Fig. 1. We do that with two DMM networks, the perturbing network playing the role of the metronome and the perturbed network playing the role of the participants. The complex network A has the control parameter K = 0.1, namely, it is a system in the subcritical condition, and the network B has the control parameter K = 1.4, close to criticality. We explore two opposite conditions. In the first the system A perturbs the system B, and in the second the system B perturbs the system A. The perturbation is done as follows: 5% of the units of the perturbed system adopt either the state C or D, according to whether the perturbing system has a positive or a negative mean field.

FIG. 3: The dashed black curve is the multifractal spectrum of the system A with K = 0.1 and the dashed pink curve denotes the multifractal spectrum of the system B, with K = 1.4. One network perturbs the other, as described in the text, with 5% of its units adopting the state C or D according to whether the perturbing system has a positive or a negative mean field. The blue curve is the multi-fractal spectrum of the network B under the influence of network A, and the red curve is the multi-fractal spectrum of A under the influence of B.
We see that when the system B, with the broader spectrum, perturbs A, which has a sharper spectrum, it forces A to acquire a much broader spectrum, even broader than the spectrum of B. When A, with a sharper spectrum than B, perturbs B, it has the effect of making B adopt a spectrum as sharp as that of the perturbing system. This result can be compared to that of the earlier work of Ref. [36], where 2% of the units of the perturbed network adopted the choice made by the perturbing network. In that case no correlation was detected between S and P except in the case where both networks were at criticality. This suggests that the correlation between the multi-fractal spectrum f(α) of S and the multifractal spectrum f(α) of P may be a more appropriate way to study CME [45].

III. DETECTING RENEWAL EVENTS

We devote this Section to establishing the equivalence between the dynamics of the metronome and those of a DMM network at criticality. We make a suitable choice of the parameters β and τ_m of Eq. (5) so as to make it possible to establish this statistical equivalence.

A. Renewal character of re-crossings

As done in earlier work [6,33], attention is focused on events corresponding to zero-crossings, that is, on the time intervals between successive crossings of x = 0. Successive zero-crossings are used to generate a first time (FT) series {τ_i}, where τ_i = t_{i+1} − t_i is the time interval between two consecutive events, that is, zero-crossings. An important question about this FT series is whether a non-zero two-time correlation between different events exists or not. The events are identified as renewal if all two-time and higher-order correlations are zero. The renewal nature of the events generated using the metronome equation is determined by using the aging experiment [46]. The method, originally proposed by Allegrini et al. [46], was for the purpose of proving that each zero-crossing of a time series is an isolated event, with no correlation with earlier events.
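Extracting the FT series from a sampled signal amounts to locating sign changes; a small sketch (our own helper functions, with linear interpolation of the crossing instants):

```python
import math

def crossing_times(signal, dt):
    """Instants at which a sampled signal crosses zero."""
    times = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        if a == 0.0 or a * b < 0:
            frac = 0.0 if a == 0.0 else a / (a - b)  # linear interpolation
            times.append((i + frac) * dt)
    return times

def ft_series(signal, dt):
    """The FT series {tau_i}: tau_i = t_{i+1} - t_i between zero-crossings."""
    t = crossing_times(signal, dt)
    return [t2 - t1 for t1, t2 in zip(t, t[1:])]

# sanity check: a sinusoid of period 2 crosses zero every 1 time unit
dt = 0.001
sine = [math.sin(math.pi * i * dt) for i in range(10000)]
taus = ft_series(sine, dt)
```

Applied to the trajectory of the metronome or of the DMM mean field, the same routine yields the waiting times whose statistics are analyzed by the aging experiment.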
The lack of correlation implies that when an event occurs, the occurrence of the next event is completely unrelated and unpredictable; the occurrence of an event can be interpreted as a form of rejuvenation of the system. Ergodicity breaking [26] is closely connected to the occurrence of renewal events, as can be intuitively understood by assigning to the waiting-time PDF ψ(τ) the IPL form

ψ(τ) ∝ 1/τ^µ. (9)

In the case µ < 2 the mean waiting time is infinite, ⟨τ⟩ = ∫_0^∞ τ ψ(τ) dτ = ∞, and the longer the total length of the time series under study, the greater the maximum value of the waiting time τ detected. In fact, although a very short time interval can be drawn immediately after a very long one is drawn, due to the renewal nature of the process, it is impossible that the largest value of τ found within a sequence of length L_1 remains the maximum in examining a sequence of length L_2 > L_1. This would conflict with the ⟨τ⟩ = ∞ condition. To assess whether the FT series {τ_i} generated by Eq. (5) is renewal, we generate a second, auxiliary, time series by shuffling the FT series. We refer to the latter as the shuffled time series. We apply the aging experiment algorithm to both the original and the shuffled sequence: we adopt a window of size t_a, corresponding to the age of the network that we want to examine; we locate the left end of the window at the time of occurrence of an event and record the time interval between the right end of the window and the occurrence of the first event emerging after the end of the window. Note that adopting windows of vanishing size corresponds to generating ordinary histograms. The histograms generated with age t_a produce different decision-time distribution densities, and these distribution densities, properly normalized, generate survival probabilities, whose relaxation can be distinctly different from that of the ordinary survival probability.
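The windowing procedure just described can be sketched on synthetic renewal data; the Pareto generator and the parameter choices (µ = 1.5, t_a = 50) are ours, used only to illustrate how aging lengthens the observed waiting times when µ < 2:

```python
import random

def ipl_waiting_time(mu, tau_min=1.0):
    """Draw tau from psi(tau) ~ tau^(-mu) for tau >= tau_min (inversion method)."""
    return tau_min * (1.0 - random.random()) ** (-1.0 / (mu - 1.0))

def aged_waiting_times(taus, t_a):
    """For each event, place a window of length t_a starting at the event and
    record the delay from the window's end to the first later event."""
    events, t = [], 0.0
    for tau in taus:
        t += tau
        events.append(t)
    aged = []
    for i, start in enumerate(events):
        deadline = start + t_a
        for e in events[i + 1:]:
            if e > deadline:
                aged.append(e - deadline)
                break
    return aged

random.seed(7)
taus = [ipl_waiting_time(mu=1.5) for _ in range(20000)]
aged = aged_waiting_times(taus, t_a=50.0)
fresh_median = sorted(taus)[len(taus) // 2]
aged_median = sorted(aged)[len(aged) // 2]
```

For µ = 1.5 the fresh median is tau_min·2^{1/(µ−1)} = 4, while the aged waiting times are markedly longer; for a renewal process the shuffled series ages in exactly the same way, which is the criterion used in the text.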
A non-ergodic renewal process is expected to generate a relaxation that becomes slower and slower as t_a increases. This lengthening of the relaxation time occurs because the method leads to a truncated time series. However, the truncation affects the short time intervals more than it does the long time intervals, thereby reducing the weight of ψ(τ) at short times while enhancing the weight of long time intervals. We have, of course, to take into account that we adopt normalized histograms. A process is renewal if the aging of the non-shuffled FT series is identical to the aging of the shuffled time series.

Let us discuss the results of the aging experiment applied to the time series {τ_i} generated by the multifractal metronome of Eq. (5). In Fig. 4 the shuffled time series is seen to yield a slight deviation from the non-shuffled time series of approximate magnitude τ ∼ 10. This is a consequence of setting the delay time to τ_m = 10, thereby establishing a periodicity interfering with temporal complexity. On the other hand, Fig. 5 shows that the shuffled data curve virtually coincides with the non-shuffled data curve throughout the entire time region explored by the multi-fractal metronome.

B. Long-time ergodic behavior

Note that the data curves in both Fig. 4 and Fig. 5 are characterized by long-time exponential truncations. In the intermediate time regions, both conditions show an IPL behavior with IPL index µ = 1.5. This property reinforces the conviction that the metronome dynamics and the dynamics of a network of interacting units at criticality are equivalent. In fact, in the absence of exponential truncation, the complex network would generate perennial non-ergodic behavior. The latter behavior would make the network dynamics incompatible with multi-fractality, which, as pointed out by Jizba and Korbel [28], requires a condition of thermodynamic equilibrium. The recent work of Beig et al.
[7] shows that the mean field x(t) of a complex network at criticality is well described by the nonlinear Langevin equation

ẋ(t) = −a x(t)^3 + f_x(t), (10)

with f_x(t) being a random noise generated by the finite size of the network, whose intensity is proportional to 1/√N, with N the number of interacting units. Eq. (10) describes the over-damped motion of a particle within a quartic potential, which has a canonical equilibrium distribution. However, a particle moving from the initial condition x(0) = 0 undergoes a virtual diffusional process for an extended time T_eq. The order of magnitude of this time is given by

T_eq ∝ N/a. (11)

For times t ≪ T_eq the network's dynamics are non-ergodic, but they become ergodic asymptotically for t ≫ T_eq. The zero-crossings are well described by a waiting-time PDF given by Eq. (9) for times τ ≪ T_eq. This PDF, however, is exponentially truncated, and as a consequence of this truncation the renewal aging is not perennial. As a result of aging, the IPL index of the PDF changes from µ to µ − 1, making the decay slower in the intermediate time region, as shown in Fig. 5. However, the aging process also has the effect of extending the exponential truncation. In the case of the complex network at criticality, this corresponds to the existence of a thermodynamical equilibrium emerging from the adoption of a large time scale of the order of T_eq. It is known [7] that the normalized auto-correlation function of the mean field x, to a high degree of approximation, becomes

A(τ) = exp(−Γ|τ|), (12)

with the relaxation rate given by [7]

Γ ∝ a ⟨x^2⟩_eq. (13)

Note that the theory of Ref. [7] proves that this auto-correlation function is obtained by replacing the nonlinear Langevin equation of Eq.
(10) with the following linear Langevin equation:

ẋ(t) = −Γx(t) + f_x(t). (14)

In Section IV we shall use these arguments to prove that the strong anticipation of the multi-fractal metronome may be closely related to the complexity matching observed by stimulating a complex network at criticality with the mean field of another complex network at criticality; see Lukovic et al. [43] for more details. The authors of the latter article studied a network of N units, where a small fraction of these units, called lookout birds because of the context of the discussion, or more generally labeled perceiving units, are sensitive to the mean field of another network of N units in a comparable physical condition. The units in both networks are decision-making individuals, who have to make a dyadic choice between the yes, +1, and no, −1, state. The perceiving units adopt the +1 state if the mean field perceived by them is positive, y > 0, or the −1 state if they perceive y < 0. The cross-correlation between the mean field x(t) of the driven network and the mean field y(t) of the driving network attains maximal intensity when both networks are at criticality. It turns out that the cross-correlation function is identical to the auto-correlation function of x(t), with a significant shift; namely, the cross-correlation function ⟨x(t + τ)y(t)⟩ reaches its maximal value at τ = ∆, where the delay ∆ represents a delay in transmitting information from the perturbing to the perturbed complex network. An intuitive interpretation of this time delay is that the information perceived by the lookout birds must be transmitted to all the units of their network. Lukovic et al. [43] adopted a different interpretation of this important delay time. To vindicate their view they studied the all-to-all coupling condition, where the perceptors are coupled to all the other units in their network.
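The quartic Langevin dynamics of Eq. (10) can be sketched with a short Euler-Maruyama integration. This is a sketch under stated assumptions: the time step, the seed, and the noise intensity D (standing in for the 1/√N finite-size noise) are our choices, not values from the text.

```python
import numpy as np

def quartic_langevin(a=1.0, D=0.05, dt=1e-3, steps=200000, seed=1):
    """Euler-Maruyama integration of Eq. (10), xdot = -a x^3 + f_x(t):
    over-damped motion in the quartic potential V(x) = a x^4 / 4,
    driven by Gaussian white noise of intensity D."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = 0.0                      # start on the flat top of the potential
    kicks = np.sqrt(2.0 * D * dt) * rng.standard_normal(steps - 1)
    for i in range(steps - 1):
        x[i + 1] = x[i] - a * x[i] ** 3 * dt + kicks[i]
    return x

x = quartic_langevin()
# the canonical equilibrium p(x) ~ exp(-a x^4 / (4 D)) is symmetric,
# so the long-time mean is ~0 while the variance stays finite
```

Linearizing this motion around equilibrium gives the effective relaxation rate Γ ∝ a⟨x^2⟩_eq of Eq. (13), which is the step that replaces Eq. (10) with Eq. (14).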
Even in this case the cross-correlation function is characterized by a significant time delay with respect to the unperturbed correlation function of x(t). The reason for the delay is that the group, in the case discussed by [43, 47] a flock of birds, can follow the direction of the driving network only when a significantly large number of zero-crossings occur. A zero-crossing corresponds to a free-will condition, where the whole network can be nudged by an infinitesimally small fluctuation to select either the positive or the negative state. Thus, the renewal nature of the zero-crossing events becomes essential for the emergence of such cooperative behavior as cognition [47], as in changing one's mind for no apparent reason. The zero-crossing is the time at which the network is most sensitive to a perturbation, so the more zero-crossings, the shorter the interval between a perturbation and the response.

C. Beyond ordinary diffusion

We note that the waiting-time PDF of Fig. 5 is characterized by

µ ≈ 1.35. (15)

This does not conflict with the earlier arguments on the exponential nature of the infinitely aged regime. In fact, on the basis of a theoretical approach based on the observation of the random growth of surfaces [48], it is argued [49] that in all organization processes the fluctuations of x(t) around the origin are non-Poisson renewal events characterized by the following common property. Let us call Ψ(t) the probability that no renewal non-Poisson event occurs at a distance t from an earlier event. The Laplace transform of Ψ(t), Ψ̂(u), is given by

Ψ̂(u) = 1/[u + λ^α (u + ∆)^(1−α)]. (16)

On the basis of Fig. 4 we assume that there exists a wide time interval generating the PDF index µ = 1 + α, which is 1.35 in the case of that figure. This time interval is defined by

1/λ ≪ t ≪ 1/∆. (17)

In the Laplace domain this wide time interval becomes

λ ≫ u ≫ ∆, (18)

thereby turning Eq.
(16) into

Ψ̂(u) = 1/[u + λ^α u^(1−α)]. (19)

This is equivalent to the Laplace transform of the Mittag-Leffler function, which is known to be a stretched exponential in the time regime t < 1/λ and the inverse power law 1/t^α in the time regime t > 1/λ. Due to Eq. (17), we conclude that in that time interval the waiting-time distribution density is an inverse power law with µ = 1 + α, while for times t > 1/∆

ψ(t) = Γ exp(−Γt), (20)

where

Γ ≡ λ^α ∆^(1−α). (21)

Note that the corresponding survival probability, Ψ(t), gets the form

Ψ(t) = exp(−Γt) (22)

and is identical to an equilibrium correlation function with the same form as that of Eq. (12), with Γ given by Eq. (21). In the example discussed in Section IV, where Γ = 0.01, we have ∆ = 0.0078, corresponding to the time t = 129.

FIG. 6: Survival probability of the crucial events of the metronome, compared to the exponential function corresponding to infinitely large age. The blue curve is the exponential function of Eq. (22).

In Fig. 6 we make a comparison between the numerical results on the aging of the renewal events and the theoretical prediction of Eq. (22). The good agreement between the numerical results and the theoretical prediction confirms that the metronome hosts crucial events, thereby supporting our conviction that the multi-fractal properties of the metronome, stressed by the work of Deligniéres and co-workers [10], are a manifestation of the action of crucial events. We provide further support for this important property by studying the spectrum S(ω) of the metronome in the condition of Fig. 5. According to the arguments leading to Eq. (4), the 1/f noise generated by the metronome in this condition, with µ = 1.333, should yield ν = 1.67. Fig. 7 yields a satisfactory agreement between theory and numerical results if we take into account the challenging numerical issue of evaluating the spectrum of a non-ergodic process with the density of crucial events decreasing upon increase of L.

IV.
COMPLEXITY MATCHING BETWEEN TWO MULTI-FRACTAL METRONOMES

In this Section we discuss the results of a numerical experiment on the cross-correlation between two identical multi-fractal metronomes, in the condition illustrated by Fig. 2, which makes them, as shown in the earlier Section, equivalent to the complex networks at criticality of Ref. [7]. We stress that this equivalence rests on sharing the same temporal complexity, namely, the same non-Poisson renewal statistics for the zero-crossings. This experiment of information transport from one multi-fractal metronome to another identical one yields a qualitative agreement with the results of the earlier work of Ref. [43] on the information transport from a complex network at criticality to another complex network at criticality. However, we go much beyond this qualitative agreement. To do so, we use the theory of Ref. [7] to derive an analytical expression for the cross-correlation used to evaluate the information transport, compare it to the cross-correlation between the two equivalent multi-fractal metronomes evaluated in this Section, and find outstanding agreement.

FIG. 8: (Color online) Complexity matching between driven and driving metronome. The black curve is the driving system and the red curve corresponds to the driven network.

The numerical calculations of this Section are based on the following set of coupled equations:

ẋ = −γx(t) − β sin(x(t − τ_m)) + χy, (23)

where x(t) is the mean field of the responding network and y(t) is the mean field generated by the driving network,

ẏ = −γy(t) − β sin(y(t − τ_m)). (24)

This choice of equations is made to mimic the influence that perceptors (lookout birds) exert on their own network in response to an external network. The interaction term χy must be weak to mimic the influence of a very small number of perceiving units. For this reason we assign to the coupling coefficient χ the value χ = 0.1. In Fig.
8 we show that this weak coupling results in a remarkable synchronization of the driven metronome with the driving metronome.

Let us now move to a quantitative discussion. To make our results compatible with the observation of single complex systems, the brain being an example of a unique system making it impossible for us to adopt the ensemble average method, we use the time-average approach and define the auto-correlation function A(τ) and the cross-correlation function C(τ) as follows:

A(τ) = [1/(T − τ)] ∫_0^{T−τ} dt x(t + τ)x(t) (25)

and

C(τ) = [1/(T − τ)] ∫_0^{T−τ} dt x(t + τ)y(t). (26)

In both cases we set T = 10^7. The auto-correlation function of Eq. (35) is evaluated with χ = 0, namely, when the metronome x is not perturbed by the metronome y. The bottom panel of Fig. 9 shows that the cross-correlation function, as expected, is characterized by a significant delay, on the order of τ ∼ 100. The cross-correlation function is asymmetric with respect to the shifted maximum. Notice that we have selected the value τ_m = 1000 of Fig. 5 so as to reduce the influence of periodicity on the temporal complexity.

Let us now generate an analytical expression to match these numerical results. First of all, let us stress that setting T = 10^7 is equivalent to making the numerical observation in the ergodic regime, where time and ensemble averages are expected to yield the same results. The adoption of ensemble averages makes the calculations much simpler and for this reason, with no contradiction with the statement that we focus our attention on unique complex networks, we rest our theoretical arguments on ensemble averages. The assumption that the time series generated by the multi-fractal metronome and that of the complex network are equivalent at criticality, and the arguments [7] proving the equivalence between Eq. (10) and Eq. (14), lead us to replace Eq. (23) and Eq.
(24) with the linearized forms

ẋ = −Γx(t) + f_x(t) + χy (27)

and

ẏ = −Γy(t) + f_y(t), (28)

where f_x(t) and f_y(t) are mutually uncorrelated Wiener noises. It is straightforward to show, using the lack of correlation between the two sources of noise, that the stationary cross-correlation function C(τ) is

C(τ) ≡ lim_{t→∞} ⟨x(t + τ)y(t)⟩ = lim_{t→∞} χ ∫_0^{t+|τ|} dt′ e^{−Γ(t+|τ|−t′)} ⟨y(t)y(t′)⟩. (29)

In the absence of coupling, the two metronomes are characterized by the normalized auto-correlation functions

A_x(τ) = A_y(τ) = e^{−Γ|τ|} ≡ A(τ). (30)

As a consequence,

⟨y(t)y(t′)⟩ = ⟨y^2⟩_eq e^{−Γ|t−t′|}. (31)

By inserting Eq. (31) into Eq. (29), and taking into account that for τ > 0 there are two distinct conditions, t′ < t and t < t′ < t + τ, we obtain

C(τ) ≡ lim_{t→∞} ⟨x(t + τ)y(t)⟩ = b e^{−Γ|τ|}, τ < 0, (32)

and

C(τ) ≡ lim_{t→∞} ⟨x(t + τ)y(t)⟩ = b e^{−Γ|τ|} (1 + 2Γτ), τ > 0. (33)

Note that

b ≡ ⟨y^2⟩_eq χ/(2Γ). (34)

It is important to stress that the normalized auto-correlation function of the multi-fractal metronome,

A(τ) ≡ lim_{t→∞} ⟨x(t)x(t + τ)⟩/⟨x(t)^2⟩ = e^{−Γ|τ|}, (35)

is evaluated numerically and is illustrated in the top panel of Fig. 9. We derive the value of Γ from this numerical treatment, and its value Γ = 0.01 is used in Eq. (32) and Eq. (33). As a consequence, to make a comparison between the theoretical and numerical cross-correlation functions we have only one fitting parameter, b, the intensity of the cross-correlation function at τ = 0. The bottom panel of Fig. 9 depicts the comparison between numerical and theoretical results and shows that the agreement between the two goes far beyond the qualitative. It is interesting to notice that Eq. (33) yields for the time shift ∆ of the cross-correlation function the following analytical expression:

∆ = 1/(2Γ). (36)

This interesting expression shows that reducing Γ has the effect of increasing the time shift.
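Equation (36) follows from maximizing the τ > 0 branch of Eq. (33): dC/dτ = bΓ e^{−Γτ}(1 − 2Γτ) = 0 at τ = 1/(2Γ). A short numerical check of this peak location, using the value Γ = 0.01 quoted in the text (b is set to 1, since an overall factor does not move the maximum):

```python
import numpy as np

Gamma = 0.01                          # relaxation rate quoted in the text
tau = np.linspace(0.0, 500.0, 200001)
# tau > 0 branch of Eq. (33), with the overall factor b set to 1
C = np.exp(-Gamma * tau) * (1.0 + 2.0 * Gamma * tau)
delta = tau[np.argmax(C)]             # location of the shifted maximum
# Eq. (36): delta = 1 / (2 * Gamma) = 50
```

The grid spacing (0.0025) bounds the discretization error of the peak location, so `delta` lands on 1/(2Γ) = 50 to that accuracy.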
On the other hand, reducing Γ has the effect of making the non-ergodic time regime t < T_eq more extended, thereby suggesting a close connection between complexity matching and ergodicity breaking.

V. TRANSFER OF MULTI-FRACTAL SPECTRUM FROM A COMPLEX TO A DETERMINISTIC METRONOME

After illustrating the similarity between the metronome-metronome interaction and the DMM-DMM interaction when both DMM networks are at criticality, let us move on to discuss the transfer of information from a complex metronome to a deterministic metronome. On the basis of the PCM we should expect that in this case no significant transfer of information occurs. See, for instance, the earlier work of Ref. [36] as an example of a lack of information transport when the driving DMM network is at criticality and the driven DMM network is in the subcritical condition. The field x(t) of the driving network is illustrated by the top panel of Fig. 10. The waiting-time PDF between two consecutive regressions to the origin is of the same kind as that illustrated in Fig. 4, with an intermediate time region with the complexity µ = 1.35 and an exponential truncation. The driven metronome in the absence of the influence of the driving metronome generates the field x(t) illustrated by the middle panel of Fig. 10. This is a fully deterministic condition corresponding to the choice τ_m = 1, which implies a lack of complexity. More precisely, Eqs. (23) and (24) have been changed into

ẋ = −γx(t) − β sin(x(t − τ_S)) + χy (37)

and

ẏ = −γy(t) − β sin(y(t − τ_P)), (38)

respectively, where τ_S = 1 and τ_P = 1000. The influence of the driving metronome on the driven one is illustrated by the bottom panel of Fig. 10. It is evident that the driven metronome has absorbed the complexity of the driving metronome. This important property is made compelling by the results of Fig. 11. This figure was obtained by applying the multi-fractal algorithm of Ref.
[41] to the time series x(t) of the driven metronome moving under the influence of the driving metronome. How can this surprising result be explained? The earlier dynamical work on complexity matching was based on the assumption that a network made complex by criticality is characterized by K = K_c, while a network with no complexity has a control parameter distinctly smaller than the critical value K_c. The crucial events of the driving network exert an influence on the times of occurrence of the crucial events of the driven network, and consequently a form of synchronization takes place when both networks are at criticality. However, the interaction between the two networks does not affect the values of their control parameters. If the driven network is in the subcritical regime, the statistics of its events remain of Poisson kind, thereby making it impossible to realize such a synchronization effect. The parameter τ_m apparently plays the same role as the control parameter K of the DMM network, but the term β sin(x(t − τ_S)) + χy of Eq. (37), with τ_S = τ_m, is turned under the influence of the perturbation into β sin(y(t − τ_P)). The multi-fractal metronome is more flexible than a DMM network in the subcritical regime.

VI. CONCLUDING REMARKS

The main results of this paper are the following. The multi-fractal metronome used by Deligniéres and co-workers [10] is equivalent to a complex network at criticality in the sense that it hosts crucial events. This establishes a connection between the multi-fractal spectrum f(α) and crucial events. This result is in qualitative accordance with the observation made in Fig. 2. The multi-fractal metronome is equivalent to a complex network at criticality, although its temporal complexity is characterized by µ = 1.35 rather than µ = 1.5, as for the DMM [7]. The cross-correlation between two identical networks in the long-time limit is indistinguishable from that of two DMM networks at criticality, as shown by Fig. 9.
Thus, we can conclude that the complexity matching established by Deligniéres and co-workers [10] and illustrated in Fig. 1 is a process made possible by the influence that the crucial events of the metronome exert on the crucial events of the brains of the participants. The adoption of f(α) as a measure of the response of S to P seems to be more powerful than the use of the cross-correlation function. In fact, it is also of remarkable interest to notice that the correspondence between crucial events and a broader distribution of f(α) makes it possible to establish the existence of a correlation between the perturbed network S and the perturbing network P in conditions far from the complexity matching of both networks at criticality, where earlier work did not reveal any significant correlation [36]. The metronome in the physical condition making it equivalent to a network of interacting units at criticality exerts a strong influence on a network in the deterministic condition; see Fig. 10. Also in this case f(α) is a powerful indicator of correlation. Fig. 11 shows that the perturbed deterministic metronome inherits the spectral distribution f(α) of the perturbing metronome.

FIG. 1: Multifractal spectra for the participant (black circles) and the metronome (white circles). This figure is derived with permission from the left panel of Fig. 3 of Ref. [10].

FIG. 2: Multi-fractal spectrum of the DMM network for different values of the control parameter K. Note the non-monotonic behavior of the location of the peak, as well as the width of the distribution, with the value of K.

FIG. 4: (Color online) Waiting-time PDF for γ = 1, β = 200, τ_m = 10, L = 10^7 and τ_a = 10.

FIG. 5: Waiting-time PDF for γ = 1, β = 1000, τ_m = 1000, L = 10^7 and τ_a = 100.

FIG. 7: The spectrum S(ω) of the metronome in the condition of Fig. 5.

FIG. 9: (Color online) Top panel: Numerical auto-correlation function A(τ) of Eq. (35). The numerical value of Γ is Γ = 0.01. Bottom panel: The black curve denotes the numerical result for the cross-correlation function with γ = 1, β = 1000 and τ_m = 1000. The red curve is derived from Eq. (32) and Eq. (33) with the same Γ as in the top panel and the fitting parameter b with the value b = 81.8.

FIG. 10: (Color online) Top panel: Time evolution of the driving metronome with γ = 1, β = 1000 and τ_m = 1000. Middle panel: Time evolution of the driven metronome (before connection) with γ = 1, β = 100 and τ_m = 1. Bottom panel: Time evolution of the driven metronome (after connection, χ = 0.1).

FIG. 11: (Color online) Black curve: parabola of the driving metronome with γ = 1, β = 1000 and τ_m = 1000. Red curve: parabola of the driven metronome with γ = 1, β = 100 and τ_m = 1; χ = 0.1.

Acknowledgments. The authors are grateful to David Lambert for contributing important discussions on the subject of this article and for his technical assistance in the preparation of the manuscript. KM and PG warmly thank ARO and Welch for support through Grant No. W911NF-15-1-0245 and Grant No. B-1577, respectively.

[1] B. J. West, E. L. Geneston, P. Grigolini, Maximizing information exchange between complex networks, Phys. Rep. 468, 1 (2008).
[2] D. Delignières, V. Marmelat, Fractal fluctuations and complexity: Current debates and future challenges, Critical Rev. Biomed. Eng. 40, 485 (2012).
[3] G. Aquino, M. Bologna, P. Grigolini, B. J. West, Beyond the death of linear response: 1/f optimal information transport, Phys. Rev. Lett. 105, 069901 (2010).
[4] M. Lukovic, P. Grigolini, Power spectra for both interrupted and perennial aging processes, J. Chem. Phys. 129, 184102 (2008).
[5] S. Bianco, E. Geneston, P. Grigolini, M. Ignaccolo, Renewal aging as emerging property of phase synchronization, Physica A 387, 1387 (2008).
[6] M. Turalska, M. Lukovic, B. J. West, P. Grigolini, Complexity and synchronization, Phys. Rev. E 80, 021110 (2009).
[7] M. T. Beig, A. Svenkeson, M. Bologna, B. J. West, P. Grigolini, Critical slowing down in networks generating temporal complexity, Phys. Rev. E 91, 012907 (2015).
[8] G. Aquino, M. Bologna, B. J. West, P. Grigolini, Transmission of information between complex systems: 1/f resonance, Phys. Rev. E 83, 051130 (2011).
[9] N. Piccinini, D. Lambert, B. J. West, M. Bologna, P. Grigolini, Nonergodic complexity management, Phys. Rev. E 93, 062301 (2016).
[10] D. Delignières, Z. M. H. Almurad, C. Roume, V. Marmelat, Multifractal signatures of complexity matching, Experimental Brain Research 234, 2773 (2016).
[11] B. B. Mandelbrot, Fractals: Form, Chance and Dimension, W. H. Freeman, San Francisco (1977).
[12] N. Scafetta, P. Grigolini, Scaling detection in time series: Diffusion entropy analysis, Phys. Rev. E 66, 036130 (2002).
[13] R. Soma, D. Nozaki, S. Kwak, Y. Yamamoto, 1/f Noise Outperforms White Noise in Sensitizing Baroreflex Function in the Human Brain, Phys. Rev. Lett. 91, 078101 (2003).
[14] Y. Yu, R. Romero, T. S. Lee, Preference of Sensory Neural Coding for 1/f Signals, Phys. Rev. Lett. 94, 108103 (2005).
[15] P. Gong, A. R. Nikolaev, C. van Leeuwen, Intermittent dynamics underlying the intrinsic fluctuations of the collective synchronization patterns in electrocortical activity, Phys. Rev. E 76, 011904 (2007).
[16] J. Correll, 1/f noise and effort on implicit measures of bias, J. Person. Social Psychol. 94, 48 (2008).
[17] P. Allegrini, D. Menicucci, R. Bedini, L. Fronzoni, A. Gemignani, P. Grigolini, B. J. West, P. Paradisi, Spontaneous brain activity as a source of ideal 1/f noise, Phys. Rev. E 80, 061914 (2009).
[18] M. Buiatti, D. Papo, P.-M. Baudonnière, C. van Vreeswijk, Feedback modulates the temporal scale-free dynamics of brain electrical activity in a hypothesis testing task, Neuroscience 146, 1400 (2007).
[19] P. Allegrini, P. Grigolini, P. Hamilton, L. Palatella, G. Raffaelli, Memory beyond memory in heart beating, a sign of a healthy physiological condition, Phys. Rev. E 65, 041926 (2002).
[20] G. Bohara, D. Lambert, B. J. West, P. Grigolini, Crucial events, randomness and multi-fractality in heartbeats, submitted to Phys. Rev. E.
[21] D. Kim, K. Lee, J. Kim, M. Whang, S. W. Kang, Dynamic correlations between heart and brain rhythm during Autogenic meditation, Front. Hum. Neurosci. 7, 414 (2013).
[22] A. Pentland, To Signal Is Human, American Scientist 98, 204 (2010).
[23] D. H. Abney, A. Paxton, R. Dale, C. T. Kello, Complexity Matching in Dyadic Conversation, Journal of Experimental Psychology: General 143, 2304 (2014).
[24] A. S. Iberall, A Physical (Homeokinetic) Foundation for the Gibsonian Theory of Perception and Action, Ecological Psychology 7, 37 (1995).
[25] M. C. Leake, The physics of life: one molecule at a time, Phil. Trans. R. Soc. B 368, 20120248 (2012).
[26] S. Burov, J.-H. Jeon, R. Metzler, E. Barkai, Single particle tracking in systems showing anomalous diffusion: the role of weak ergodicity breaking, Phys. Chem. Chem. Phys. 13, 1800 (2011).
[27] D. G. Stephen, J. A. Dixon, Strong anticipation: Multifractal cascade dynamics modulate scaling in synchronization behaviors, Chaos, Solitons & Fractals 44, 160 (2011).
[28] P. Jizba, J. Korbel, Multifractal diffusion entropy analysis: Optimal bin width of probability histograms, Physica A 413, 438 (2014).
[29] J. Feder, Fractals, Plenum Press, New York (1988).
[30] K. Ikeda, K. Matsumoto, High-dimensional chaotic behavior in systems with time-delayed feedback, Physica D 29, 223 (1987).
[31] K. Ikeda, H. Daido, O. Akimoto, Optical turbulence: chaotic behavior of transmitted light from a ring cavity, Phys. Rev. Lett. 45, 709 (1980).
[32] H. U. Voss, Anticipating chaotic synchronization, Phys. Rev. E 61, 5115 (2000).
[33] M. Turalska, B. J. West, P. Grigolini, Temporal complexity of the order parameter at the phase transition, Phys. Rev. E 83, 061142 (2011).
[34] M. Zare, P. Grigolini, Cooperation in neural systems: Bridging complexity and periodicity, Phys. Rev. E 86, 051918 (2012).
[35] S. Bianco, E. Geneston, P. Grigolini, M. Ignaccolo, Renewal aging as emerging property of phase synchronization, Physica A 387, 1387 (2008).
[36] M. Turalska, M. Lukovic, B. J. West, P. Grigolini, Complexity and Synchronization, Phys. Rev. E 80, 021110 (2009).
[37] M. Turalska, B. J. West, P. Grigolini, Temporal complexity of the order parameter at the phase transition, Phys. Rev. E 83, 061142 (2011).
[38] N. W. Hollingshad, M. Turalska, P. Allegrini, B. J. West, P. Grigolini, A new measure of network efficiency, Physica A 391, 1894 (2012).
[39] M. Turalska, E. Geneston, B. J. West, P. Allegrini, P. Grigolini, Cooperation-induced topological complexity: a promising road to fault tolerance and Hebbian learning, Frontiers in Fractal Physiology 3, 52 (2012).
[40] B. J. West, M. Turalska, P. Grigolini, Networks of Echoes: Imitation, Innovation and Invisible Leaders, Springer International (2014).
[41] J. W. Kantelhardt, S. A. Zschiegner, E. Koscielny-Bunde, S. Havlin, A. Bunde, H. E. Stanley, Multifractal detrended fluctuation analysis of nonstationary time series, Physica A 316, 87 (2002).
[42] C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, A. L. Goldberger, Phys. Rev. E 49, 1685 (1994).
[43] M. Luković, F. Vanni, A. Svenkeson, Transmission of information at criticality, Physica A 416, 430 (2014).
[44] K. Mahmoodi, B. J. West, P. Grigolini, Self-Organizing Complex Networks: individual versus global rules, submitted to Frontiers.
[45] N. Piccinini, B. J. West, P. Grigolini, Transfer of information from one to another complex network: How to bypass the technical and theoretical problems raised by criticality-induced ergodicity breaking?, submitted to Phys. Rev.
[46] P. Allegrini, F. Barbi, P. Grigolini, P. Paradisi, Renewal, modulation, and superstatistics in time series, Phys. Rev. E 73, 046136 (2006).
[47] F. Vanni, M. Luković, P. Grigolini, Criticality and Transmission of Information in a Swarm of Cooperative Units, Phys. Rev. Lett. 107, 078103 (2011).
[48] R. Failla, P. Grigolini, M. Ignaccolo, A. Schwettmann, Random growth of interfaces as a subordinated process, Phys. Rev. E 70, 010101(R) (2004).
[49] K. Mahmoodi, B. J. West, P. Grigolini, Self-organizing Complex Networks: individual versus global rules, Frontiers in Physiology 8, 478 (2017).
Spectral Precoding for Out-of-band Power Reduction under Condition Number Constraint in OFDM-Based System

Lebing Pan ([email protected])
Research Institute of China Electronic Technology Group Corporation, Shanghai 200311, China

Abstract: Due to the flexibility in spectrum shaping, orthogonal frequency division multiplexing (OFDM) is a promising technique for dynamic spectrum access. However, the out-of-band (OOB) power radiation of OFDM introduces significant interference to adjacent users. This problem is serious in cognitive radio (CR) networks, which enable the secondary system to access instantaneous spectrum holes. Existing methods either do not effectively reduce the OOB power leakage or introduce significant bit-error-rate (BER) performance deterioration at the receiver. In this paper, a joint spectral precoding (JSP) scheme is developed for OOB power reduction through the matrix operations of orthogonal projection and singular value decomposition (SVD). We also propose an algorithm to design the precoding matrix under a receive-performance constraint, which in practice is converted to a matrix condition number constraint. This method achieves the desired spectrum envelope and receive performance by selecting zero-forcing frequencies. Simulation results show that the OOB power decreases significantly with the proposed scheme under the condition number constraint.

DOI: 10.1007/s11277-016-3874-8
arXiv: 1511.03928
Keywords: spectral precoding; out-of-band; orthogonal frequency division multiplexing (OFDM); sidelobe suppression; condition number constraint

Introduction

Dynamic spectrum access [1,2] technology is extensively studied as an effective scheme to achieve high spectral efficiency, which is a crucial step for cognitive radio (CR) networks.
Due to its flexible operability over non-continuous bands, orthogonal frequency division multiplexing (OFDM) is considered a candidate transmission technology for CR systems [3]. However, because of the rectangular pulse shaping, the sidelobe power decays slowly in the frequency domain, only with the square of the distance to the main lobe. The out-of-band (OOB) power radiation, or sidelobe leakage, of OFDM therefore causes severe interference to adjacent users. This problem is particularly serious in CR networks, which allow secondary users to access instantaneous spectrum holes; these secondary users need to ensure that their power emission causes an acceptable interference level for the primary users. In practice, a guard band typically on the order of 10% is needed for an OFDM signal in the long term evolution (LTE) system [4], which significantly reduces the spectrum efficiency. The traditional method of sidelobe suppression is based on windowing techniques [5], such as raised cosine windowing [6], applied to the time-domain waveform. However, this scheme requires an extended guard interval to avoid signal distortion, and the spectrum efficiency is again reduced for large guard intervals. The cancellation carriers (CC) technique [7,8] inserts a few carriers at the edge of the spectrum to cancel the sidelobes of the data carriers, but it degrades the signal-to-noise ratio (SNR) at the receiver. The subcarrier weighting method [9] weights the individual subcarriers such that the sidelobes of the transmitted signal are minimized according to an optimization algorithm; however, the bit-error rate (BER) increases at the receiver, and the method is difficult to implement in real time when the number of subcarriers is large. The multiple choice sequence method [10] maps the transmitted symbol into multiple equivalent transmit sequences.
As a consequence, the system throughput is reduced as the size of the sequence set grows. Constellation adjustment [11] and constellation expansion [12] are difficult to implement when the order of quadrature amplitude modulation (QAM) is high. Notably, the methods of [10,11] require the transmission of side information. The adaptive symbol transition scheme [13] usually provides weak sidelobe suppression in frequency ranges closely neighboring the band occupied by the secondary user. Moreover, the schemes [7-14] depend heavily on the transmitted data symbols. Precoding is widely used in OFDM systems to enhance transmission reliability over the wireless channel, and many precoding methods have been proposed for OOB power reduction [15]. There are two main optimization approaches that obtain large suppression gains. One is to force the frequency response at selected frequency points to zero by orthogonal projection [16-19]; these points are referred to as zero-forcing frequencies. The other [20,21] minimizes the power leakage in an optimization frequency region by quadratic optimization, using matrix singular value decomposition (SVD). The precoding scheme in [16] is designed so that the first N derivatives of the signal are continuous at the symbol edges; however, this method introduces an error floor in the error performance. The sidelobe suppression with orthogonal projection (SSOP) method in [19] adopts one reserved subcarrier for recovering the distorted signal at the receiver. To maintain the BER performance, a data cost is introduced in [17] by exploiting redundant information in the subsequent OFDM symbol. The sidelobe suppression in [21] is based on minimizing the OOB power at selected frequencies in an optimized region.
The suppression problem is first treated as a matrix Frobenius-norm minimization problem, and the optimal orthogonal precoding matrix is designed based on the matrix SVD. In [20], an approach is proposed for multiuser cognitive radio systems; it ensures user independence by constructing individual precoders that render selected spectrum nulls. Unlike the methods that focus on minimizing the sidelobes or forcing them to zero, the mask-compliant precoder in [22] forces the spectrum below the mask by solving an optimization problem, but the algorithm has high complexity. In this paper, a spectral precoding scheme combining matrix orthogonal projection and SVD is proposed for OOB power reduction in OFDM-based systems. The main idea is to reduce the OOB power subject to the receive quality: the condition number of the precoding matrix indicates the BER loss at the receiver, so we develop an iterative algorithm to design the precoding matrix under a matrix condition number constraint. The proposed method strikes an appropriate balance among suppression performance, spectral efficiency, and receive quality, and is consequently flexible for practical implementation. The rest of the paper is organized as follows. In Section 2, we introduce the system model of OOB power reduction by spectral precoding. In Section 3, we present the proposed spectral precoding approach. In Section 4, we provide an iterative algorithm to design the precoding matrix according to the desired spectrum envelope, spectral efficiency, and BER performance. Simulations are presented in Section 5 to demonstrate the performance of the proposed method, followed by a summary in Section 6.

System Model

The block diagram of a typical OFDM system using a precoding technique is illustrated in Fig. 1. The number of total carriers used in the transmitter is M.
The digital spectral precoding process before the inverse fast Fourier transform (IFFT) operation is expressed as

s = P d,   (1)

where d is the original OFDM symbol vector of size N × 1 and s is the precoded vector. The precoding matrix P has size M × N (M > N), and the coding redundancy R = M − N is usually small. The spectral precoding method achieves better suppression performance in zero-padding (ZP) OFDM systems than in cyclic-prefix (CP) OFDM [19,21]. Moreover, the ZP scheme has already been proposed as an alternative to the CP in OFDM transmissions [23] and particularly for cognitive radio [24]. The proposed method can also be applied directly to CP systems, with degraded sidelobe suppression performance. Conventionally, power spectral density (PSD) analysis for multicarrier systems is based on an analog model with a sinc kernel function [25]; the PSD converges to the sinc function as the sampling rate increases. In addition, the oversampling constraint presented in [26] ensures the desirable power-spectral sidelobe envelope property after precoding for DFT-based OFDM. In a general OFDM system, the time-domain pulse is a rectangular function (baseband equivalent),

g(t) = 1 for 0 ≤ t ≤ T, and 0 elsewhere,   (2)

where T is the symbol duration. The frequency-domain representation of the m-th subcarrier is

G_m(ω) = e^{−jωT/2} T sin((ω − ω_m)T/2) / ((ω − ω_m)T/2),   (3)

where ω_m is the center frequency of the m-th subcarrier. Therefore, the magnitude envelope in the OOB region (ω ≠ ω_m) is bounded as

|G_m(ω)| = T |sin((ω − ω_m)T/2)| / |(ω − ω_m)T/2| ≤ 2 / |ω − ω_m|.   (4)
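As a quick numerical check (not from the paper; T and ω_m below are illustrative values), the subcarrier spectrum of (3) can be evaluated and compared against the OOB envelope bound of (4):

```python
import numpy as np

T, w_m = 1.0, 0.0        # symbol duration and subcarrier center (illustrative)

def G_m(w):
    """Subcarrier spectrum of Eq. (3); np.sinc(y) = sin(pi*y)/(pi*y)."""
    x = (w - w_m) * T / 2.0
    return np.exp(-1j * w * T / 2.0) * T * np.sinc(x / np.pi)

w = np.linspace(2 * np.pi / T, 40 * np.pi / T, 1000)   # OOB region, w > w_m
assert np.all(np.abs(G_m(w)) <= 2.0 / np.abs(w - w_m) + 1e-12)   # Eq. (4) bound
assert np.isclose(abs(G_m(2 * np.pi / T)), 0.0)  # spectral null one subcarrier away
```

The second assertion reflects subcarrier orthogonality: at integer multiples of the subcarrier spacing 2π/T the spectrum has exact nulls, while between them the sidelobes decay only as 2/|ω − ω_m|.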
The complex computing by (3) is converted to real operation, which reduce the complexity of spectral precoding in the following. Based on (5), using the superposition of M carriers, the target function indicating the PSD at OOB frequency  is defined by Submitted 6 2 1 0 ( ) ( ) , M mm m A s C      (6) where m s is the m-th element in the precoded OFDM vector s . In OFDM system, the power spectrum of its sidelobe decays slowly as 2 ()    , where m       is the frequency distance to the mainlobe. From (1) and (6), the PSD target function of the OFDM signal is expressed in the matrix form as     2 11 ( ) ( ) , T ss PA TT   c Pd  (7) where   0 1 1 ( ), ( ),... ( ) T M C C C      c , (.) T and  denote transpose operation and expectation respectively. respectively, but indicate their envelope character. The goal of the precoding method is to design P to reduce the emission in OOB region. Simultaneously, the process is irrespective of the value of vector d . The Proposed Joint Spectral Precoding (JSP) Method In this paper, a joint spectral precoding (JSP) method is proposed with two times orthogonal projection and one time SVD. The process is given by three steps: Inner orthogonal projection → SVD → Outside orthogonal projection. As presented in the previous papers [19,21], in each step, the main idea of orthogonal projection is to force the power of zero-forcing frequency points to be zero, and the SVD operation is to minimize the power in optimized region. However, the precoding matrix derived from the final step does not achieve the goal in each step again. Unlike the methods that focus on minimizing or forcing the sidelobe to zero, we reduce the OOB power under the receive quality constraint by selecting the frequency points in each step. Therefore, the proposed method has an appropriate balance between the suppression performance and the receive quality. 
In addition, when the number of the reserved carriers is small, such as only one, then the number of zero-forcing frequencies in orthogonal projection method is one. But in the JSP, we set the zero-forcing frequency in inner and outside orthogonal projection to be different and better suppression performance is obtained. Submitted Fig. 2 The spectrum diagram for OOB power reduction. Generally, the subcarriers of OFDM-based system are considered to be continuous. The proposed method also can be used for non-continuous multi-carriers system. The number of employed carriers is M and the index is from 0 to 1 M  . Fig. 2 illustrates the frequency region of m   . The R reserved carriers at the upper and lower spectral edges, which are used to achieve sidelobe suppression and maintain the receive quality. Two groups of zero-forcing frequency points a ω and b ω are selected in inner and outside orthogonal projection respectively. The corresponding number is a N and b N ( a NR  , b NR  ). K frequency points in the optimized region are chosen to reduce the OOB power by SVD operation. We first give the process of JSP in this section, and the detail of how to select the parameters is presented in the next. In the first step, the group of zero-forcing frequency points a ω ( Obviously, a P is non-orthogonal. Therefore, it will cause significant BER performance degradation in the receiver. After the step of the inner orthogonal projection, the precoded data is given by ˆ. aa  s P d(11) Then in the second step, the magnitude response matrix of K frequency points TT b M b b b b   P I C C C C(14) where the matrix After precoding, the minimum error problem of decoding in the receiver, is given by m in , PPd -d (16) where P is the decoding matrix. The orthogonal projector N is small, the case of that P is full column rank, is easy to be achieved in practice. 
Then the pseudoinverse matrix of P is the optimal solution for (16) to achieve ˆN  PP I , which is given by 1 () TT   P P P P . This JSP scheme is unlike the methods that minimize the OOB power or forcing the sidelobe to zero. In the third step, we map As presented in the SVD operation [21] or orthogonal projection method [19], the receive quality and suppression performance have not been well balanced. We also had some tests to examine other combinations that only employing the first two steps or the last two steps in the JSP method. The results indicate that the suppression performance is similar to only using orthogonal projection or SVD operation. The reason is that the better OOB power reduction performance by the JSP method is obtained by two times orthogonal projection. In addition, the BER performance is improved by using reserved carriers in the second step. What's more, the desirable spectrum envelope also can be achieved by selecting the frequency points or optimized region in the three steps independently. Design of The Precoding Matrix Under Condition Number Constraint In this section, we develop an algorithm to design the precoding matrix, obtaining the desirable spectrum envelope under receive performance constraint. As illustrated in Fig.1, the data after FFT process in the receiver is expressed as ,  s Qd + n (17) where Q = HP , H is a complex diagonal matrix with the channel frequency response of M subcarriers and n is complex additive white Gaussian noise (AWGN) vector with zero mean. Q = P for AWGN channel. s  is the received signal from noise measurement. The matrix P is full column but T N  P P I . Thus, this non-orthogonal precoding matrix will introduce BER loss in the receiver. If some singular values of P are too small, compared to the other values, low additive noise will result of large errors. 
Therefore, the condition number  of P is introduced, which is to measure the sensitivity of the solution of linear equations to errors in the data [28]. where max  and min  is the largest and the smallest singular value of P . Con(.) denotes the condition number of a matrix. The value of  indicates the BER loss in the precoding. [1, )    and larger value of  leads to worse BER performance. Fig. 3 The condition number of transition matrix Q in AWGN channel and Rayleigh fading channel. The transition matrix Q significantly influence the receive quality. We examine the average condition number of Q through different channels in Fig. 3. (20) where 0  is the given condition number according to the BER constraint. The signal estimation performance under condition number constraint in a communication system is illustrated in [28,30]. In addition, as presented in [16,19], the zero-forcing frequency point close to mainlobe leads to quicker power reduction on the edge of the mainlobe, but the power far from the mainlobe is larger. Inversely, if these points are far from the mainlobe, the power decreases slowly on the edge, while the emission from the mainlobe decreases largely. Submitted Table 1 The correlation between OFDM performance and the parameters in the proposed method. With summarizing the properties analyzed above, we first give the correlation between the parameters and the performance of OFDM transceiver in Table 1, which is also presented in the simulation section. In the following, we develop an algorithm to obtain the precoding matrix P by selecting the frequencies b ω and a ω , according to the summary in Table 1. The main spectrum envelope is decided in the initialization process step by step as Algorithm. 1: Initialization: 0  , R , a N , iteration increment   , 0 bb  ω ω , 0 aa  ω ω , 0   , 0 i  . 1. : ω also should not be too small or too far from the mainlobe. +1 i i i  . i-th iteration. 
Otherwise, the matrix C_o may be close to singular or badly scaled, which may make the SVD results inaccurate in practice.

Simulation Results and Discussions

In this section, numerical results are presented to demonstrate the performance of the proposed method. An LTE-like instance is selected with a subcarrier spacing of 15 kHz and N = 300 subcarriers in a 5 MHz bandwidth [4]. The frequency axis is normalized to the spacing 2π/T. The data subcarrier indices run from −150 to 150, while the direct-current carrier ω = 0 is not employed. To illustrate the OOB power reduction, the PSD is obtained by computing the power of the DFT coefficients of the time-domain OFDM signal over a time span and averaging over thousands of symbols. The frequency-domain oversampling rate is eight, and QPSK modulation is employed. The simulated systems are mainly ZP-OFDM, unless noted otherwise.

A. Sidelobe Suppression Performance

In the first experiment, the OOB reduction performance is compared among different precoding technologies, with the number of reserved subcarriers selected as R = 2. The SSOP [19], orthogonal projection (OP) [16], optimal orthogonal precoding (OOP) [21], and CC [8] methods are used for comparison.
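The PSD measurement described above (averaging the squared DFT magnitude of many symbols, with frequency-domain oversampling) can be sketched at toy scale; the parameters below are illustrative, not the paper's values, and no precoding is applied:

```python
import numpy as np

rng = np.random.default_rng(4)
N, OS, n_sym = 64, 8, 200        # toy scale; the paper uses N = 300 and OS = 8
active = np.arange(16)           # occupied subcarriers; the rest are left empty

# PSD estimate: average |DFT|^2 of oversampled time-domain OFDM symbols
# carrying random QPSK data; zero-padding in time gives spectral oversampling.
psd = np.zeros(N * OS)
for _ in range(n_sym):
    d = np.zeros(N, dtype=complex)
    d[active] = (rng.choice([-1.0, 1.0], active.size)
                 + 1j * rng.choice([-1.0, 1.0], active.size)) / np.sqrt(2)
    x = np.fft.ifft(d) * N                  # time-domain symbol (unit-power carriers)
    psd += np.abs(np.fft.fft(x, N * OS)) ** 2
psd /= n_sym

inband = psd[active * OS].mean()                    # bins at the active carriers
oob = psd[np.arange(32, 48) * OS + OS // 2].mean()  # bins far outside the band
assert inband > 10 * oob    # sidelobes are well below the in-band power
```

Even without precoding the distant sidelobes sit far below the in-band power; the precoding methods compared in this section push them down much further.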
The OOB power in the region far from the mainlobe is lower than Case A. However, the power on the edge of mainlobe decreases slowly. In Fig. 7, the Case B is selected, 1 a N  and 0 1.5   . The right-side suppression is presented by choosing a ω and b ω in the right side of the mainlobe. The distance between two adjacent frequencies in a ω is five when 1 R  . As the symmetry presented in (5), (9) and (14), the difference between adopting the OOB frequency ω and ω to design the precoding matrix is only in the second step of SVD. Therefore, the power on the left side OOB region is also reduced, while the power emission is higher than Submitted on the right side. The suppression performance is improved by increasing the number of revered carriers. Although the simulation is not illustrated, this property is also presented in the case of 2 a N  by adjusting R . In cognitive radio system, the secondary users access the spectrum hole dynamically. In addition, carrier aggregation (CA) [31] is a critical technology in LTE system, which enable a user to employ noncontiguous subbands. Therefore, in Fig. 8 A comparison of the BER performance using spectral precoding techniques is shown in Fig.9. The condition number constraint is 0 2   for the JSP. The optimized region or zero-forcing frequencies for OOP, SSOP and OP method is same with the JSP method. As illustrated in Fig. 4, the sidelobe suppression Submitted 18 performance by OOP, SSOP and OP method is similar. The difference is that OOP scheme introduces no quality loss in the receiver and the OP method leads to large error as presented in Fig. 9. The SSOP method improves the receive quality by adopting a reserved subcarrier to decode the distorted signal. The BER quality loss by the JSP method is not notable. In Fig. 10 In Fig. 11, the results demonstrate that the BER performance of 1 a N  is better than 2 a N  under same constraint. 
This is why we select N_a as small as possible to maintain the receive quality. In Fig. 12, the receive quality is presented with N_a = 2 and R = 2 reserved carriers. The results show that the BER performance decreases as κ_0 is relaxed, while the suppression performance improves, as shown in Figs. 5-6. For example, the power in the OOB region can be reduced by nearly 40 dB when κ_0 = 10, while the SNR loss is less than 1 dB at BER = 10^{-7}. When the condition number of the precoding matrix is small, such as κ_0 = 1.5 in Fig. 12, the BER performance is slightly better than without precoding: because the number of modulated carriers per symbol is extended from N to M, the receive quality benefits from frequency diversity. However, this property is not notable when κ_0 is large. In Fig. 13, the parameters are the same as in Fig. 12. Compared with no precoding, the BER performance through the fading channel is similar to the AWGN channel when the SNR is small; the receive quality decreases as κ_0 is relaxed, but with increasing SNR the BER curve converges to that without precoding.

Conclusions

In this paper, a joint spectral precoding (JSP) method is proposed to reduce the OOB power emission in OFDM-based systems, and an iterative algorithm is given to obtain the precoding matrix. We also convert the BER performance constraint into a condition number constraint on the precoding matrix. As summarized in Table 1 and shown in the simulations, the proposed method achieves an appropriate balance between suppression performance and spectral efficiency under a receive-quality constraint, and the desired spectrum envelope can be obtained by parameter configuration. Therefore, this method is flexible for spectrum shaping in both conventional OFDM systems and non-continuous multi-carrier cognitive radio networks.

Fig. 1 System diagram of spectrally precoded OFDM.
P  and () C  are not the PSD presentation of the OFDM signal and the frequency response CV is a magnitude response matrix of the K optimized frequency points. The elements in o C is computed by (5). We then use the SVD operation to minimize the power leakage in the optimized region. The problem is determined is the i-th column of c V . The matrix o P of size MN  is composed of the last N columns in c V . o P is an orthogonal matrix that T o o N  P P I , where N I is a unit matrix of size NN  . The original data d is mapped to o P d , while the length is extended from N to M . The average power of each symbol of o Pdto the nullspace of b C rather than the nullspace of bo CP. If the nullspace of bo CP is selected, the JSP method turns to the traditional orthogonal projection method, which introduces large deterioration in the receiver. Furthermore, after the operations in the last two steps, the precoding matrix P does not achieve the goal in the first step of mapping d to the nullspace of a C . In addition, after the operation in the third step, P also does not achieve the goal of minimizing the power leakage in the optimized region in the second step.Submitted 10 . The results of through Rayleigh channel are averaged over 2000 realizations. Two Rayleigh channel is selected: 3GPP extended pedestrian A (EPA) model [29], whose excess tap delay = (0 30 70 90 110 190 410)ns with the relative power = (0 -1 -2 -3 -8 -17.2 -20.8)dB, and a ten-tap Rayleigh block fading channel with exponentially decaying powers set As illustrated in Fig. 3, the receive performance is better through the AWGN channel than the Rayleigh fading channel. Compared to the transmission without precoding, the condition number of Q linearly increases with condition number constraint 0  in AWGN channel, but decrease in the Rayleigh fading channel when the value of 0  is small. 
The zero-forcing equalizer is used for AWGN channel and fading channel respectively by : Single-side suppression Na=2: Double-side suppression ↓: Negative correlation; ↑: Positive correlation; -: Weak or no effect. N and Case is selected according to the power spectrum envelope property. (b.) R is chosen according to sidelobe suppression performance and spectral efficiency. (c.) 0  is the given condition number according to the BER quality constraint. If Case A is required, we fix a ω far from the mainlobe and adjust b ω . The process to solve (20) is given in the algorithm. 1. mainlobe. If the number of reserved carriers of R is large, a little adjusting of Fig. 5 5Quicker power reduction on the edge of the mainlobe (Case A) with different condition number constraint. Fig. 6 . 6Lower power leakage far from the mainlobe (Case B) with different condition number constraint. Obviously, the suppression performance is improved by relaxing the condition number constraint while the distance between b ω and mainlobe Fig. 7 7Single-side suppression with different number of revered carriers, N a =1. Fig. 8 8PSD of Multi-band OFDM signal. Fig. 12 12BER performance with difference condition number constraint through AWGN channel. Fig. 13 13BER performance with difference condition number constraint through 3GPP EPA fading channel. Where d is a vector of size M , derived from the original data symbol 1 N d by adding zero in the location of the reserved carriers, i.e., ˆ[ 0, 0,... ,...0, 0] ). j is the index of the carriers. The solution of (8) is equivalent to map d to the nullspace of a C , In[27], the orthogonal projector a P mapping vector onto a is a unit matrix of size MM  . This projector has a property that the precoded data vector â Pd,1 [ ,... ,... ] iN a a a a a     ω , 0 i a   or 1 i aM    ) are chosen to achieve ˆ, aa  C P d 0 (8) TT  dd. 
a C is the magnitude response matrix of size a NM  , whose element ( , ) ( ) i a j a C i j C   computed by (5 C ( a  C is the nullspace of a C ) along a C is given by  data carriers reserved carriers zero-forcing frequency optimized region … … … Submitted 8 1 ( ) , TT a M a a a a   P I C C C C (9) where M I in the nullspace of a C , is closest to d as ˆˆm in . a a a   P d d P d C OOB region, far from or close to the mainlobe. The one far from the mainlobe is fixed, for the amplitude response is weak by(5). The one closed to the mainlobe is used to maintain the receive quality by adjusting its location. In the SVD operation, in order to effectively abandon the largestSubmitted 1 1 , , TT HH AWGN channel Fading channel        (P P) P s d= (Q Q) Q s    (19) where (.) H and d  denotes conjugate transpose and the decoded data respectively. In this part, we develop an algorithm to obtain P under a condition number constraint by selecting the frequency points. In order to keep the receive performance decreases slightly, the number of zero- forcing frequency points in the first orthogonal projection is chosen as small as possible. In practice, we select 1 a N  for single-side suppression or 2 a N  for double-side suppression to keep the spectrum symmetric. If the special case of 1 R  is selected, then we fix 1 a N  . The envelope of the precoded PSD curve is mainly determined by the outside orthogonal projection. Thus, b NR  is selected for high suppression performance. The two group zero-forcing frequencies a ω and b ω are arranged in the different R singular values in c Σ , K should larger or equal to R . We select ob  ω ω for simple implementation and KR  . Then the variables for obtaining P are only the group closed to the mainlobe. Thus, we change these frequency points to achieve 0 maxCon( ) . . Con( ) , st   P P P P , the PSD of two non-contiguous subbands occupied by one user or two is presented. 
The condition number constraint is $\kappa_0=10$ for all the cases. The number of carriers of each subband is 150. The design of the precoding matrix for the two users is independent, while the data transmitted by one user through two subbands is dependent. The results indicate that the proposed scheme is also suitable for multiple users transmitting through non-contiguous subbands.
Fig. 9. BER performance with different spectral precoding techniques through the AWGN channel.
Fig. 10. BER performance with the same condition number constraint through the AWGN channel, $N_a=1$.
B. BER performance
Here the condition number constraint is fixed as $\kappa_0=10$, and single-side suppression is selected as $N_a=1$. The results illustrate that the receive quality is approximately identical when the condition number and $N_a$ are fixed, although both the number of reserved carriers $R$ and the Case are different. This property indicates that if $N_a$ has been selected according to single-side or double-side suppression, the BER performance constraint can be converted to the condition number constraint in practice.
Fig. 11. BER performance with different condition number constraints and $N_a$ through the AWGN channel, $R=2$.
References
[1] Xing, Y.P., Chandramouli, R., Mangold, S., and Shankar, N.S., 'Dynamic Spectrum Access in Open Spectrum Wireless Networks', IEEE Journal on Selected Areas in Communications, 2006, 24, (3), pp. 626-637.
[2] Zhao, Q. and Sadler, B.M., 'A Survey of Dynamic Spectrum Access', IEEE Signal Processing Magazine, 2007, 24, (3), pp. 79-89.
[3] Mahmoud, H.A., Yucek, T., and Arslan, H., 'OFDM for Cognitive Radio: Merits and Challenges', IEEE Wireless Communications, 2009, 16, (2), pp. 6-14.
[4] Dahlman, E., Parkvall, S., and Skold, J., 4G: LTE/LTE-Advanced for Mobile Broadband, (Academic Press, 2013).
[5] Faulkner, M., 'The Effect of Filtering on the Performance of OFDM Systems', IEEE Transactions on Vehicular Technology, 2000, 49, (5), pp. 1877-1884.
[6] Lin, Y.P. and Phoong, S.M., 'Window Designs for DFT-Based Multicarrier Systems', IEEE Transactions on Signal Processing, 2005, 53, (3), pp. 1015-1024.
[7] Schmidt, J.F., Costas-Sanz, S., and Lopez-Valcarce, R., 'Choose Your Subcarriers Wisely: Active Interference Cancellation for Cognitive OFDM', IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2013, 3, (4), pp. 615-625.
[8] Brandes, S., Cosovic, I., and Schnell, M., 'Reduction of Out-of-Band Radiation in OFDM Systems by Insertion of Cancellation Carriers', IEEE Communications Letters, 2006, 10, (6), pp. 420-422.
[9] Cosovic, I., Brandes, S., and Schnell, M., 'Subcarrier Weighting: A Method for Sidelobe Suppression in OFDM Systems', IEEE Communications Letters, 2006, 10, (6), pp. 444-446.
[10] Cosovic, I. and Mazzoni, T., 'Suppression of Sidelobes in OFDM Systems by Multiple-Choice Sequences', European Transactions on Telecommunications, 2006, 17, (6), pp. 623-630.
[11] Li, D., Dai, X.H., and Zhang, H., 'Sidelobe Suppression in NC-OFDM Systems Using Constellation Adjustment', IEEE Communications Letters, 2009, 13, (5), pp. 327-329.
[12] Liu, S., Li, Y., Zhang, H., and Liu, Y., 'Constellation Expansion-Based Sidelobe Suppression for Cognitive Radio Systems', IET Communications, 2013, 7, (18), pp. 2133-2140.
[13] Mahmoud, H.A. and Arslan, H., 'Sidelobe Suppression in OFDM-Based Spectrum Sharing Systems Using Adaptive Symbol Transition', IEEE Communications Letters, 2008, 12, (2), pp. 133-135.
[14] Xu, R., Chen, M., Zhang, J., Wu, B., and Wang, H., 'Spectrum Sidelobe Suppression for Discrete Fourier Transformation-Based Orthogonal Frequency Division Multiplexing Using Adjacent Subcarriers Correlative Coding', IET Communications, 2012, 6, (11), pp. 1374-1381.
[15] Huang, X., Zhang, J.A., and Guo, Y.J., 'Out-of-Band Emission Reduction and a Unified Framework for Precoded OFDM', IEEE Communications Magazine, 2015, 53, (6), pp. 151-159.
[16] van de Beek, J. and Berggren, F., 'N-Continuous OFDM', IEEE Communications Letters, 2009, 13, (1), pp. 1-3.
[17] Zheng, Y.M., Zhong, J., Zhao, M.J., and Cai, Y.L., 'A Precoding Scheme for N-Continuous OFDM', IEEE Communications Letters, 2012, 16, (12), pp. 1937-1940.
[18] van de Beek, J., 'Sculpting the Multicarrier Spectrum: A Novel Projection Precoder', IEEE Communications Letters, 2009, 13, (12), pp. 881-883.
[19] Zhang, J.A., Huang, X.J., Cantoni, A., and Guo, Y.J., 'Sidelobe Suppression with Orthogonal Projection for Multicarrier Systems', IEEE Transactions on Communications, 2012, 60, (2), pp. 589-599.
[20] Zhou, X.W., Li, G.Y., and Sun, G.L., 'Multiuser Spectral Precoding for OFDM-Based Cognitive Radio Systems', IEEE Journal on Selected Areas in Communications, 2013, 31, (3), pp. 345-352.
[21] Ma, M., Huang, X.J., Jiao, B.L., and Guo, Y.J., 'Optimal Orthogonal Precoding for Power Leakage Suppression in DFT-Based Systems', IEEE Transactions on Communications, 2011, 59, (3), pp. 844-853.
[22] Tom, A., Sahin, A., and Arslan, H., 'Mask Compliant Precoder for OFDM Spectrum Shaping', IEEE Communications Letters, 2013, 17, (3), pp. 447-450.
[23] Muquet, B., Wang, Z.D., Giannakis, G.B., de Courville, M., and Duhamel, P., 'Cyclic Prefixing or Zero Padding for Wireless Multicarrier Transmissions?', IEEE Transactions on Communications, 2002, 50, (12), pp. 2136-2148.
[24] Lu, H., Nikookar, H., and Chen, H., 'On the Potential of ZP-OFDM for Cognitive Radio', Proc. WPMC'09, 2009, pp. 7-10.
[25] Van Waterschoot, T., Le Nir, V., Duplicy, J., and Moonen, M., 'Analytical Expressions for the Power Spectral Density of CP-OFDM and ZP-OFDM Signals', IEEE Signal Processing Letters, 2010, 17, (4), pp. 371-374.
[26] Wu, T.W. and Chung, C.D., 'Spectrally Precoded DFT-Based OFDM and OFDMA with Oversampling', IEEE Transactions on Vehicular Technology, 2014, 63, (6), pp. 2769-2783.
[27] Meyer, C.D., Matrix Analysis and Applied Linear Algebra, (SIAM, 2000).
[28] Tong, J., Guo, Q.H., Tong, S., Xi, J.T., and Yu, Y.G., 'Condition Number-Constrained Matrix Approximation with Applications to Signal Estimation in Communication Systems', IEEE Communications Letters, 2014, 21, (8), pp. 990-993.
[29] 3GPP TS 36.141: 'E-UTRA Base Station (BS) Conformance Testing', 2015.
[30] Aubry, A., De Maio, A., Pallotta, L., and Farina, A., 'Maximum Likelihood Estimation of a Structured Covariance Matrix with a Condition Number Constraint', IEEE Transactions on Signal Processing, 2012, 60, (6), pp. 3004-3021.
[31] Yuan, G.X., Zhang, X., Wang, W.B., and Yang, Y., 'Carrier Aggregation for LTE-Advanced Mobile Communication Systems', IEEE Communications Magazine, 2010, 48, (2), pp. 88-93.
[]
[ "POINTS OF ORDER TWO ON THETA DIVISORS", "POINTS OF ORDER TWO ON THETA DIVISORS" ]
[ "Valeria Ornella ", "Gian Pietro Pirola " ]
[]
[]
We give a bound on the number of points of order two on the theta divisor of a principally polarized abelian variety A. When A is the Jacobian of a curve C the result can be applied in estimating the number of effective square roots of a fixed line bundle on C.
10.4171/rlm/630
[ "https://arxiv.org/pdf/1202.1517v1.pdf" ]
119,317,343
1202.1517
618560d734ce6c1a12ef117bd564439e3872c572
POINTS OF ORDER TWO ON THETA DIVISORS 7 Feb 2012 Valeria Ornella Gian Pietro Pirola
We give a bound on the number of points of order two on the theta divisor of a principally polarized abelian variety $A$. When $A$ is the Jacobian of a curve $C$ the result can be applied in estimating the number of effective square roots of a fixed line bundle on $C$.
Introduction
In this paper we give an upper bound on the number of 2-torsion points lying on a theta divisor of a principally polarized abelian variety. Given any principally polarized abelian variety $A$ of dimension $g$ and symmetric theta divisor $\Theta \subset A$, $\Theta$ contains at least $2^{g-1}(2^g-1)$ points of order two, the odd theta characteristics. Moreover, in [Mum66] and [Igu72, Chapter IV, Section 5] it is proved that $\Theta$ cannot contain all points of order two on $A$. In this work we use the projective representation of the theta group to prove the following: given a principally polarized abelian variety $A$, any translate $t_a^*\Theta$ of a theta divisor $\Theta \subset A$ contains at most $2^{2g}-2^g$ points of order 2 ($2^{2g}-(g+1)2^g$ if $t_a^*\Theta$ is irreducible and not symmetric). Our bound is far from being sharp and we conjecture that the right estimate should be $2^{2g}-3^g$, as in the case of a product of elliptic curves. When $A$ is the Jacobian of a curve $C$ the result can be applied in estimating the number of effective square roots of a fixed line bundle on $C$ (cf. Section 2).
Main result
In this section we prove our main result.
Theorem 1.1. Let $A$ be a principally polarized abelian variety of dimension $g$ and let $\Theta$ be a symmetric theta divisor. (1) For each $a \in A$ there are at most $2^{2g}-2^g$ points of order two lying on $t_a^*\Theta$. (2) Let $a \in A$ and assume that $\Theta$ is irreducible and $t_a^*\Theta$ is not symmetric with respect to the origin. Then there are at most $2^{2g}-(g+1)2^g$ points of order two lying on $t_a^*\Theta$.
Proof.
Denote by $(K, \langle\cdot,\cdot\rangle)$ the group of 2-torsion points on $A$ with the perfect pairing induced by the polarization. Let $\{a_1, \dots, a_g, b_1, \dots, b_g\}$ be a basis of $K$ over the field of order two such that
$$\langle a_i, b_j\rangle = \delta_{ij}, \quad \langle a_i, a_j\rangle = 0, \quad \langle b_i, b_j\rangle = 0,$$
and let $H := \langle a_1, \dots, a_g\rangle$ be the subgroup of $K$ generated by the elements $a_1, \dots, a_g$. Consider the projective morphism $\varphi: A \to \mathbb P^{2^g-1}$ associated to the divisor $2\Theta$. By the construction of the projective representation of the theta group $K(2\Theta)$ (see [Mum66], [Kem91, Chapter 4] and [Kem89]), we know that the elements of $\varphi(H)$ are a basis of the projective space. In the same way, the images of the elements of a coset $H_b$ of $H$ in $K$ generate the projective space $\mathbb P^{2^g-1}$.
Suppose by contradiction that there exists a subset $S \subset K$ such that all points of $S$ lie on $t_a^*\Theta$ and $|S| > 2^{2g}-2^g$. By the previous argument, since $H_b \subset S$ for some $b$, the points of $\varphi(S)$ generate the entire projective space $\mathbb P^{2^g-1}$. On the other hand, by the Theorem of the Square ([Mum08, Chapter II, Section 6, Corollary 4]), $t_a^*\Theta + t_{-a}^*\Theta \equiv 2\Theta$. It follows that the points of $\varphi(S)$ lie on a hyperplane of $\mathbb P^{2^g-1}$. This proves (1).
Now we prove the second part. Suppose by contradiction that there exists a subset $S \subset K$ such that all points of $S$ lie on $t_a^*\Theta$ and $|S| > 2^{2g}-(g+1)2^g$. We claim that
(*) the points in $\varphi(S)$ lie on a $(2^g-g-2)$-plane in $\mathbb P^{2^g-1}$.
Given a point $\varepsilon \in S$, it holds also $\varepsilon \in t_{-a}^*\Theta$. Thus $S \subset t_a^*\Theta \cap t_{-a}^*\Theta$. If $t_a^*\Theta$ is not symmetric and irreducible, $t_a^*\Theta \cap t_{-a}^*\Theta$ has codimension 2 in $A$ and we can consider the natural exact sequence
$$0 \to \mathcal O_A(-2\Theta) \to \mathcal O_A(-t_{-a}^*\Theta) \oplus \mathcal O_A(-t_a^*\Theta) \to \mathcal I_{t_a^*\Theta \cap t_{-a}^*\Theta} \to 0;$$
by tensoring it with $\mathcal O_A(2\Theta)$ we get
$$0 \to \mathcal O_A \to \mathcal O_A(t_a^*\Theta) \oplus \mathcal O_A(t_{-a}^*\Theta) \to \mathcal I_{t_a^*\Theta \cap t_{-a}^*\Theta} \otimes \mathcal O_A(2\Theta) \to 0.$$
Passing to the corresponding sequence on the global sections, we have
$$0 \to H^0(A, \mathcal O_A) \to H^0(A, \mathcal O_A(t_a^*\Theta)) \oplus H^0(A, \mathcal O_A(t_{-a}^*\Theta)) \to H^0(\mathcal I_{t_a^*\Theta \cap t_{-a}^*\Theta} \otimes \mathcal O_A(2\Theta)) \to H^1(A, \mathcal O_A) \to 0, \qquad (2)$$
since, by the Kodaira vanishing theorem (see e.g. [GH94, Chapter 1, Section 2]), $H^1(A, \mathcal O_A(t_a^*\Theta)) = H^1(A, \mathcal O_A(t_{-a}^*\Theta)) = 0$. It follows that $\dim H^0(\mathcal I_{t_a^*\Theta \cap t_{-a}^*\Theta} \otimes \mathcal O_A(2\Theta)) = g+1$. Thus the points in $\varphi(t_a^*\Theta \cap t_{-a}^*\Theta)$ lie on a $(2^g-g-2)$-plane of $\mathbb P^{2^g-1}$ and the claim (*) is proved.
To conclude the proof of (2) we notice that if $|S| > 2^{2g}-(g+1)2^g$ then $|S \cap H_b| > 2^g-(g+1)$ for some coset $H_b$ of $H$ (see (1)). Then it follows that $\varphi(S)$ contains at least $2^g-g$ independent points and we get a contradiction.
Remark 1.2. One might expect the right bound to be $2^{2g}-3^g$ and that this is realized only in the case of a product of elliptic curves.
Remark 1.3. The argument of Theorem 1.1 can also be used to obtain a bound on the number of $n$-torsion points (with $n > 2$) lying on a theta divisor.
Applications
In this section we apply Theorem 1.1 to the case of Jacobians. This gives a generalization of [MP, Proposition 2.5].
Proposition 2.1. Let $C$ be a curve of genus $g$ and $M$ be a line bundle of degree $d \le g-1$. Given an integer $k \le g-1-d$, for each $L \in \mathrm{Pic}^{2k}(C)$ there are at least $2^g$ line bundles $\eta \in \mathrm{Pic}^k(C)$ such that $\eta^2 \simeq L$ and $h^0(\eta \otimes M) = 0$.
Proof. We prove the statement for $M \simeq \mathcal O_C$ and $k = g-1$. The general case follows from this by replacing $L$ with $M^2 \otimes L \otimes \mathcal O_C(p)^{2n}$, where $p$ is an arbitrary point of $C$ and $n := g-1-k-d$. Denote by $\Theta$ the divisor of effective line bundles of degree $g-1$ in $\mathrm{Pic}^{g-1}(C)$. Given the morphism
$$m_2: \mathrm{Pic}^{g-1}(C) \to \mathrm{Pic}^{2g-2}(C), \quad \eta \mapsto \eta^2,$$
we want to prove that $|m_2^{-1}(L) \cap \Theta| \le 2^{2g}-2^g$. Let $\alpha \in m_2^{-1}(L)$; we have $m_2^{-1}(L) = \{\alpha \otimes \sigma \ \text{s.t.}\ \sigma^2 = \mathcal O_C\}$.
If $|m_2^{-1}(L) \cap \Theta| > 2^{2g}-2^g$, then there are more than $2^{2g}-2^g$ points of order two lying on a translate of a symmetric theta divisor of $J(C)$ and, by (1) of Theorem 1.1, we get a contradiction.
Remark 2.2. If we apply Proposition 2.1 to $M = \mathcal O_C$, $L = \omega_C$, we get that on a curve of genus $g$ there are at most $2^{2g}-2^g$ effective theta characteristics. We notice that when $g = 2$ they are the 6 line bundles of type $\mathcal O_C(p)$ where $p$ is a Weierstrass point. When $g = 3$ and $C$ is not hyperelliptic, they correspond to the 28 bi-tangent lines to the canonical curve.
Corollary 2.3. Let $C$ be a curve of genus $g$ and $M_1, \dots, M_N$ be a finite number of line bundles of degree $d \le g-1$. Given an integer $k \le g-1-d$, if $\eta$ is a generic line bundle of degree $k$ such that $h^0(\eta^2) > 0$, then $h^0(\eta \otimes M_i) = 0$ for all $i = 1, \dots, N$.
Proof. Let $\Lambda := \{\eta \in \mathrm{Pic}^k(C) : h^0(\eta^2) > 0\}$ and, for each $i = 1, \dots, N$, consider its closed subset $\Lambda_i := \{\eta \in \Lambda : h^0(M_i \otimes \eta) > 0\}$. We remark that $\Lambda$ is a connected $2^{2g}$-étale covering of the image of the $2k$-th symmetric product of $C$ in $\mathrm{Pic}^{2k}(C)$. By Proposition 2.1, for each effective $L \in \mathrm{Pic}^{2k}(C)$ there exists $\eta \in \Lambda \setminus \Lambda_i$ such that $\eta^2 \simeq L$. It follows that $\Lambda_i$ is a proper subset of $\Lambda$. Since $\Lambda$ is irreducible, the set $\bigcup_{i=1}^N \Lambda_i = \{\eta \in \mathrm{Pic}^k(C) : h^0(M_i \otimes \eta) > 0 \text{ for some } i\}$ is also a proper closed subset of $\Lambda$.
Date: February 8, 2012. 2010 Mathematics Subject Classification: 14K25. This work has been partially supported by 1) FAR 2010 (PV) "Varietà algebriche, calcolo algebrico, grafi orientati e topologici", 2) INdAM (GNSAGA), 3) PRIN 2009 "Moduli, strutture geometriche e loro applicazioni".
References
[GH94] P. Griffiths and J. Harris. Principles of algebraic geometry. Wiley Classics Library. John Wiley & Sons Inc., New York, 1994. Reprint of the 1978 original.
[Igu72] J. Igusa. Theta functions. Die Grundlehren der mathematischen Wissenschaften, Band 194. Springer-Verlag, New York, 1972.
[Kem89] G. R. Kempf. The addition theorem for abstract theta functions. In Algebraic geometry and complex analysis (Pátzcuaro, 1987), volume 1414 of Lecture Notes in Math., pages 1-14. Springer, Berlin, 1989.
[Kem91] G. R. Kempf. Complex abelian varieties and theta functions. Universitext. Springer-Verlag, Berlin, 1991.
[MP] V. Marcucci and G. P. Pirola. Generic Torelli theorem for Prym varieties of ramified coverings. Compositio Math., to appear, arXiv:1010.4483v3.
[Mum66] D. Mumford. On the equations defining abelian varieties. I. Invent. Math., 1:287-354, 1966.
[Mum08] D. Mumford. Abelian varieties, volume 5 of Tata Institute of Fundamental Research Studies in Mathematics. Published for the Tata Institute of Fundamental Research, Bombay, 2008.
Dipartimento di Matematica "F. Casorati", Università di Pavia, via Ferrata 1, 27100 Pavia, Italy. E-mail addresses: [email protected], [email protected]
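As a quick arithmetic aside (added here, not part of the paper), one can compare the two bounds of Theorem 1.1 with the conjectured sharp value $2^{2g}-3^g$ and with the number $2^{g-1}(2^g-1)$ of odd theta characteristics that every symmetric theta divisor contains; the helper `odd_theta` is ad hoc.

```python
# Compare Theorem 1.1's bounds with the conjectured value and with the
# count of odd theta characteristics.
def odd_theta(g):
    return 2 ** (g - 1) * (2 ** g - 1)

for g in range(2, 8):
    b1 = 2 ** (2 * g) - 2 ** g              # Theorem 1.1 (1)
    b2 = 2 ** (2 * g) - (g + 1) * 2 ** g    # Theorem 1.1 (2)
    conj = 2 ** (2 * g) - 3 ** g            # conjectured sharp bound
    # any bound for the symmetric divisor must dominate the number of
    # odd theta characteristics; (2) applies only to non-symmetric
    # translates, and indeed b2 drops below odd_theta(g) for g = 2, 3
    assert odd_theta(g) <= conj <= b1
    assert b2 < b1
```

The check also illustrates why part (2) cannot hold for the symmetric divisor itself when $g$ is small: for $g = 2$ the bound $2^{2g}-(g+1)2^g = 4$ is already below the 6 odd theta characteristics.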
[]
[ "A TWISTED FIRST HOMOLOGY GROUP OF THE HANDLEBODY MAPPING CLASS GROUP", "A TWISTED FIRST HOMOLOGY GROUP OF THE HANDLEBODY MAPPING CLASS GROUP" ]
[ "Masatoshi Sato " ]
[]
[]
Let H g be a 3-dimensional handlebody of genus g. We determine the twisted first homology group of the mapping class group of H g with coefficients in the first integral homology group of the boundary surface ∂H g for g ≥ 2.
10.18910/67001
[ "https://arxiv.org/pdf/1502.07048v1.pdf" ]
118,267,236
1502.07048
a87f86d4f178c5e0b672cc8744f92980d96d3550
A TWISTED FIRST HOMOLOGY GROUP OF THE HANDLEBODY MAPPING CLASS GROUP 25 Feb 2015 Masatoshi Sato
Let $H_g$ be a 3-dimensional handlebody of genus $g$. We determine the twisted first homology group of the mapping class group of $H_g$ with coefficients in the first integral homology group of the boundary surface $\partial H_g$ for $g \ge 2$.
Introduction
Let $H_g$ be a 3-dimensional handlebody of genus $g$, and $\Sigma_g$ the boundary surface $\partial H_g$. We denote by $\mathcal H_g$ and $\mathcal M_g$ the mapping class group of $H_g$ and of the boundary surface $\Sigma_g$, respectively. These are the groups of isotopy classes of orientation-preserving homeomorphisms of $\Sigma_g$ and $H_g$. Let $D$ be a closed 2-disk in the boundary $\Sigma_g$ of the handlebody, and pick a point $*$ in $\mathrm{Int}\,D$. Let us denote by $\mathcal H_g^*$ and $\mathcal H_{g,1}$ the groups of the isotopy classes of orientation-preserving homeomorphisms of $H_g$ fixing $*$ and $D$ pointwise, respectively. We also denote by $\mathcal M_g^*$ and $\mathcal M_{g,1}$ the groups of the isotopy classes of orientation-preserving homeomorphisms of $\Sigma_g$ fixing $*$ and $D$ pointwise, respectively. We use integral coefficients for homology groups unless specified otherwise throughout the paper.
In the cases of the mapping class groups $\mathcal M_g^*$ and $\mathcal M_g$ of a surface $\Sigma_g$, Morita [12, Corollary 5.4] determined the first homology group with coefficients in the first integral homology group of the surface. Morita [13, Remark 6.3] extended the first Johnson homomorphism to a crossed homomorphism $\mathcal M_g^* \to \frac{1}{2}\Lambda^3 H_1(\Sigma_g)$, and showed that the contraction of this crossed homomorphism gives isomorphisms $H_1(\mathcal M_g^*; H_1(\Sigma_g)) \cong \mathbb Z$ and $H_1(\mathcal M_g; H_1(\Sigma_g)) \cong \mathbb Z/(2g-2)\mathbb Z$ when $g \ge 2$. For twisted homology groups of the mapping class groups of nonorientable surfaces, see Stukow [18]. In the cases of the automorphism group $\mathrm{Aut}\,F_n$ and the outer automorphism group $\mathrm{Out}\,F_n$ of a free group $F_n$ of rank $n$, Satoh [17] computed $H_1(\mathrm{Aut}\,F_n; H_1(F_n))$ and $H_1(\mathrm{Out}\,F_n; H_1(F_n))$ for $n \ge 2$.
Kawazumi [8] extended the first Andreadakis-Johnson homomorphism to a crossed homomorphism $\mathrm{Aut}\,F_n \to H_1(F_n) \otimes H_1(F_n)^{\otimes 2}$. The contraction of this crossed homomorphism also gives isomorphisms $H_1(\mathrm{Aut}\,F_n; H_1(F_n)) \cong \mathbb Z$ and $H_1(\mathrm{Out}\,F_n; H_1(F_n)) \cong \mathbb Z/(n-1)\mathbb Z$.
In this paper, we compute the twisted first homology groups of $\mathcal H_g$ and $\mathcal H_g^*$ with coefficients in the first integral homology group of the boundary surface $\Sigma_g$. Note that the restrictions of homeomorphisms of $H_g$ to $\Sigma_g$ induce an injective homomorphism $\mathcal H_g \to \mathcal M_g$, and we treat the group $\mathcal H_g$ as a subgroup of $\mathcal M_g$. The following are the main theorems of this paper.
Theorem 1.1.
$$H_1(\mathcal H_g; H_1(\Sigma_g)) \cong \begin{cases} \mathbb Z/(2g-2)\mathbb Z & \text{if } g \ge 4, \\ \mathbb Z/4\mathbb Z \oplus \mathbb Z/2\mathbb Z & \text{if } g = 3, \\ (\mathbb Z/2\mathbb Z)^2 & \text{if } g = 2. \end{cases}$$
Furthermore, when $g \ge 4$, the homomorphism $H_1(\mathcal H_g; H_1(\Sigma_g)) \to H_1(\mathcal M_g; H_1(\Sigma_g))$ induced by the inclusion is an isomorphism. When $g = 2, 3$, this homomorphism is surjective and the kernel is isomorphic to $\mathbb Z/2\mathbb Z$.
Theorem 1.2.
$$H_1(\mathcal H_{g,1}; H_1(\Sigma_g)) \cong H_1(\mathcal H_g^*; H_1(\Sigma_g)) \cong \begin{cases} \mathbb Z & \text{if } g \ge 4, \\ \mathbb Z \oplus \mathbb Z/2\mathbb Z & \text{if } g = 2, 3. \end{cases}$$
Furthermore, when $g \ge 4$, the homomorphism $H_1(\mathcal H_g^*; H_1(\Sigma_g)) \to H_1(\mathcal M_g^*; H_1(\Sigma_g))$ induced by the inclusion is an isomorphism. When $g = 2, 3$, this homomorphism is surjective and the kernel is isomorphic to $\mathbb Z/2\mathbb Z$.
In this paper, we also study relationships between the second homology groups of $\mathcal H_g$, $\mathcal H_g^*$, and $\mathcal H_{g,1}$. The second homology group of $\mathcal M_g$ was calculated by Harer [3] when $g \ge 5$. It contains some minor mistakes, and these were corrected in [4] later. For surfaces with an arbitrary number of punctures and boundary components, see Korkmaz-Stipsicz [9]. See also Benson-Cohen [1] and Sakasai [16] for low genera. There are some results which imply that the cohomology group of the handlebody mapping class group $\mathcal H_g$ is similar to that of $\mathcal M_g$. Morita [11, Proposition 3.1] showed that the rational cohomology group of any subgroup of the mapping class group decomposes into a direct sum.
Later, Kawazumi-Morita [7, Proposition 5.2] generalized it to the cohomology group with coefficients in $A = \mathbb Z[1/(2g-2)]$. In particular, the cohomology group of the handlebody mapping class group with a puncture decomposes as
$$H^n(\mathcal H_g^*; A) \cong H^n(\mathcal H_g; A) \oplus H^{n-1}(\mathcal H_g; H_1(\Sigma_g; A)) \oplus H^{n-2}(\mathcal H_g; A).$$
Hatcher-Wahl [5] showed that the integral cohomology groups of the mapping class groups of 3-manifolds stabilize in more general settings. Hatcher also announced that the rational stable cohomology group coincides with the polynomial ring generated by the even Morita-Mumford classes. However, as far as we know, even the second integral homology group of handlebody mapping class groups has not been computed yet.
Here is the outline of our paper: In Section 2, we investigate the relationship between the second integral homology group of the handlebody mapping class group fixing a point or a 2-disk in $\Sigma_g$ pointwise and that of $\mathcal H_g$, using Theorem 1.1. In Section 3, we compute the twisted first homology group $H_1(\mathcal H_g; H_1(\Sigma_g))$ to prove Theorem 1.1 in the case when $g \ge 4$. We also compute the twisted first homology groups of $\mathcal H_g$ with coefficients in $\mathrm{Ker}(H_1(\Sigma_g) \to H_1(H_g))$ and $H_1(H_g)$. Let $\mathcal L_g$ denote the kernel of the homomorphism $\mathcal H_g \to \mathrm{Out}(\pi_1 H_g)$. The exact sequence
$$1 \to \mathcal L_g \to \mathcal H_g \to \mathrm{Out}(\pi_1 H_g) \to 1$$
induces exact sequences between their first homology groups with coefficients in $\mathrm{Ker}(H_1(\Sigma_g) \to H_1(H_g))$ and $H_1(H_g)$. Luft [10] showed that the group $\mathcal L_g$ coincides with the twist group, which is generated by Dehn twists along meridian disks. Satoh [17] determined the twisted first homology groups $H_1(\mathrm{Aut}\,F_n; H_1(F_n))$ and $H_1(\mathrm{Out}\,F_n; H_1(F_n))$. Applying Luft's and Satoh's results to the exact sequences, we can determine $H_1(\mathcal H_g; H_1(\Sigma_g))$ when $g \ge 4$. In Section 4, we review a finite presentation of the handlebody mapping class group $\mathcal H_g$ given by Wajnryb [19].
In Section 5, we compute the twisted first homology group $H_1(\mathcal H_g; H_1(\Sigma_g))$, using Wajnryb's presentation of the handlebody mapping class group $\mathcal H_g$, to prove Theorem 1.1 in the case when $g = 2, 3$. In Section 6, we prove Theorem 1.2 and also compute the twisted first homology groups of $\mathcal H_g^*$ with coefficients in $\mathrm{Ker}(H_1(\Sigma_g) \to H_1(H_g))$ and $H_1(H_g)$.
2. On the second homology of the handlebody mapping class groups fixing a point or a 2-disk pointwise
In this section, we introduce some corollaries of Theorem 1.1 which give relationships between the second homology groups of $\mathcal H_g$, $\mathcal H_g^*$ and $\mathcal H_{g,1}$.
Figure 1. A 2-disk $D$ and simple closed curves $\alpha_1, \dots, \alpha_g, \beta_1, \dots, \beta_g, \gamma$.
Let $U\Sigma_g$ denote the unit tangent bundle of $\Sigma_g$. Let $\alpha_1, \dots, \alpha_g, \beta_1, \dots, \beta_g$ be oriented smooth simple closed curves as in Figure 1, and denote their homology classes in $H_1(\Sigma_g)$ by $x_1 = [\alpha_1], \dots, x_g = [\alpha_g]$ and $y_1 = [\beta_1], \dots, y_g = [\beta_g]$. We also denote by $\gamma$ a null-homotopic smooth simple closed curve in Figure 1. There are natural lifts of $\alpha_1, \dots, \alpha_g, \beta_1, \dots, \beta_g, \gamma$ to $U\Sigma_g$, and we denote their homology classes in $H_1(U\Sigma_g)$ by $\tilde x_1, \dots, \tilde x_g, \tilde y_1, \dots, \tilde y_g, z$, respectively.
For a group $G$ and a $G$-module $M$, let us denote by $M_G$ its coinvariant, that is, the quotient of $M$ by the submodule spanned by the set $\{gm - m \mid m \in M,\ g \in G\}$.
Lemma 2.1. For $g \ge 2$, $H_1(U\Sigma_g)_{\mathcal H_g} = 0$.
Proof. For a simple closed curve $c$ in $\Sigma_g$, we denote by $t_c$ the Dehn twist along $c$. As in [6, Theorem 1B], we have $t_{\alpha_i}(\tilde y_i) = \tilde y_i + \tilde x_i$ for $i = 1, \dots, g$ (note that our notation for the lift of $c$ differs from that of [6]). Hence, we have $\tilde x_1 = \cdots = \tilde x_g = 0 \in H_1(U\Sigma_g)_{\mathcal H_g}$. Let $\delta'_i$ and $\alpha'_i$ be simple closed curves as depicted in Figure 2 for $1 \le i \le g-1$. Let us denote $h_i = t_{\delta'_i}^{-1} t_{\beta_i} t_{\alpha_{i+1}} \in \mathcal M_g$.
Since $h_i(\alpha_l) = \alpha_l$ when $l \ne i$, and $h_i(\alpha_i) = \alpha'_i$, the mapping class $h_i$ is actually an element of the handlebody mapping class group $\mathcal H_g$.
Figure 2. Simple closed curves $\delta'_i$ and $\alpha'_i$.
We obtain
$$h_i(\tilde x_i) = \tilde x_i - \tilde x_{i+1} - z \quad \text{and} \quad h_i(\tilde y_{i+1}) = \tilde y_i + \tilde y_{i+1} - z$$
for $i = 1, \dots, g-1$. Thus we have $z = \tilde y_1 = \cdots = \tilde y_{g-1} = 0 \in H_1(U\Sigma_g)_{\mathcal H_g}$. Since the rotation $r \in \mathcal H_g$ of the surface $\Sigma_g$ about a vertical line by 180 degrees maps $\tilde y_g$ to $-\tilde y_1$, we also obtain $\tilde y_g = 0$.
Using the mapping classes $h_i$ and $r$, we can also show:
Lemma 2.2. For $g \ge 2$, $\mathrm{Ker}(H_1(\Sigma_g) \to H_1(H_g))_{\mathcal H_g} = 0$.
Proposition 2.3. When $g \ge 4$, $H_2(\mathcal H_g^*) \cong H_2(\mathcal H_g) \oplus \mathbb Z$.
Proof. Let us denote the Lyndon-Hochschild-Serre spectral sequences of the forgetful exact sequences
$$1 \to \pi_1 \Sigma_g \to \mathcal M_g^* \to \mathcal M_g \to 1, \qquad 1 \to \pi_1 \Sigma_g \to \mathcal H_g^* \to \mathcal H_g \to 1$$
by $\{E^r_{p,q}\}$ and $\{\bar E^r_{p,q}\}$, respectively. By Lemma 2.1, we have $H_1(\Sigma_g)_{\mathcal M_g} = H_1(\Sigma_g)_{\mathcal H_g} = 0$; thus the $E^\infty$-terms of both spectral sequences in total degree 2 reduce to the entries $E^\infty_{0,2}$, $E^\infty_{1,1}$, $E^\infty_{2,0}$, with $E^\infty_{0,1} = 0$ (the original displays the $E^\infty$-page in a small table here). Therefore, we have a morphism of exact sequences
$$0 \to \bar E^\infty_{0,2} \to \mathrm{Ker}(H_2(\mathcal H_g^*) \to H_2(\mathcal H_g)) \to \bar E^\infty_{1,1} \to 0$$
$$0 \to E^\infty_{0,2} \to \mathrm{Ker}(H_2(\mathcal M_g^*) \to H_2(\mathcal M_g)) \to E^\infty_{1,1} \to 0$$
induced by the inclusion $\mathcal H_g^* \to \mathcal M_g^*$. It is known that $\mathrm{Ker}(H_2(\mathcal M_g^*) \to H_2(\mathcal M_g)) \cong \mathbb Z$ and $E^\infty_{0,2} = E^2_{0,2} \cong \mathbb Z$ when $g \ge 4$. It is also true when $g = 3$, as in [16, Corollary 4.9] (see also [14]). Moreover, there exists a surjective homomorphism $S_1: H_2(\mathcal M_g^*) \to \mathbb Z$ defined in [3, Section 0] which maps the fundamental class $[\Sigma_g] \in H_2(\Sigma_g) = E^\infty_{0,2}$ to $(2g-2)$ times a generator and whose restriction to $\mathrm{Ker}(H_2(\mathcal M_g^*) \to H_2(\mathcal M_g))$ is surjective. These facts show that $E^\infty_{1,1}$ is a cyclic group of order $2g-2$. Since Morita [12] showed $E^2_{1,1} = H_1(\mathcal M_g; H_1(\Sigma_g)) \cong \mathbb Z/(2g-2)\mathbb Z$ when $g \ge 2$, we obtain $E^2_{1,1} = E^\infty_{1,1}$.
When $g \ge 4$, this fact and the isomorphism $H_1(\mathcal H_g; H_1(\Sigma_g)) \cong H_1(\mathcal M_g; H_1(\Sigma_g))$ show that in the commutative diagram
$$\begin{array}{ccc} \bar E^2_{1,1} & \to & \bar E^\infty_{1,1} \\ \downarrow & & \downarrow \\ E^2_{1,1} & \to & E^\infty_{1,1} \end{array}$$
we have an isomorphism $\bar E^\infty_{1,1} \cong E^\infty_{1,1}$. As a conclusion, we obtain
$$\mathrm{Ker}(H_2(\mathcal H_g^*) \to H_2(\mathcal H_g)) \cong \mathrm{Ker}(H_2(\mathcal M_g^*) \to H_2(\mathcal M_g)) \cong \mathbb Z.$$
Consider the commutative diagram
$$0 \to \mathbb Z \to H_2(\mathcal H_g^*) \to H_2(\mathcal H_g) \to 0$$
$$0 \to \mathbb Z \to H_2(\mathcal M_g^*) \to H_2(\mathcal M_g) \to 0.$$
Since the lower exact sequence splits, we obtain $H_2(\mathcal H_g^*) \cong H_2(\mathcal H_g) \oplus \mathbb Z$.
When $g \ge 2$, Lemma 2.1 and the five-term exact sequences induced by the exact sequences
$$1 \to \pi_1 \Sigma_g \to \mathcal H_g^* \to \mathcal H_g \to 1, \qquad 1 \to \pi_1 U\Sigma_g \to \mathcal H_{g,1} \to \mathcal H_g \to 1$$
imply:
Lemma 2.4. When $g \ge 2$, $H_1(\mathcal H_{g,1}) \cong H_1(\mathcal H_g^*) \cong H_1(\mathcal H_g)$.
Remark 2.5. By Wajnryb's presentation, which we review in Section 4.1, we can compute the abelianization as follows:
$$H_1(\mathcal H_g) \cong \begin{cases} \mathbb Z \oplus \mathbb Z/2\mathbb Z & \text{if } g = 1, \\ \mathbb Z \oplus (\mathbb Z/2\mathbb Z)^2 & \text{if } g = 2, \\ \mathbb Z/2\mathbb Z & \text{if } g \ge 3. \end{cases}$$
We can also see that it is generated by $s_1 = t_{\beta_1} t_{\alpha_1}^2 t_{\beta_1}$ when $g \ge 3$. Note that Wajnryb made a mistake in his calculation of the abelianization in [19, Theorem 20] when $g = 2$.
In the following, we choose a 2-disk $D$ in the boundary $\Sigma_g$ so that it is disjoint from the simple closed curves $\alpha_1, \dots, \alpha_g, \beta_1, \dots, \beta_g$ as in Figure 1, and pick a point $*$ in $\mathrm{Int}\,D$.
Lemma 2.6. When $g \ge 3$, $H_2(\mathcal H_g^*) \cong H_2(\mathcal H_{g,1}) \oplus \mathbb Z$.
Proof. Let $\pi: \mathcal H_{g,1} \to \mathcal H_g^*$ denote the forgetful map. The Gysin exact sequence of the central extension
$$0 \to \mathbb Z \to \mathcal H_{g,1} \xrightarrow{\pi} \mathcal H_g^* \to 1$$
is written as
$$H_1(\mathcal H_g^*) \xrightarrow{\pi^!} H_2(\mathcal H_{g,1}) \to H_2(\mathcal H_g^*) \to \mathbb Z \xrightarrow{\pi^!} H_1(\mathcal H_{g,1}).$$
Recall that the Gysin homomorphism $\pi^!: H_1(\mathcal H_g^*) \to H_2(\mathcal H_{g,1})$ maps $[h]$ to $[\tilde h \mid t_{\partial D}] - [t_{\partial D} \mid \tilde h]$ in the bar resolution for $h \in \mathcal H_g^*$, where $\tilde h \in \mathcal H_{g,1}$ is the inverse image of $h$ under $\pi$.
By Lemma 2.4 and [19, Theorem 20], H 1 (H * g ) is the cyclic group of order 2 generated by s 1 when g ≥ 3. Note that we can choose a representing diffeomorphism of s 1 whose support is in a genus 1 subsurface S of Σ g − Int D. Moreover, using the lantern relation, we can obtain a 2-chain which bounds [t ∂D ] ∈ C 1 (H g,1 ) whose support is in (Σ g − Int D) − S. Thus there exists a 3-chain which bounds [s̃ 1 |t ∂D ] − [t ∂D |s̃ 1 ] ∈ C 2 (H g,1 ), where s̃ 1 = t β 1 t 2 α 1 t β 1 ∈ H g,1 . Hence, the Gysin homomorphism π ! : H 1 (H * g ) → H 2 (H g,1 ) is the zero map. Since [t ∂D ] = 0 ∈ H 1 (H g,1 ), the homomorphism π ! : Z → H 1 (H g,1 ) is also trivial. Thus we obtain the exact sequence 0 → H 2 (H g,1 ) → H 2 (H * g ) → Z → 0.

Both of the direct sum decompositions of H 2 (H * g ) in Proposition 2.3 and Lemma 2.6 are induced, up to sign, by the composition of the natural homomorphism H 2 (H * g ) → H 2 (M * g ) and S 1 : H 2 (M * g ) → Z defined in [3, Section 4]. Thus we obtain:

Corollary 2.7. When g ≥ 4, H 2 (H g,1 ) ∼ = H 2 (H g ).

3. Proof of Theorem 1.1 for g ≥ 4

In the rest of this paper, we write H for H 1 (Σ g ) and, for simplicity, denote by L the kernel of the homomorphism H 1 (Σ g ) → H 1 (H g ) induced by the inclusion. Note that H 1 (H g ) is isomorphic to H/L as an H g -module. In this section, we prove Theorem 1.1 when g ≥ 4. Luft's result on Ker(H g → Out F g ) and Satoh's result on Out F g make it much easier to determine the first homology H 1 (H g ; H) when g ≥ 4 than when g = 2, 3.

Lemma 3.1. Let g ≥ 2, and let G be a subgroup of the mapping class group M g . When the map H 1 (UΣ g ) G → H G induced by the natural projection is injective, there exists a surjective homomorphism H 1 (G; H) → Z/(2g − 2)Z.

Proof. The exact sequence 0 → Z/(2g − 2)Z → H 1 (UΣ g ) → H → 0 induces the exact sequence H 1 (G; H) → Z/(2g − 2)Z → H 1 (UΣ g ) G → H G . Thus, we obtain the surjective homomorphism H 1 (G; H) → Z/(2g − 2)Z.
Remark 3.2. The homomorphism H 1 (G; H) → Z/(2g − 2)Z is written in [12, Section 6] explicitly. This coincides with the mod (2g − 2) reduction of the contraction of the twisted homomorphism called the first Johnson homomorphism. In particular, the homomorphism H 1 (H g ; H) → H 1 (M g ; H) induced by the inclusion is surjective.

Note that the handlebody mapping class group H g satisfies the assumption of Lemma 3.1 because of Lemma 2.1. By Lemma 3.1, we obtain a lower bound on the order of H 1 (H g ; H). For a simple closed curve c in Σ g , we denote by H g (c) the subgroup of H g which preserves the curve c setwise.

Lemma 3.3. Let M be an H g -module on which L g acts trivially. Then, we have an exact sequence

M Hg (α 1 ) → H 1 (H g ; M) → H 1 (Out F g ; M Lg ) → 0.

Proof. The short exact sequence 1 → L g → H g → Out F g → 1 induces an exact sequence (3.1) H 1 (L g ; M) Hg → H 1 (H g ; M) → H 1 (Out F g ; M Lg ) → 0. Luft [10, Corollary 2.4] proved that L g is normally generated by the Dehn twists along the curves α 1 and δ in Figure 1. When g ≥ 2, the lantern relation implies that the Dehn twist t δ can be written as a product of Dehn twists along boundary curves of meridian disks. Thus, L g is normally generated by the Dehn twist along α 1 . Since L g acts on M trivially, we have H 1 (L g ; M) Hg = (H 1 (L g ) ⊗ M) Hg , and it is generated by {t α 1 ⊗ m | m ∈ M}. Since the surjective homomorphism M → H 1 (L g ; M) Hg defined by m → t α 1 ⊗ m factors through M Hg (α 1 ) , the exact sequence (3.1) and this homomorphism induce the desired exact sequence.

Lemma 3.4. (1) H 1 (L g ; H/L) Hg ∼ = 0 or Z/2Z when g ≥ 2. (2) H 1 (L g ; L) Hg = 0 when g ≥ 3, and H 1 (L 2 ; L) H 2 ∼ = 0 or Z/2Z when g = 2.

Proof. (1) Let us denote by ȳ i the image of y i under the natural homomorphism H → H 1 (H g ) ∼ = H/L induced by the inclusion.
There exists a mapping class r 1,j ∈ H g for 1 < j ≤ g which preserves α 1 setwise and satisfies r 1,j (x l ) =    −x 1 − x 2 − · · · − x j , if l = j, x l , otherwise, r 1,j (y l ) =        y l − y j , if 1 ≤ l ≤ j − 1, −y j , if l = j, y l , otherwise. See Lemma 4.3 for details. Then, we have r 1,j (t α 1 ⊗ȳ 1 ) = t α 1 ⊗ȳ 1 ∈ H 1 (L g ; H/L) Hg . Since r 1,j commutes with t α 1 , and r 1, j (ȳ 1 ) =ȳ 1 −ȳ j , we obtain t α 1 ⊗ [ȳ j ] = 0 ∈ H 1 (L g ; H/L) Hg for j = 2, . . . , g. Since the mapping class (t β 1 t α 1 ) 3 preserves each α i setwise for i = 1, 2, . . . , g, it is an element in H g . Since it satisfies (t β 1 t α 1 ) 3 (ȳ 1 ) = −ȳ 1 , we have t α 1 ⊗ [2ȳ 1 ] = 0 ∈ H 1 (L g ; H/L) Hg . As a conclusion, we obtain H 1 (L g ; H/L) Hg = 0 or Z/2Z. (2) Since r 1,j commutes with t α 1 for j = 2, 3, . . . , g, we obtain t α 1 ⊗ [x 1 + x 2 + · · · + 2x j ] = 0 ∈ H 1 (L g ; L) Hg . For j = 1, 2, . . . , g, the mapping class s j = t β j t 2 α j t β j ∈ H g also preserves α 1 setwise, and satisfies s j (x j ) = −x j . Thus, we also have t α 1 ⊗ [2x j ] = 0 ∈ H 1 (L g ; L) Hg . Consequently, we obtain t α 1 ⊗ [x 1 ] = t α 1 ⊗ [x 2 ] = · · · = t α 1 ⊗ [x g−1 ] = t α 1 ⊗ [2x g ] = 0, and it implies H 1 (L g ; L) Hg ∼ = 0 or Z/2Z. Now suppose g ≥ 3. Then, there exists a mapping class t g−1 (see Lemma 4.3) which preserves α 1 setwise and satisfies t g−1 (x g ) = x g−1 . Thus, we also obtain t α 1 ⊗ [x g − x g−1 ] = 0 when g ≥ 3, and it implies H 1 (L g ; L) Hg = 0. Applying Lemma 3.3 to the cases M = H/L and L, Lemma 3.4 implies: Lemma 3.5. When g ≥ 3, the exact sequence 1 → L g → H g → Out F g → 1 induces an isomorphism H 1 (H g ; L) ∼ = H 1 (Out F g ; H 1 (F g )). When g ≥ 2, it also induces an exact sequence Z/2Z −−− → H 1 (H g ; H/L) −−− → H 1 (Out F g ; H 1 (F g )) − −− → 0. The twisted first homology groups of Out F n with coefficients in H 1 (F n ) and H 1 (F n ) were computed by Satoh [17, Theorem 1 (2)] as follows. 
Theorem 3.6 (Satoh [17, Theorem 1 (2)]).

H 1 (Out F n ; H 1 (F n )) ∼ = Z/(n − 1)Z when n ≥ 4, (Z/2Z) 2 when n = 3, and Z/2Z when n = 2;
H 1 (Out F n ; H 1 (F n )) ∼ = 0 when n ≥ 4, and Z/2Z when n = 2, 3.

By Lemma 3.5 and Theorem 3.6, we obtain:

Lemma 3.7. H 1 (H g ; L) ∼ = Z/(g − 1)Z if g ≥ 4, and (Z/2Z) 2 if g = 3. Moreover, H 1 (H g ; H/L) ∼ = Z/2Z if g ≥ 4.

Remark 3.8. Theorem 3.6 and Lemma 3.7 show Ker(H 1 (H g ; H/L) → H 1 (Out F g ; H 1 (F g ))) ∼ = Z/2Z when g ≥ 4. Thus Lemma 3.4 (1) implies H 1 (L g ; H/L) Hg ∼ = Z/2Z when g ≥ 4.

Remark 3.9. By Lemma 3.5 and Theorem 3.6, we see that the order of H 1 (H g ; H/L) for g = 2, 3 is at most 4. In Propositions 5.9 and 5.18, we will show H 1 (H 2 ; L) ∼ = Z/2Z and H 1 (H g ; H/L) ∼ = (Z/2Z) 2 for g = 2, 3. By Lemma 3.4 (1) and Theorem 3.6, it also follows that H 1 (L g ; H/L) Hg ∼ = Z/2Z for g = 2, 3.

By Lemma 2.2, H 0 (H g ; L) = L Hg = 0 for g ≥ 2. Thus, the short exact sequence of H g -modules 0 → L → H → H/L → 0 induces an exact sequence

(3.2) H 1 (H g ; L) → H 1 (H g ; H) → H 1 (H g ; H/L) → 0.

Remark 3.10. In the proof of Theorem 1.1 above, we also see that the sequence 0 → H 1 (H g ; L) → H 1 (H g ; H) → H 1 (H g ; H/L) → 0 is exact when g ≥ 4.

4. Wajnryb's presentation of the handlebody mapping class group

In this section, we review Wajnryb's presentation of the handlebody mapping class group H g and compute the action of H g on the first homology H 1 (Σ g ). This prepares for the calculation of the twisted first homology H 1 (H g ; H) when g = 2, 3 in Section 5.

4.1. A presentation of the handlebody mapping class group. Let g ≥ 2. We identify the surface in Figure 1 with that in Figure 3. Let ǫ i be a simple closed curve in Figure 3 for i = 1, . . . , g − 1. By cutting the surface Σ g along the simple closed curves α 1 , . . .
, α g , we obtain a (2g)-holed sphere with boundary components {∂ −i , ∂ i } g i=1 as in Figure 4, where α i and β i correspond to the boundary components ∂ −i ∐ ∂ i and the path from ∂ −i to ∂ i , respectively. For integers i, j satisfying 1 ≤ i < j ≤ g, we denote by δ −j,−i and δ i,j the simple closed curves in Figure 4. For integers i, j satisfying 1 ≤ i ≤ g and 1 ≤ j ≤ g, we also denote by δ −i,j the simple closed curve in Figure 4. Let us denote

s 1 = b 1 a 2 1 b 1 , t i = e i a i a i+1 e i , for i = 1, . . . , g − 1.

Since t i permutes the simple closed curves α i and α i+1 , and fixes other α j , we also have t i ∈ H g . In the following, we denote ϕ * ψ = ϕψϕ −1 for ϕ, ψ ∈ H g . For i, j ∈ I 0 satisfying i < j, we denote

d i,j = (t i−1 t i−2 · · · t 1 t j−1 t j−2 · · · t 2 ) * d 1,2 if i > 0,
d i,j = (t −1 −i−1 t −1 −i−2 · · · t −1 1 s −1 1 t j−1 t j−2 · · · t 2 ) * d 1,2 if i < 0 and i + j > 0,
d i,j = (t −1 −i−1 t −1 −i−2 · · · t −1 1 s −1 1 t j t j−1 · · · t 2 ) * d 1,2 if j > 0 and i + j < 0,
d i,j = (t −1 −j−1 t −1 −j−2 · · · t −1 1 t −1 −i−1 t −1 −i−2 · · · t −1 2 s −1 1 t −1 1 s −1 1 ) * d 1,2 if j < 0,
d i,j = (t −1 j−1 d j−1,j t −1 j−2 d j−2,j−1 · · · t −1 1 d 1,2 ) * (s 2 1 a 4 1 ) if i + j = 0.

Here, d i,j is actually the Dehn twist along δ i,j in Figure 4, as explained in [19, p. 211]. However, to give a presentation of H g with a small generating set, we treat d i,j as the products above. We also denote

d I = (d i 1 ,i 2 d i 1 ,i 3 · · · d i 1 ,i n d i 2 ,i 3 · · · d i 2 ,i n d i 3 ,i 4 · · · d i n−1 ,i n )(a i 1 · · · a i n ) 2−n ,

where I = {i 1 , . . . , i n } ⊂ I 0 and i 1 < · · · < i n , and

c i,j = d I , where I = {k ∈ I 0 | i ≤ k ≤ j} for i ≤ j.

Here, d I and c i,j are the Dehn twists along simple closed curves which enclose {∂ i 1 , . . . , ∂ i n } and {∂ i , . . . , ∂ j }, respectively. See [19, p. 211] for details. Let us denote

Ĩ = {(i, j) ∈ I 2 0 | i = 1, 1 < j} ∪ {(i, j) ∈ I 2 0 | i < 0, −i < j ≤ g + i},

and r i,j = b j a j c i,j b j , for (i, j) ∈ Ĩ, k j = a j a j+1 t j d −1 j,j+1 for j = 1, . . .
, g − 1, s j = (k j−1 k j−2 · · · k 1 ) * s 1 for j = 2, . . . , g, z = a 1 a 2 · · · a g s 1 t 1 t 2 · · · t g−1 s 1 t 1 · · · t g−2 s 1 · · · s 1 t 1 s 1 d I , where I = {1, . . . , g}, z j = k j−1 k j−2 · · · k g+1−j z for j > g 2 . Here, r i,j also lies in H g as is explained in [19, p. 211]. For ϕ, ψ ∈ H g , let us denote their commutator by [ϕ, ψ] = ϕψϕ −1 ψ −1 . (P1) [a i , a j ] = 1, [a i , d j,k ] = 1, for all i, j, k ∈ I 0 , (P2) Let i, j, r, s ∈ I 0 . (a) d −1 r,s * d i,j = d i,j if r < s < i < j or i < r < s < j, (b) d −1 r,i * d i,j = d r,j * d i,j if r < i < j, (c) d −1 i,s * d i,j = (d i,j d s,j ) * d i,j if i < s < j, (d) d −1 r,s * d i,j = [d r,j , d s,j ] * d i,j if r < i < s < j, (P3) d I 0 = 1, (P4) d I k = a |k| where I k = I 0 − {k} for k ∈ I 0 , (P5) t i t i+1 t i = t i+1 t i t i+1 for i = 1, . . . , g − 2, and [t i , t j ] = 1 if 1 ≤ i < j − 1 < g − 1, (P6) t 2 i = d i,i+1 d −i−1,−i a −2 i a −2 i+1 for i = 1, . . . , g − 1, (P7) [s 1 , a i ] = 1 for i = 1, . . . , g, t i * a i = a i+1 for i = 1, . . . , g − 1, [a i , t j ] = 1 for i, j ∈ I 0 satisfying j = i, i − 1, and [t i , s 1 ] = 1 for i = 2, . . . , g − 1, (P8) [s 1 , d 2,3 ] = 1, [s 1 , d −2,2 ] = 1, s 1 t 1 s 1 t 1 = t 1 s 1 t 1 s 1 , and [t i , d 1,2 ] = 1 for i = 1, 3, . . . , g −1, (P9) r 2 i,j = s j c i,j s j c −1 i,j for (i, j) ∈Ĩ, (P10) Let (i, j) ∈Ĩ. (a) r i,j * a j = c i,j and [r i,j , a k ] = 1 if k = j, (b) [r i,j , t k ] = 1 if k = |i|, j or k = i = 1 < j − 1, (c) [r i,j , s k ] = 1 if k < |i|, j < k or k = −i, (d) [r i,j , d k,m ] = 1 if k, m ∈ {i, . . . , j − 1} or k, m / ∈ {−j, i, i + 1, . . . 
, j}, (e) [r i,j , z j ] = 1 if (i, j) = (1, g) or j = g + i, (f) r i,j * d i,j = d J where J = {k ∈ I 0 ; i < k ≤ j}, (g) r 1,j * d −j,1−j = (t j−2 t j−3 · · · t 1 ) * c −1,j , (h) r i,j * d −j,1−j = (t j−2 t j−3 · · · t 1−i ) * c i−1,j if i < 0 and j + i > 1, (i) r −1 i,j * d −j−1,−j = s −1 j+1 * c i,j+1 if j < g,

(P11) r i,j * t j−1 = t −1 j−1 * r i,j if (i, j) ∈ Ĩ and −i + 1 ≠ j,

(P12) (a) Let h 2 = k −1 j−1 t −1 j−2 t −1 j−3 · · · t −1 1 k j−1 k j−2 · · · k 2 . Then r 1,j = s j c 1,j s j c −1 1,j k j−1 a j c 1,j−2 t j−1 c −1 1,j−1 t −1 j−1 r −1 1,j−1 s j−1 h 2 r −1 1,2 h −1 2 k −1 j−1 for 3 ≤ j ≤ g. (b) Let h 3 = s 1 k j−1 k j−2 · · · k 2 . Then r −1,j = h 3 r −1 1,2 h −1 3 s j r −1 1,j c −1 −1,j−1 c 1,j−1 a 1 s j c −1,j s j c −1 −1,j for 2 ≤ j ≤ g − 1. (c) Let h 3 = s −i t −1 −1−i t −1 −2−i · · · t −1 1 k j−1 k j−2 · · · k 3 k 2 . Then r i,j = h 3 r −1 1,2 h −1 3 s j r −1 i+1,j c −1 i,j−1 c i+1,j−1 a −i s j c i,j s j c −1 i,j for i < −1 and (i, j) ∈ Ĩ.

Remark 4.2. Note that there are some mistakes in Wajnryb's presentation in [19]. The mapping class z j is defined as the conjugation of z by k j−1 k j−2 · · · k g+1−j in [19]. However, as mentioned in [15], it should be defined as the product k j−1 k j−2 · · · k g+1−j z. In (P11), the condition −i + 1 ≠ j is needed. The relations of type (P11) are obtained in the situation where the pair of simple closed curves ∂ k and ∂ −k are separated by γ i,j for k = j, j − 1 (see CASE 1 in [19, p. 223]), and the equation r −(j−1),j * t j−1 = t −1 j−1 * r −(j−1),j in fact does not hold for any 2 ≤ j ≤ g. We also erase the relation s 2 1 = d −1,1 a −4 1 in (P6) written in [19]. This is because we already defined d −1,1 as s 2 1 a 4 1 .

4.2. Action on the first homology H 1 (Σ g ). Here, we compute the action of the handlebody mapping class group H g on the first homology H 1 (Σ g ) of the boundary surface. Recall that x 1 , . . . , x g , y 1 , . . . , y g are the homology classes represented by the simple closed curves α 1 , . . .
, α g , β 1 , . . . , β g in Figures 1 and 3. Lemma 4.3. For 1 ≤ i ≤ g, a i (x l ) = x l , a i (y l ) = x i + y i if l = i, y l otherwise, and s i (x l ) = −x i if l = i, x l otherwise, s i (y l ) = 2x i − y i if l = i, y l otherwise. For each i, j ∈ I 0 such that i < j, d i,j (x l ) = x l , d i,j (y l ) = ε(i)x |i| + ε(j)x |j| + y l if l = |i|, |j|, y l otherwise, where ε(i) = 1 if i > 0, and ε(i) = −1 if i < 0. For 1 ≤ i ≤ g − 1, t i (x l ) =      x i+1 if l = i, x i if l = i + 1, x l otherwise, t i (y l ) =      x i + y i+1 if l = i, x i+1 + y i if l = i + 1, y l otherwise, and k i (x l ) =      x i+1 if l = i, x i if l = i + 1, x l otherwise, k i (y l ) =      y i+1 if l = i, y i if l = i + 1, y l otherwise. For 1 < j ≤ g, r 1,j (x l ) = −x 1 − · · · − x j if l = j, x l otherwise, r 1,j (y l ) =      x 1 + · · · + x j + y l − y j if 1 ≤ l ≤ j − 1, x 1 + · · · + x j−1 + 2x j − y j if l = j, y l otherwise, and for (i, j) ∈Ĩ such that i < 0, r i,j (x l ) = −x −i+1 − · · · − x j if l = j, x l otherwise, r i,j (y l ) =      x −i+1 + · · · + x j + y l − y j if − i + 1 ≤ l ≤ j − 1, x −i+1 + · · · + x j−1 + 2x j − y j if l = j, y l otherwise. Proof. The equations for the mapping classes a i and d i,j are obvious because a i and d i,j are Dehn twists along α i and δ i,j respectively. Similarly we have b i (x l ) = x i − y i if l = i, y l otherwise, b i (y l ) = y l , for 1 ≤ i ≤ g and e i (x l ) =      x i − y i + y i+1 if l = i, x i+1 + y i − y i+1 if l = i + 1, x l otherwise, e i (y l ) = y l , for 1 ≤ i ≤ g − 1. Since t i = e i a i a i+1 e i , t i (x i ) = (e i a i a i+1 )(x i − y i + y i+1 ) = e i (x i+1 − y i + y i+1 ) = x i+1 , t i (x i+1 ) = (e i a i a i+1 )(x i+1 + y i − y i+1 ) = e i (x i + y i − y i+1 ) = x i , t i (y i ) = (e i a i a i+1 )(y i ) = e i (x i + y i ) = x i + y i+1 , t i (y i+1 ) = (e i a i a i+1 )(y i+1 ) = e i (x i+1 + y i+1 ) = x i+1 + y i and t i acts trivially on other x l 's and y l 's. 
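The twist calculations in the proof above can be cross-checked mechanically with the Picard-Lefschetz rule T c (v) = v + ⟨v, c⟩c for the action of a Dehn twist on homology. The following sketch is not from the paper: the sign convention ⟨x i , y i ⟩ = −1, ⟨y i , x i ⟩ = +1 and the curve classes [ǫ 1 ] = y 1 − y 2 and x 1 + x 2 (for the curve of c 1,2 ) are assumptions chosen so as to reproduce the formulas a i (y i ) = x i + y i and b i (x i ) = x i − y i from Lemma 4.3. It verifies the stated actions of t 1 = e 1 a 1 a 2 e 1 , s 1 = b 1 a 2 1 b 1 , and r 1,2 = b 2 a 2 c 1,2 b 2 for g = 2.

```python
# Cross-check of Lemma 4.3 for g = 2 via the Picard-Lefschetz rule
# T_c(v) = v + <v, c> c for the Dehn twist along a curve of class c.
# Assumed conventions (chosen to reproduce a_i(y_i) = x_i + y_i and
# b_i(x_i) = x_i - y_i): basis order (x_1, x_2, y_1, y_2) with
# <x_i, y_i> = -1 and <y_i, x_i> = +1; the classes [eps_1] = y_1 - y_2
# and x_1 + x_2 (curve of c_{1,2}) are read off from the text.
g, n = 2, 4

def basis(i):
    return [1 if j == i else 0 for j in range(n)]

x = [basis(0), basis(1)]   # classes of alpha_1, alpha_2
y = [basis(2), basis(3)]   # classes of beta_1, beta_2

def pairing(u, v):
    return sum(u[g + i] * v[i] - u[i] * v[g + i] for i in range(g))

def add(u, v):
    return [s + t for s, t in zip(u, v)]

def neg(u):
    return [-s for s in u]

def twist(c):
    # the induced map on H_1 of the Dehn twist along a curve of class c
    def act(v):
        return [v[k] + pairing(v, c) * c[k] for k in range(n)]
    return act

def compose(*fs):
    # rightmost factor acts first, as in the proof of Lemma 4.3
    def act(v):
        for f in reversed(fs):
            v = f(v)
        return v
    return act

a1, a2 = twist(x[0]), twist(x[1])
b1, b2 = twist(y[0]), twist(y[1])
e1 = twist(add(y[0], neg(y[1])))
c12 = twist(add(x[0], x[1]))

t1 = compose(e1, a1, a2, e1)        # t_1 = e_1 a_1 a_2 e_1
s1 = compose(b1, a1, a1, b1)        # s_1 = b_1 a_1^2 b_1
r12 = compose(b2, a2, c12, b2)      # r_{1,2} = b_2 a_2 c_{1,2} b_2

assert t1(x[0]) == x[1] and t1(x[1]) == x[0]
assert t1(y[0]) == add(x[0], y[1]) and t1(y[1]) == add(x[1], y[0])
assert s1(x[0]) == neg(x[0])                  # s_1(x_1) = -x_1
assert s1(y[0]) == [2, 0, -1, 0]              # s_1(y_1) = 2x_1 - y_1
assert r12(x[1]) == neg(add(x[0], x[1]))      # r_{1,2}(x_2) = -x_1 - x_2
assert r12(y[0]) == [1, 1, 1, -1]             # x_1 + x_2 + y_1 - y_2
assert r12(y[1]) == [1, 2, 0, -1]             # x_1 + 2x_2 - y_2
```

All of the assertions match the tables of Lemma 4.3, which supports the assumed sign convention.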
Since k i = a i a i+1 t i d −1 i,i+1 , k i (x i ) = (a i a i+1 t i )(x i ) = (a i a i+1 )(x i+1 ) = x i+1 , k i (x i+1 ) = (a i a i+1 t i )(x i+1 ) = (a i a i+1 )(x i ) = x i , k i (y i ) = (a i a i+1 t i )(−x i − x i+1 + y i ) = (a i a i+1 )(−x i+1 + y i+1 ) = y i+1 , k i (y i+1 ) = (a i a i+1 t i )(−x i − x i+1 + y i+1 ) = (a i a i+1 )(−x i + y i ) = y i , and k i acts trivially on other x l 's and y l 's. Since s 1 = b 1 a 2 1 b 1 , s 1 (x 1 ) = (b 1 a 2 1 )(x 1 − y 1 ) = b 1 (−x 1 − y 1 ) = −x 1 , s 1 (y 1 ) = (b 1 a 2 1 )(y 1 ) = b 1 (2x 1 + y 1 ) = 2x 1 − y 1 , and s 1 acts trivially on other x l 's and y l 's. The elements s i 's are inductively defined by the recurrence relation s i+1 = k i s i k −1 i . The element k i replaces x i and x i+1 with each other and y i and y i+1 also. Hence the equation for s i follows by induction. Lastly, we verify the equations for r i,j . In the case 0 < i < j, c i,j (x l ) = x l , c i,j (y l ) = x i + · · · + x j + y l if i ≤ l ≤ j, y l otherwise. Since r i,j = b j a j c i,j b j , r 1,j (x j ) = (b j a j c 1,j )(x j − y j ) = (b j a j )(−x 1 − · · · − x j−1 − y j ) = −x 1 − · · · − x j , and for 1 ≤ l ≤ j r 1,j (y l ) = (b j a j c 1,j )(y l ) = (b j a j )(x 1 + · · · + x j + y l ) = x 1 + · · · + x j + y l − y j if 1 ≤ l ≤ j − 1, x 1 + · · · + x j−1 + 2x j − y j if l = j. The element r 1,j acts trivially on other x l 's and y l 's. In the case (i, j) ∈Ĩ and i < 0, c i,j (x l ) = x l , c i,j (y l ) = x −i+1 + · · · + x j + y l if − i + 1 ≤ l ≤ j, y l otherwise. . Hence r i,j (x j ) = (b j a j c i,j )(x j − y j ) = (b j a j )(−x −i+1 − · · · − x j−1 − y j ) = −x −i+1 − · · · − x j , and for −i + 1 ≤ l ≤ j r i,j (y l ) = (b j a j c i,j )(y l ) = (b j a j )(x −i+1 + · · · + x j + y l ) = x −i+1 + · · · + x j + y l − y j if − i + 1 ≤ l ≤ j − 1, x −i+1 + · · · + x j−1 + 2x j − y j if l = j. The element r i,j acts trivially on other x l 's and y l 's. 5. 
Proof of Theorem 1.1 for g = 2, 3

In this section, we prove Theorem 1.1 for g = 2, 3. We denote by A the ring Z or Z/nZ for an integer n ≥ 2, and H A = H 1 (Σ g ; A). As written in [19, Theorem 19], the handlebody mapping class group H g is generated by a 1 , s 1 , r 1,2 , t 1 , and u = t 1 t 2 · · · t g−1 . Therefore, a crossed homomorphism d : H g → H A is determined by its values on these generators.

Lemma 5.1.

H 1 (H 2 ; H A ) ∼ = {d ∈ Z 1 (H 2 ; H A ); d(r 1,2 ) 1 = d(s 1 ) 3 − d(r 1,2 ) 4 = d(u) 2 = d(u) 4 = 0},
H 1 (H 3 ; H A ) ∼ = {d ∈ Z 1 (H 3 ; H A ); d(r 1,2 ) 1 = d(s 1 ) 4 − d(r 1,2 ) 5 = d(u) 2 = d(u) 3 = d(u) 5 = d(u) 6 = 0}.

Proof. Let f 2 : Z 1 (H 2 ; H A ) → A 4 and f 3 : Z 1 (H 3 ; H A ) → A 6 be the homomorphisms defined by

f 2 (d) = (d(r 1,2 ) 1 , d(s 1 ) 3 − d(r 1,2 ) 4 , d(u) 2 , d(u) 4 ), and
f 3 (d) = (d(r 1,2 ) 1 , d(s 1 ) 4 − d(r 1,2 ) 5 , d(u) 2 , d(u) 3 , d(u) 5 , d(u) 6 ),

respectively. Then, the composition maps f g • δ : H A → A 2g are written as

f 2 • δ(v) = (−v 2 + v 3 + v 4 , −v 3 + 2v 4 , v 1 − v 2 + v 4 , v 3 − v 4 ),
f 3 • δ(v) = (−v 2 + v 4 + v 5 , −v 4 + 2v 5 , v 1 − v 2 + v 5 + v 6 , v 2 − v 3 + v 6 , v 4 − v 5 , v 5 − v 6 ),

for v ∈ H A . Since these maps are isomorphisms, we have H 1 (H g ; H A ) = Z 1 (H g ; H A )/B 1 (H g ; H A ) ∼ = Ker f g for g = 2, 3.

Lemma 5.2. Suppose d ∈ Z 1 (H g ; H A ) satisfies d(u) 2 = · · · = d(u) g = d(u) g+2 = · · · = d(u) 2g = 0 as in Lemma 5.1. Then, (1) d(a i ) = u i−1 d(a 1 ), and (2) d(t i ) = u i−1 d(t 1 ).

Proof. Note that a i = u i−1 a 1 u −(i−1) ; this can be checked using the relation (P7). Hence we have d(a i+1 ) = d(u) + ud(a i ) − ua i u −1 d(u) = d(u) + ud(a i ) − a i+1 d(u). Since (a i+1 v) 1 = v 1 and (a i+1 v) g+1 = v g+1 for any v ∈ H A , we have a i+1 d(u) = d(u), and thus d(a i+1 ) = ud(a i ). By induction on i, we have the equation (1). The equation (2) can be similarly verified.

Lemma 5.3. Suppose d ∈ Z 1 (H g ; H A ) satisfies d(u) 2 = · · · = d(u) g = d(u) g+2 = · · · = d(u) 2g = 0 as in Lemma 5.1. Then

(1) d(a 1 ) g+1 = · · · = d(a 1 ) 2g = 0.
(2) 2d(a 1 ) 2 = · · · = 2d(a 1 ) g = 0. (3) d(s 1 ) g+2 = · · · = d(s 1 ) 2g = 0. (4) d(s 1 ) g+1 = −2d(a 1 ) 1 . (5) d(a 1 ) 2 + d(r 1,2 ) g+1 = 0. Proof. For any i and j, d(a i a j ) = d(a i ) + a i d(a j ) = d(a i ) + d(a j ) + d(a j ) g+i x i . Since a 1 and a i commute for any 1 ≤ i ≤ g by the relation (P1), it must be d(a 1 a i ) = d(a i a 1 ), and thus d(a 1 ) g+i = 0 for any 2 ≤ i ≤ g. Since a 1 and r 1,2 commute by the relation (P10)(a), it must be (1 − a 1 )d(r 1,2 ) = (1 − r 1,2 )d(a 1 ). Since ((1 − r 1,2 )v) g+2 = v g+1 + 2v g+2 for any v ∈ H A while ((1 − a 1 )v) g+2 = 0, we have d(a 1 ) g+1 = 0 and thus the equation (1). Since ((1 − r 1,2 )v) 1 = v 2 − v g+1 − v g+2 for any v ∈ H A while ((1 − a 1 )v) 1 = −v g+1 , we have the equation (5). Note that a i and s j commute for any 1 ≤ i, j ≤ g. It can be verified using the relations (P1) and (P7). Hence it must be (1 − s j )d(a i ) = (1 − a i )d(s j ). Suppose i = 1 and 2 ≤ j ≤ g. Then we have the equation (2) because ((1−s j )v) j = 2v j −2v g+j and ((1−a 1 )v) j = 0 for any v ∈ H A . Suppose 2 ≤ i ≤ g and j = 1. Then we have the equation (3) because ((1 − s 1 )v) i = 0 and ((1 − a i )v) i = −v g+i for any v ∈ H A . Suppose i = j = 1. Then we have the equation (4) because ((1 − s 1 )v) 1 = 2v 1 − 2v g+1 and ((1 − a 1 )v) 1 = −v g+1 for any v ∈ H A . 5.1. H 1 (H 2 ; H A ). Here, we assume g = 2 and prove that H 1 (H 2 ; H A ) ∼ = Hom((Z/2Z) 2 , A). Then, the universal coefficient theorem implies H 1 (H 2 ; H) ∼ = (Z/2Z) 2 and we complete the proof of Theorem 1.1 when g = 2. Let d ∈ Z 1 (H 2 ; H A ) be a crossed homomorphism satisfying the condition d(r 1,2 ) 1 = d(s 1 ) 3 − d(r 1,2 ) 4 = d(u) 2 = d(u) 4 = 0 as in Lemma 5.1. Note that in this case u = t 1 . By Lemma 5.3, we can set d(a 1 ) = w 1,1 x 1 + w 1,2 x 2 , d(s 1 ) = w 2,1 x 1 + w 2,2 x 2 + w 2,3 y 1 , d(t 1 ) = w 3,1 x 1 + w 3,3 y 1 , d(r 1,2 ) = w 4,2 x 2 + w 4,3 y 1 + w 4,4 y 2 . 
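The identification H 1 (H g ; H A ) ∼ = Ker f g in Lemma 5.1 rests on f 2 • δ being an isomorphism. This can be double-checked for g = 2 by assembling the matrices of u = t 1 , s 1 , and r 1,2 on H from Lemma 4.3. The sketch below is not code from the paper; the basis order x 1 , x 2 , y 1 , y 2 is an assumption. It recovers the four displayed components of f 2 • δ and checks that the matrix has determinant 1, hence is invertible over any coefficient ring A.

```python
# Check the displayed formula for f_2 ∘ δ (g = 2) and that it is an
# isomorphism.  The matrices of t_1, s_1, r_{1,2} on H_1(Σ_2) are
# entered image-by-basis-vector from Lemma 4.3 (basis x_1, x_2, y_1, y_2).
T1  = [[0, 1, 0, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 1, 1, 0]]      # t_1
S1  = [[-1, 0, 0, 0], [0, 1, 0, 0], [2, 0, -1, 0], [0, 0, 0, 1]]    # s_1
R12 = [[1, 0, 0, 0], [-1, -1, 0, 0], [1, 1, 1, -1], [1, 2, 0, -1]]  # r_{1,2}

def apply(m, v):
    # m[j] is the image of the j-th basis vector
    return [sum(v[j] * m[j][k] for j in range(4)) for k in range(4)]

def delta(m, v):
    # coboundary: δ(v)(h) = h·v − v, evaluated on the class h of matrix m
    return [s - t for s, t in zip(apply(m, v), v)]

def f2_delta(v):
    # the four components of f_2(δ(v)) used in the proof of Lemma 5.1
    return [delta(R12, v)[0],
            delta(S1, v)[2] - delta(R12, v)[3],
            delta(T1, v)[1],
            delta(T1, v)[3]]

E = [[1 if j == i else 0 for j in range(4)] for i in range(4)]
rows = [[f2_delta(e)[k] for e in E] for k in range(4)]

# f_2 ∘ δ(v) = (−v_2 + v_3 + v_4, −v_3 + 2v_4, v_1 − v_2 + v_4, v_3 − v_4)
assert rows == [[0, -1, 1, 1], [0, 0, -1, 2], [1, -1, 0, 1], [0, 0, 1, -1]]

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[0][i]
               * det([r[:i] + r[i + 1:] for r in m[1:]]) for i in range(len(m)))

assert det(rows) == 1  # unimodular, so f_2 ∘ δ is bijective over any A
```

Since the determinant is a unit in every ring A, the same matrix computation justifies the isomorphism claim uniformly in A.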
By the condition on d and Lemma 5.3, we also have (5.1) 2w 1,2 = 0, w 2,3 = w 4,4 = −2w 1,1 , and w 1,2 + w 4,3 = 0. Lemma 5.4. w 1,2 = w 4,3 = 0. Moreover, we have d(d 1,2 ) = w 1,1 (x 1 + x 2 ). Proof. Since w 1,2 + w 4,3 = 0, it suffices to prove that w 4,3 = 0. Note that by Lemma 5.2, we have d(a 2 ) = t 1 d(a 1 ) = w 1,2 x 1 + w 1,1 x 2 . Since d 1,2 = r 1,2 a 2 r −1 1,2 by the relation (P10)(a), d(d 1,2 ) = d(r 1,2 ) + r 1,2 d(a 2 ) − r 1,2 a 2 r −1 1,2 d(r 1,2 ) = (1 − d 1,2 )d(r 1,2 ) + r 1,2 d(a 2 ) = (−w 1,1 − w 4,3 − w 4,4 )(x 1 + x 2 ) + w 1,2 x 1 . Since a 2 = r 1,2 d 1,2 r −1 1,2 by the relation (P10)(f), d(a 2 ) = (1 − a 2 )d(r 1,2 ) + r 1,2 d(d 1,2 ) = w 1,2 x 1 + (w 1,1 + w 4,3 )x 2 . Thus we obtain w 4,3 = 0. By the equation w 4,4 = −2w 1,1 in (5.1), we have d(d 1,2 ) = w 1,1 (x 1 + x 2 ). Lemma 5.5. w 2,3 = w 3,1 = w 3,3 = w 4,4 = 0. In particular, we have d(t 1 ) = 0 and 2w 1,1 = 0. Proof. Recall that d −2,−1 = (s 1 t 1 s 1 ) −1 d 1,2 (s 1 t 1 s 1 ). In the case g = 2, the Dehn twist d −2,−1 coincides with d 1,2 . Hence the elements d 1,2 and s 1 t 1 s 1 commute and it must be (1 − d 1,2 )d(s 1 t 1 s 1 ) = (1 − s 1 t 1 s 1 )d(d 1,2 ). Since ((1 − d 1,2 )d(s 1 t 1 s 1 )) 1 = ((1 − d 1,2 )d(s 1 t 1 s 1 )) 2 = −2w 2,3 + w 3,3 , while ((1 − s 1 t 1 s 1 )d(d 1,2 )) 1 = ((1 − s 1 t 1 s 1 )d(d 1,2 )) 2 = 2w 1,1 , we have 2w 1,1 = −2w 2,3 + w 3,3 . The equation w 2,3 = −2w 1,1 in (5.1) shows w 2,3 = w 3,3 . Since w 2,3 = w 3,3 = w 4,4 , it remains to prove that w 3,1 = w 3,3 = 0. Note that each of d(a 1 ), d(a 2 ), and d(d 1,2 ) are in L A = Ker(H 1 (Σ 2 ; A) → H 1 (H 2 ; A)). By the relation (P6), t 2 1 = d 2 1,2 a −2 1 a −2 2 . Since each of a 1 , a 2 and d 1,2 acts on L trivially, we have d(d 2 1,2 a −2 1 a −2 2 ) = 2(d(d 1,2 ) − d(a 1 ) − d(a 2 )) = 0. On the other hand, we have d(t 2 1 ) = d(t 1 ) + t 1 d(t 1 ) = w 3,1 (x 1 + x 2 ) + w 3,3 (x 3 + x 4 ) + w 3,3 x 1 . These equations show w 3,1 = w 3,3 = 0. Lemma 5.6. w 4,2 = 0. 
In particular, we have d(r 1,2 ) = 0.

Proof. The relation t 1 r 1,2 t 1 = r 1,2 t 1 r 1,2 in (P11) shows (1 − t 1 + r 1,2 t 1 )d(r 1,2 ) = (1 − r 1,2 + t 1 r 1,2 )d(t 1 ). By Lemma 5.5, the right hand side is equal to zero. Since (1 − t 1 + r 1,2 t 1 )d(r 1,2 ) = w 4,2 x 2 , we have w 4,2 = 0.

Lemma 5.7. 2w 2,2 = 0.

Proof. Recall that k 1 = a 1 a 2 t 1 d −1 1,2 and s 2 = k 1 s 1 k −1 1 . Hence we have d(k 1 ) = d(a 1 ) + d(a 2 ) − t 1 d(d 1,2 ) = 0, and d(s 2 ) = d(k 1 ) + k 1 d(s 1 ) − s 2 d(k 1 ) = k 1 d(s 1 ). Therefore, we have d(s 2 d 1,2 s 2 d −1 1,2 ) = (1 + s 2 d 1,2 )d(s 2 ) + (s 2 − 1)d(d 1,2 ) = 2w 2,2 x 1 , and thus 2w 2,2 = 0.

Lemma 5.8. w 2,1 = w 2,2 .

Proof. Recall that z = a 1 a 2 s 1 t 1 s 1 d 1,2 and z 2 = k 1 z. Hence we have d(z) = d(a 1 ) + d(a 2 ) + (1 + s 1 t 1 )d(s 1 ) + s 1 t 1 s 1 d(d 1,2 ) = (w 2,1 + w 2,2 )(x 1 + x 2 ), and d(z 2 ) = d(k 1 ) + k 1 d(z) = d(z).

As well as Lemma 5.3, we can verify that H 1 (H 2 ; H A /L A ) ∼ = {d ′ ∈ Z 1 (H 2 ; H A /L A ); d ′ (s 1 ) 1 − d ′ (r 1,2 ) 2 = d ′ (u) 2 = 0}. Similar calculations to those in Section 5.1 show that a crossed homomorphism d ′ : H 2 → H A /L A such that d ′ (s 1 ) 1 − d ′ (r 1,2 ) 2 = d ′ (u) 2 = 0 is compatible with the relations (P1)-(P12) if and only if d ′ (a 1 ) = d ′ (t 1 ) = 0, d ′ (s 1 ) = w ′ 2,3 (y 1 + y 2 ), and d ′ (r 1,2 ) = w ′ 2,3 y 2 , where w ′ 2,3 ∈ A satisfies 2w ′ 2,3 = 0. Thus, we obtain H 1 (H 2 ; H A /L A ) ∼ = {w ′ 2,3 ∈ A; 2w ′ 2,3 = 0}.

Let d ∈ Z 1 (H 3 ; H A ) be a crossed homomorphism satisfying the condition d(r 1,2 ) 1 = d(s 1 ) 4 − d(r 1,2 ) 5 = d(u) 2 = d(u) 3 = d(u) 5 = d(u) 6 = 0 as in Lemma 5.1. By Lemma 5.3, we can set

d(a 1 ) = w 1,1 x 1 + w 1,2 x 2 + w 1,3 x 3 ,
d(s 1 ) = w 2,1 x 1 + w 2,2 x 2 + w 2,3 x 3 + w 2,4 y 1 ,
d(t 1 ) = w 3,1 x 1 + w 3,2 x 2 + w 3,3 x 3 + w 3,4 y 1 + w 3,5 y 2 + w 3,6 y 3 ,
d(r 1,2 ) = w 4,2 x 2 + w 4,3 x 3 + w 4,4 y 1 + w 4,5 y 2 + w 4,6 y 3 .

By the condition on d and Lemma 5.3, we also have the analogue of (5.1).

Proof.
Since a 1 and r 1,3 commute by the relation (P10)(a), it must be (1 − a 1 )d(r 1,3 ) = (1 − r 1,3 )d(a 1 ). Since ((1 − r 1,3 )d(a 1 )) 2 = w 1,3 , while (1 − a 1 )d(r 1,3 ) 2 = 0, we have the equation (2). Since d(t 2 ) = ud(t 1 ) = t 1 t 2 d(t 1 ) by Lemma 5.2, we have d(t 2 ) = (w 3,3 + w 3,4 + w 3,5 )x 1 + (w 3,1 + w 3,6 )x 2 + (w 3,2 + w 3,6 )x 3 + w 3,6 y 1 + w 3,4 y 2 + w 3,5 y 3 . Since a 1 and t 2 commute by the relation (P7), it must be (1 − a 1 )d(t 2 ) = (1 − t 2 )d(a 1 ). Since ((1 − t 2 )d(a 1 )) 2 = w 1,2 while (1 − a 1 )d(t 2 ) 2 = 0, we have the equation (1). Furthermore, since (1 − a 1 )d(t 2 ) 1 = −w 3,6 while ((1 − t 2 )d(a 1 )) 1 = 0, we have the equation (5). Now we have d(a 1 ) = w 1,1 x 1 . Note that by Lemma 5.2, d(a i ) = u i−1 d(a 1 ) = w 1,1 x i for any we obtain 2w 2,2 = 2w 2,3 = 0. Lemma 5.13. 4w 1,1 = 2w 2,4 = 2w 3,1 = 2w 4,5 = 0. Proof. As in (P8), t 1 and s 1 t 1 s 1 commute. Thus it must be (1−t 1 )d(s 1 t 1 s 1 ) = (1−s 1 t 1 s 1 )d(t 1 ). Since d(s 1 t 1 s 1 ) = d(s 1 ) + s 1 d(t 1 ) + s 1 t 1 d(s 1 ) = (w 2,1 + w 2,2 − w 2,4 − w 3,1 )x 1 + (w 2,1 + w 2,2 + w 3,2 )x 2 + w 3,3 x 3 + w 2,4 (y 1 + y 2 ), we have (1 − t 1 )d(s 1 t 1 s 1 ) = −(w 3,1 + w 3,2 )(x 1 − x 2 ) − 2w 2,4 x 1 , while (1 − s 1 t 1 s 1 )d(t 1 ) = (w 3,1 − w 3,2 )(x 1 − x 2 ). Hence we have 2w 2,4 = 2w 3,1 = 0. Since w 2,4 = w 4,5 = −2w 1,1 , we also have 4w 1,1 = 2w 2,4 = 2w 4,5 = 0. Lemma 5.14. Recall that k i = a i a i+1 t i d −1 i,i+1 for i = 1, 2. Hence we have d(k i ) = d(a i a i+1 t i d −1 i,i+1 ) = d(a i ) + a i d(a i+1 ) + a i a i+1 d(t i ) − k i d(d i,i+1 ) = d(t i ). Since s i+1 = k i s i k −1 i for i = 1, 2, d(s i+1 ) = d(k i ) + k i d(s i ) − s i+1 d(k i ) = k i d(s i ). Thus we obtain d(s 2 d 1,2 s 2 d −1 1,2 ) = (1 + s 2 d 1,2 )d(s 2 ) + s 2 (1 − d 1,2 s 2 d −1 1,2 )d(d 1,2 ) = −2w 1,1 x 2 + w 2,4 (x 1 + x 2 ) = w 2,4 x 1 . Comparing d(r 2 1,2 ) and d(s 2 d 1,2 s 2 d −1 1,2 ), we have w 4,2 = 2w 4,3 = 0. Lemma 5.15. 
w 3,1 + w 3,2 = w 4,5 , w 3,3 = w 4,3 and w 4,6 = 0. In particular, 2w 3,1 = 2w 3,2 = 2w 3,3 = 2w 4,3 = 0.

Proof. The relation t 1 r 1,2 t 1 = r 1,2 t 1 r 1,2 in (P11) shows (1 − t 1 + r 1,2 t 1 )d(r 1,2 ) = (1 − r 1,2 + t 1 r 1,2 )d(t 1 ). A straightforward calculation shows (1 − r 1,2 + t 1 r 1,2 )d(t 1 ) = (w 3,1 + w 3,2 )x 2 + w 3,3 x 3 , and (1 − t 1 + r 1,2 t 1 )d(r 1,2 ) = w 4,5 x 2 + w 4,3 x 3 + w 4,6 y 3 . Thus we have w 3,1 + w 3,2 = w 4,5 , w 3,3 = w 4,3 , and w 4,6 = 0. In particular, w 2,4 = w 3,1 = w 4,5 = 2w 1,1 .

Proof. It is sufficient to prove that w 3,2 = 0. Recall that z = (a 1 a 2 a 3 )s 1 t 1 t 2 s 1 t 1 s 1 c 1,3 and z 3 = k 2 k 1 z. Hence we have d(z) = w 2,4 (x 1 + x 2 + x 3 + y 1 + y 2 + y 3 ) + w 3,3 x 1 + w 3,1 x 2 + w 3,2 x 3 ,

Lemma 3.3. Let M be an H g -module on which L g acts trivially. Then, we have an exact sequence

The exact sequence (3.2) gives an upper bound on the order of H 1 (H g ; H). Comparing this with the lower bound obtained in Lemma 3.1, we complete the proof of Theorem 1.1 for g ≥ 4.

Figure 3. The surface Σ g

Figure 4. The (2g)-holed sphere

We also denote by δ −i,j the simple closed curve in Figure 4. For simplicity, we denote by a i , b i , e i , d 1,2 the Dehn twists along the curves α i , β i , ǫ i , δ 1,2 , respectively. Let us denote I 0 = {−g, −(g − 1), . . . , −2, −1, 1, 2, . . . , g − 1, g}.

Theorem 4.1 ([19, Theorem 18]). The handlebody mapping class group of genus g admits the following presentation: The set of generators consists of a 1 , . . . , a g , d 1,2 , s 1 , t 1 , . . . , t g−1 , and r i,j for (i, j) ∈ Ĩ. The set of defining relations is:

Recall that, for a group G and a left G-module M, a map d : G → M is called a crossed homomorphism if it satisfies d(hh ′ ) = d(h) + hd(h ′ ) for h, h ′ ∈ G. For a group G and a left G-module M, we consider group cohomology H * (G; M) as that of the standard chain complex induced by the bar resolution.
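One concrete family of crossed homomorphisms is h ↦ hm − m for a fixed m. The sketch below is not from the paper: it checks the identity d(hh ′ ) = d(h) + hd(h ′ ) numerically for such maps, using the g = 2 matrices of t 1 and s 1 read off from Lemma 4.3 (the basis order x 1 , x 2 , y 1 , y 2 and the sample coefficient vector are assumptions).

```python
# Numerical sanity check (g = 2) that d(m)(h) = h·m − m satisfies the
# crossed-homomorphism identity d(hh') = d(h) + h·d(h').
# Matrices of t_1, s_1 on H_1(Σ_2) from Lemma 4.3; basis x_1, x_2, y_1, y_2.
T1 = [[0, 1, 0, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 1, 1, 0]]    # t_1
S1 = [[-1, 0, 0, 0], [0, 1, 0, 0], [2, 0, -1, 0], [0, 0, 0, 1]]  # s_1

def apply(m, v):
    # m[j] is the image of the j-th basis vector
    return [sum(v[j] * m[j][k] for j in range(4)) for k in range(4)]

def mul(m1, m2):
    # matrix of the composite "m1 after m2" in the same encoding
    return [apply(m1, m2[j]) for j in range(4)]

def delta(m, v):
    # d(v)(h) = h·v − v, evaluated on the class h of matrix m
    return [s - t for s, t in zip(apply(m, v), v)]

m = [3, -1, 2, 5]  # an arbitrarily chosen coefficient vector
for h in (T1, S1):
    for h2 in (T1, S1):
        lhs = delta(mul(h, h2), m)                                  # d(hh')
        rhs = [s + t for s, t in zip(delta(h, m),
                                     apply(h, delta(h2, m)))]       # d(h) + h·d(h')
        assert lhs == rhs
```

The identity holds for every pair, as it must: (hh ′ )m − m = (hm − m) + h(h ′ m − m).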
Then, the space of 1-cocycles Z 1 (G; M) is identified with the space of crossed homomorphisms from G to M, and the space of 1-coboundaries B 1 (G; M) is identified with the image of the coboundary map δ : M → Z 1 (G; M) defined by δ(m)(h) = hm − m for m ∈ M. See [2, Section 2.3] for details. Crossed homomorphisms d : H g → H A are uniquely determined by the values d(a 1 ), d(s 1 ), d(r 1,2 ), d(t 1 ), and d(u). Moreover, a 5-tuple of elements in H A becomes the values of a 1 , s 1 , r 1,2 , t 1 , and u under some crossed homomorphism d on H g if and only if they are compatible with the relations (P1)-(P12) in Theorem 4.1. The basis {x 1 , . . . , x g , y 1 , . . . , y g } of H A induces an isomorphism H A ∼ = A 2g . For v ∈ H A , we denote its projection to the i-th coordinate of A 2g by v i ∈ A for i = 1, 2, . . . , 2g.

Hence (1 − r 1,2 )d(z 2 ) = (w 2,1 + w 2,2 )(x 1 + 2x 2 ). Since r 1,2 and z 2 commute by the relation (P10)(e), it must be (1 − r 1,2 )d(z 2 ) = (1 − z 2 )d(r 1,2 ) = 0. Thus we have w 2,1 = w 2,2 .

Summarizing Lemmas 5.4, 5.5, 5.6, 5.7 and 5.8, we have d(a 1 ) = w 1,1 x 1 , d(s 1 ) = w 2,1 (x 1 + x 2 ) and d(t 1 ) = d(r 1,2 ) = 0, where 2w 1,1 = 2w 2,1 = 0. It can be verified that such d is compatible with the relations (P1)-(P12). Now we have H 1 (H 2 ; H A ) ∼ = Ker f 2 ∼ = {(w 1,1 , w 2,1 ) ∈ A 2 ; 2w 1,1 = 2w 2,1 = 0}.

Proposition 5.9. H 1 (H 2 ; L) ∼ = Z/2Z, H 1 (H 2 ; H/L) ∼ = H 1 (H 2 ; H), and the homomorphism H 2 (H 2 ; H/L) → H 1 (H 2 ; L) induced by the exact sequence 0 → L → H → H/L → 0 is surjective.

Proof. By the universal coefficient theorem, H 1 (H 2 ; L) ∼ = Z/2Z. Next, we prove that H 1 (H 2 ; H/L) ∼ = H 1 (H 2 ; H) and the homomorphism H 2 (H 2 ; H/L) → H 1 (H 2 ; L) is surjective.
Since H 0 (H 2 ; L) = L H 2 = 0 as shown in Lemma 2.2, we have the exact sequence

H 2 (H 2 ; H/L) → H 1 (H 2 ; L) → H 1 (H 2 ; H) → H 1 (H 2 ; H/L) → 0.

Thus, it suffices to show that the homomorphism H 1 (H 2 ; L) → H 1 (H 2 ; H) is the zero map. As we saw in the proof of Theorem 1.1, Ker(f 2 : Z 1 (H 2 ; H A ) → A 4 ) is contained in the image of the homomorphism Z 1 (H 2 ; L A ) → Z 1 (H 2 ; H A ). Thus, the homomorphism H 1 (H 2 ; H A ) → H 1 (H 2 ; H A /L A ) induced by the projection H A → H A /L A is the zero map. The universal coefficient theorem implies H 1 (H 2 ; L) → H 1 (H 2 ; H) is the zero map.

5.2. H 1 (H 3 ; H A ). Here, we assume g = 3 and prove that H 1 (H 3 ; H A ) ∼ = Hom(Z/4Z ⊕ Z/2Z, A). Then, the universal coefficient theorem implies H 1 (H 3 ; H) ∼ = Z/4Z ⊕ Z/2Z, and we complete the proof of Theorem 1.1 when g = 3.

2w 1,2 = 2w 1,3 = 0, w 2,4 = w 4,5 = −2w 1,1 , and d(d 1,2 ) = w 1,1 (x 1 + x 2 ).

Proof. By the relation r 2 1,2 = s 2 d 1,2 s 2 d −1 1,2 in (P9), we have d(r 2 1,2 ) = d(s 2 d 1,2 s 2 d −1 1,2 ). First, we have d(r 2 1,2 ) = d(r 1,2 ) + r 1,2 d(r 1,2 ) = (−w 4,2 + w 4,5 )x 1 + 2w 4,3 x 3 + 2w 4,6 y 3 . Next, we compute d(s 2 d 1,2 s 2 d −1 1,2 ). Since d 1,3 = t 2 d 1,2 t −1 2 and d 2,3 = t 1 d 1,3 t −1 1 , we have d(d 1,3 ) = w 1,1 (x 1 + x 3 ) and d(d 2,3 ) = w 1,1 (x 2 + x 3 ). We have d(s 3 c 1,3 s 3 c −1 1,3 ) = (−w 2,1 + w 2,2 )x 1 + (w 2,3 + w 3,2 )x 2 − w 2,1 x 3 + w 4,5 y 3 and d(r 2 1,3 ) = d(r 1,3 ) + r 1,3 d(r 1,3 ) = (−w 2,1 + w 4,5 )x 1 + (w 2,1 + w 4,5 )x 2 by a straightforward calculation. The relation r 2 1,3 = s 3 c 1,3 s 3 c −1 1,3 in (P9) and the equations (5.4) show that w 2,1 = 0.

Lemma 5.17. w 3,2 = w 3,3 = w 4,3 = 0.
Thus we obtain w 3,2 It can be verified that such d is compatible with the relations (P1)-(P12). Now we haveH 1 (H 3 ; H A ) ∼ = Ker f 3 ∼ = {(w 1,1 , w 2,2 ) ∈ A 2 ; 4w 1,1 = 2w 2,2 = 0}.Proposition 5.18. H 1 (H 3 ; 13H/L) ∼ = (Z/2Z) 2 , and the image of the homomorphism H 2 (H 3 ; H/L) → H 1 (H 3 ; L) induced by the exact sequence 0 → L → H → H/L → 0 is isomorphic to Z/2Z.Proof. As we saw in the proof of Theorem 1.1, under the isomorphismH 1 (H 3 ; H A ) ∼ = {(w 1,1 , w 2,2 ) ∈ A 2 ; 4w 1,1 = 2w 2,2 = 0}, the submodule {(w 1,1 , w 2,2 ) ∈ A 2 ; 2w 1,1 = 2w 2,2 = 0} is in Im(H 1 (H 3 ; L A ) → H 1 (H 3 ; H A )). The universal coefficient theorem implies Im(H 1 (H 3 ; H) → H 1 (H 3 ; H/L)) isof order at least 4. On the other hand, H 1 (H 3 ; H/L) is at most order 4 as explained in Remark 3.9. Thus we obtain H 1 (H 3 ; H/L) ∼ = (Z/2Z) 2 . By Lemma 2.2, the coinvariant H 0 (H 3 ; L) is trivial. Thus the homomorphism H 1 (H 3 ; H) → H 1 (H 3 ; H/L) is surjective. Since H 1 (H 3 ; H) ∼ = Z/4Z ⊕ Z/2Z, we have H 1 (H 3 ; L) ∼ = (Z/2Z) 2 and H 1 (H 3 ; H/L) ∼ = (Z/2Z) 2 . The exact sequence H 2 (H 3 ; H/L) −−− → H 1 (H 3 ; L) − −− → H 1 (H 3 ; H) − −− → H 1 (H 3 ; H/L) − −− → 0 shows Im(H 2 (H 3 ; H/L) → H 1 (H 3 ; L)) ∼ = Z/2Z. 6. Proof of Theorem 1.2 In this section, we prove Theorem 1.2, and calculate H 1 (H * g ; L) and H 1 (H * g ; H/L). Lemma 6.1. (L ⊗ L * ) Hg ∼ = (L ⊗ H) Hg ∼ = Z. Proof. The action of H g on L factors through GL(g; Z), and L is isomorphic to V = Z g endowed with the natural left GL(g; Z)-module. Thus, the fact that (V ⊗ V * ) GL(g;Z) ∼ = Z implies (L ⊗ L * ) Hg = Z. Next, recall that the intersection form on H induces an isomorphism L * ∼ = H/L. Since there is an exact sequence (L ⊗2 ) Hg − −− → (L ⊗ H) Hg − −− → (L ⊗ L * ) Hg − −− → 0, Z ⊕ Z/2Z if g = 2, 3. Acknowledgments. The authors wish to express their gratitude to Susumu Hirose for his helpful advices. 
The first-named author is supported by JSPS Research Fellowships for Young Scientists (26·110).we obtain Im((L ⊗2 ) Hg → (L ⊗ H) Hg ) = 0.Proof of Theorem 1.2. The exact sequence 1 → Z → H g,1 → H * g → 1 induces an exact sequenceIn Lemma 6.1, we showed the isomorphism H 1 (π 1 Σ g ; H) H * g ∼ = Z induced by the intersection form on H. Thus, restricting the exact sequence to H * g , we obtain a commutative diagram(Z/2Z) 2 if g = 2, 3.(2) When g ≥ 4, the homomorphism H 1 (H * g ; L) → H 1 (H * g ; H) induced by the inclusion L → H is injective. When g = 2, 3, Ker(H 1 (H * g ; L) → H 1 (H * g ; H)) ∼ = Z/2Z. In particular, we haveProof. Consider the exact sequences between homology groups with coefficients in L induced by the forgetful exact sequences 1 → π 1 Σ g → M * g → M g → 1 and its restriction 1 → π 1 Σ g → H * g → H g → 1. Applying Lemma 6.1, we obtain a commutative diagramIn Remark 3.10 and Propositions 5.9 and 5.18, We see that Ker(H 1 (H g ; L) → H 1 (H g ; H)) is trivial when g ≥ 4, and is isomorphic to Z/2Z. In Lemma 3.7 and Propositions 5.9 and 5.18, we also see that Coker(H 1 (H g ; L) → H 1 (H g ; H)) is isomorphic to Z/2Z when g ≥ 4, and is isomorphic to (Z/2Z) 2 when g = 2, 3. Thus, we can determine H 1 (H * g ; L). The mod-2 cohomology of the mapping class group for a surface of genus 2. D J Benson, F R Cohen, Mem. Amer. Math. Soc. 443MRD. J. Benson and F. R. Cohen, The mod-2 cohomology of the mapping class group for a surface of genus 2, Mem. Amer. Math. Soc. 443 (1991), 93-104. MR L Evens, The cohomology of groups, Oxford Mathematical Monographs. Clarendon Press93MR 1144017L. Evens, The cohomology of groups, Oxford Mathematical Monographs, Clarendon Press, 1991. MR 1144017 (93i:20059) The second homology group of the mapping class group of an orientable surface. J Harer, Invent. Math. 72257006MRJ. Harer, The second homology group of the mapping class group of an orientable surface, Invent. Math. 72 (1983) no. 2, 221-239. 
J. Harer, Stability of the homology of the mapping class groups of orientable surfaces, Annals of Mathematics 121 (1985), 215-249.
A. Hatcher and N. Wahl, Stabilization for mapping class groups of 3-manifolds, Duke Math. J. 155 (2010), no. 2, 205-269.
D. Johnson, Spin structures and quadratic forms on surfaces, J. London Math. Soc. 22 (1980), no. 2, 365-373.
N. Kawazumi and S. Morita, The primary approximation to the cohomology of the moduli space of curves and cocycles for the Mumford-Morita-Miller classes, preprint, Univ. of Tokyo, 2001.
N. Kawazumi, Cohomological aspects of Magnus expansions, preprint, arXiv:0505497.
M. Korkmaz and A. I. Stipsicz, The second homology groups of mapping class groups of oriented surfaces, Math. Proc. Cambridge Philos. Soc. 134 (2003), no. 3, 479-489.
E. Luft, Actions of the homeotopy group of an orientable 3-dimensional handlebody, Math. Ann. 234 (1978), no. 3, 279-292.
S. Morita, Characteristic classes of surface bundles, Invent. Math. 90 (1987), no. 3, 551-577.
S. Morita, Families of jacobian manifolds and characteristic classes of surface bundles. I, Ann. Inst. Fourier 39 (1989), no. 3, 777-810.
S. Morita, The extension of Johnson's homomorphism from the Torelli group to the mapping class group, Invent. Math. 111 (1993), 197-224.
W. Pitsch, The 2-torsion in the second homology of the genus 3 mapping class group, preprint, arXiv:1311.5705.
C. R. Popescu, A simple presentation of the handlebody group of genus 2, Bull. Math. Soc. Sci. Math. Roumanie (N.S.) 54(102) (2011), no. 1, 83-92.
T. Sakasai, Lagrangian mapping class groups from a group homological point of view, Algebr. Geom. Topol. 12 (2012), no. 1, 267-291.
T. Satoh, Twisted first homology groups of the automorphism group of a free group, J. Pure Appl. Algebra 204 (2006), no. 2, 334-348.
M. Stukow, The first homology group of the mapping class group of a nonorientable surface with twisted coefficients, preprint, arXiv:1501.01810.
B. Wajnryb, Mapping class group of a handlebody, Fund. Math. 158 (1998), no. 3, 195-228.
Quantum measurement optimization by decomposition of measurements into extremals

Esteban Martínez-Vargas (Física Teòrica: Informació i Fenòmens Quàntics, Departament de Física, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain), Carlos Pineda (Instituto de Física, Universidad Nacional Autónoma de México, Mexico City, CDMX 01000, Mexico), and Pablo Barberis-Blostein (Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Mexico City, CDMX 01000, Mexico)

Scientific Reports 10, 9375 (2020). DOI: 10.1038/s41598-020-65934-w. arXiv: 1901.06179

Abstract. Using the convex structure of positive operator valued measurements and several quantities used in quantum metrology, such as the quantum Fisher information or the quantum Van Trees information, we present an efficient numerical method to find the best strategy allowed by quantum mechanics to estimate a parameter. This method explores extremal measurements, thus providing a significant advantage over previously used methods. We exemplify the method for different cost functions in a qubit and in a harmonic oscillator and find a strong numerical advantage when the desired target error is sufficiently small.
One of the goals of metrology is to provide an optimal strategy to measure the value of a parameter under certain fixed conditions. For completeness, both an approximate value of the parameter and an estimation of the error must be given. If the physical system from which the parameter is to be estimated is analyzed within the framework of quantum mechanics, we shall speak about quantum metrology [1-5]. Motivated by the fact that quantum systems can offer an important advantage over classical systems in precision when estimating a parameter [6], there have been intense theoretical and experimental advances in the area in the last years [7,8].
Moreover, precision in the estimation of parameters has several applications in the development of quantum technologies [9,10] and quantum state manipulation [11].

To estimate a parameter of a physical system one acquires data through measurements; the estimation of the parameter is obtained by applying a function, known as the estimator, to the data. The probability distribution of measurement outcomes can be modelled using a statistical model of the experiment: the probability distribution of outcomes conditioned on the value of the parameter. This statistical model might describe, say, a noisy measurement apparatus. In this article we specialize to the case of quantum mechanics, where the probability distribution is given by the Born rule and the statistical model is obtained once it is decided which operator is going to be measured. In addition to the random component of measurement, we also consider classical noise, which we include through the density matrix formalism.

The cost function that quantifies the error of the parameter estimation of a quantum system (for example the mean squared error) depends on both the measurement to be performed and the estimator; the optimal measurement strategy consists of the quantum measurement and the estimator that extremize it. However, finding the extreme of a cost function over all possible quantum measurements and estimators is not simple. Alternatively, when a Cramér-Rao type inequality exists, the problem can be reduced to finding the extreme of another cost function (for example the Fisher information) over all the quantum measurements. This simplifies the problem because it is no longer necessary to maximize over the space of estimators. One still has to deal with extremizing a cost function over all quantum measurements, which in general is difficult.
However, under special circumstances, such as symmetries, the problem can be simplified [12].

It is possible, though very costly, to numerically find the maximum over all quantum measurements of cost functions. The straightforward way to solve the problem is to randomly sample the space of all positive operator valued measures (POVMs), evaluate the cost function on this sample, and keep the maximum value obtained. This method, which we call the random sampling method (RSM), is very inefficient since the POVM space is large.

In this paper, we show how to numerically find the maximum over all quantum measurements of the Fisher information and of the Bayesian version of this bound, the Van Trees bound [13], in a way that is orders of magnitude faster than using the RSM. We rely on the following: (i) the cost functions of interest in quantum metrology are convex with respect to the POVMs, and (ii) the set of POVMs is convex [14]. Therefore, following the maximum principle [15,16], the maximum over the POVMs must be attained at an extremal point of the POVM set. Our approach is simply to sample extremal POVMs at random and keep the highest value of the appropriate cost function. It is easy to produce a general POVM efficiently from a random unitary matrix; however, it is not trivial to produce random extremal POVMs. For this we use the algorithm proposed by Sentís et al. [17]. We call our method the random extreme sampling method (RESM).
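Point (i), the convexity of the Fisher information with respect to the POVM, can be checked numerically. The sketch below is our own illustration (not the paper's repository code) and assumes only NumPy: it builds two random four-outcome POVMs for the qubit family $\rho(\theta) = |\psi(\theta)\rangle\langle\psi(\theta)|$ with $|\psi(\theta)\rangle = (\cos\theta, \sin\theta)$, mixes them with equal weights, and verifies that the Fisher information of the mixture does not exceed the average of the individual Fisher informations.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, dt = 0.3, 1e-6
psi = lambda t: np.array([np.cos(t), np.sin(t)])
rho = lambda t: np.outer(psi(t), psi(t).conj())

def rand_povm(n=4, d=2):
    """Random n-outcome POVM: normalize n random positive matrices by S^{-1/2}."""
    A = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(n)]
    P = [a @ a.conj().T for a in A]               # positive semidefinite pieces
    w, U = np.linalg.eigh(sum(P))
    S = U @ np.diag(w**-0.5) @ U.conj().T         # S^{-1/2} with S = sum of pieces
    return [S @ p @ S for p in P]                 # elements now sum to the identity

def fisher(E, t):
    """Fisher information of the statistical model p(xi|t) = Tr(rho(t) E_xi)."""
    p = lambda s: np.real([np.trace(rho(s) @ e) for e in E])
    pt, dp = p(t), (p(t + dt) - p(t - dt)) / (2 * dt)
    return np.sum(dp**2 / pt)

E1, E2 = rand_povm(), rand_povm()
Emix = [(a + b) / 2 for a, b in zip(E1, E2)]      # convex combination, still a POVM
lhs = fisher(Emix, theta)
rhs = (fisher(E1, theta) + fisher(E2, theta)) / 2
print(lhs <= rhs)  # True: the Fisher information is convex in the POVM
```

The same inequality holds for arbitrary mixing weights; this is the property that pushes the maximum of the Fisher information to the extremal points of the POVM set.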
The techniques presented here can be used, for example, to find numerically the maximum value of the quantum Van Trees information [18], of the quantum Fisher information (even if the input state is not pure), or of any convex cost function. Together with the maximum, the corresponding quantum measurement is obtained. The method can also be applied to the case of multivariable convex cost functions, and thus be used to find the optimal measurement in the estimation of several parameters. For example: given a multivariable statistical model we can construct the Fisher matrix, whose diagonal elements give a bound on the variance of each parameter; we can then use our method to find the quantum measurement that maximizes any convex cost function of the diagonal elements.

Our approach can also be used in areas other than quantum metrology where a maximization over POVMs is required. An example is statistical decision theory [19]. The problem is the following: we have a set of possible decisions, from which one is chosen depending on the outcome of a quantum system measurement. The probability distribution for the decisions is given by a quantum measurement. We look for the best decision. How good the decision is, is rated by a cost function defined on the space of quantum measurements. The problem of finding the best decision is mapped to the problem of maximizing this cost function over all the quantum measurements. Note that quantum parameter estimation can be cast as a problem of statistical decision theory, where the decision to be chosen is an estimation of the parameter.

Results

In this section we first introduce some convex cost functions that give bounds on the error of parameter estimation. Then we present our main result: a numerical algorithm that allows us to find the maximum over all quantum measurements of any convex cost. We finish with examples of interest where we apply the algorithm.

Bounds in the error of parameter estimation.
We discuss how to get bounds for the error made in an estimation process. We use two error measures: the mean squared error and the Bayesian mean squared error, which is used when some information is known about the parameter. This discussion is general and only assumes that a statistical model is given. We also discuss how to apply these ideas to a quantum system.

Cramér-Rao inequality. In this section, we introduce some basic quantities needed to develop the discussion further. Let

$p(y|\theta)$  (1)

be the probability distribution of the outcomes $y$ of the random variable $Y$, conditioned on a fixed value of the real parameter $\theta$. We assume that each $y$ is a set of real numbers of fixed finite size. The function $p(y|\theta)$ is the statistical model; $y$ represents what is measured in an experiment and its probability distribution $p$ depends on the parameter $\theta$. Using the outcomes $y$, the parameter is estimated through the real-valued function $\hat\theta(y)$, known as the estimator. The estimator is unbiased if it is on average correct, meaning that its expected value is equal to the actual value of the parameter,

$\langle \hat\theta \rangle = \int dy\, \hat\theta(y)\, p(y|\theta) = \theta.$  (2)

The uncertainty of the estimator is given by the mean squared error, defined as

$\varsigma^2 \equiv \int (\hat\theta(y) - \theta)^2\, p(y|\theta)\, dy.$  (3)

We say that the measurement strategy is optimal if the estimator minimizes the mean squared error. Finally, let us define the Fisher information

$F(\theta) \equiv \int \left( \frac{\partial \ln p(y|\theta)}{\partial \theta} \right)^2 p(y|\theta)\, dy.$  (4)

If the estimator is unbiased [i.e. if Eq. (2) holds], using the Cauchy-Schwarz inequality one arrives at the Cramér-Rao inequality [20,21]:

$\varsigma^2\, F(\theta) \geq 1.$  (5)

Note that $\varsigma^2$ depends on the choice of the specific estimator $\hat\theta(y)$, whereas the Fisher information depends only on the probability distribution of the random variable.
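As a concrete check of Eqs. (3)-(5), consider a toy example of ours (not from the paper): the Gaussian statistical model $p(y|\theta)$ with mean $\theta$ and variance $\sigma^2$, for which the Fisher information is exactly $1/\sigma^2$. A direct numerical evaluation of Eq. (4) on a grid reproduces this value, assuming NumPy:

```python
import numpy as np

def fisher_information(p, theta, ys, dtheta=1e-5):
    """Numerical evaluation of F(theta) = ∫ (∂_θ ln p(y|θ))² p(y|θ) dy on a grid."""
    dlnp = (np.log(p(ys, theta + dtheta)) - np.log(p(ys, theta - dtheta))) / (2 * dtheta)
    return np.sum(dlnp**2 * p(ys, theta)) * (ys[1] - ys[0])

sigma = 0.7
gauss = lambda y, th: np.exp(-(y - th)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

ys = np.linspace(-10, 10, 20001)
F = fisher_information(gauss, theta=1.3, ys=ys)
print(F)  # ≈ 1/σ² ≈ 2.04
```

For this model the estimator $\hat\theta(y) = y$ has mean squared error $\sigma^2$, so it saturates the bound $\varsigma^2 F = 1$.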
From the Cramér-Rao inequality, we see that the inverse of the Fisher information bounds the mean squared error from below independently of the estimator we use: the larger the Fisher information, the smaller the error bound. Fisher showed that in the limit where the number of measurements goes to infinity, the maximum likelihood estimator saturates this inequality [22].

Quantum Cramér-Rao inequality. We want to find the best measurement strategy to estimate a parameter, $\theta$, that appears in the Hamiltonian of a quantum system. In order to estimate the parameter we proceed as follows: we start with an initial state and let the system evolve for some fixed time. The dynamics of the system depends on the parameters of the Hamiltonian; after the evolution, the state of the system depends on the parameter we want to estimate: $\rho = \rho(\theta)$. We then measure some observable of the system and use the result to estimate $\theta$. Fixing the Hamiltonian, the evolution time, and the initial state, we want to know if the strategy we are using minimizes the error in the parameter estimation.

General measurements in quantum mechanics are described by the positive operator valued measure (POVM) formalism, which we briefly recall in order to fix the notation. If $\{E(\xi)\}$ is a POVM parametrized by the real parameter $\xi$, then for each value of $\xi$, $E(\xi)$ is a self-adjoint operator on the system Hilbert space. The elements satisfy the completeness relation

$\int E(\xi)\, d\xi = 1,$  (6)

and the probability of measuring the result $\xi$ is

$p(\xi|\theta) = \mathrm{Tr}(\rho(\theta)\, E(\xi)).$  (7)

In order for Eq. (7) to be a probability distribution, we require the elements $E(\xi)$ to be positive semidefinite,

$E(\xi) \geq 0.$  (8)

Notice that $\xi$ can also belong to a finite set (or a combination of several discrete and continuous indices) if the number of possible outcomes is finite. The expressions throughout this article generalize by replacing $\int d\xi$ with $\sum_\xi$. Fixing the POVM and thinking of (7) as the distribution probability of the outcomes [as in Eq.
(1)], one can use the tools introduced in Section (2.1.1). In particular, we can calculate the Fisher information and use the Cramér-Rao inequality to know if a given estimator is optimal. However, note that the Fisher information depends on the POVM we choose. In order to have the lowest bound for the error, we maximize the Fisher information over all the possible measurements [23]:

$F_Q(\theta) = \max_{\{E(\xi)\}} \int \left( \frac{\partial \ln p(\xi|\theta)}{\partial \theta} \right)^2 p(\xi|\theta)\, d\xi.$  (9)

The quantity $F_Q(\theta)$ is known as the quantum Fisher information, and through the Cramér-Rao inequality,

$\varsigma^2\, F_Q(\theta) \geq 1,$  (10)

it tells us the minimal possible error of the best measurement strategy for estimating a parameter appearing in the Hamiltonian of a quantum system. Equation (10) is a direct result of Eq. (5): since Eq. (5) is valid for every POVM, it is valid for the one at which the maximum Fisher information is attained. The POVM that maximizes $F_Q$ is the one that should be used to get the smallest error in the parameter estimation [1]; we call this POVM the optimal POVM. If the quantum state is pure, there are analytical formulas for finding $F_Q$. Otherwise no general formula is known and one must rely on numerical methods.

Bayesian Cramér-Rao inequality. We consider the case where we have some partial knowledge of the parameter to be estimated. An example: we want to estimate the velocity of one particle in a dilute gas at temperature $T$. Without doing any measurement, we know that the particle velocity can be interpreted as a random variable that satisfies the Maxwell-Boltzmann distribution. With the information we already have, we can estimate the velocity as the mean of the Maxwell-Boltzmann distribution, $v = \sqrt{8 K_b T/(\pi m)}$, with a variance $\mu = (3\pi - 8) K_b T/(\pi m)$. Here $K_b$ is the Boltzmann constant and $m$ the mass of the particle. It is reasonable to design the experiment to measure velocities around $v$.
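For a pure state the quantum Fisher information has the closed form $F_Q = 4(\langle \partial_\theta \psi | \partial_\theta \psi \rangle - |\langle \psi | \partial_\theta \psi \rangle|^2)$. The following sketch is our own illustration (the state family and the measurement are our choices, not the paper's): for $|\psi(\theta)\rangle = (\cos\theta, \sin\theta)$ one finds $F_Q = 4$, and the projective measurement in the $\sigma_x$ eigenbasis already attains it, saturating Eq. (10).

```python
import numpy as np

theta, dt = 0.3, 1e-6
psi = lambda t: np.array([np.cos(t), np.sin(t)])
rho = lambda t: np.outer(psi(t), psi(t).conj())      # rho(theta) = |psi><psi|

# Two-outcome POVM: projectors onto the sigma_x eigenbasis |+> and |->
plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
povm = [np.outer(plus, plus), np.outer(minus, minus)]
assert np.allclose(sum(povm), np.eye(2))             # completeness, Eq. (6)

# Born rule, Eq. (7): p(xi|theta) = Tr(rho(theta) E_xi)
p = lambda t: np.real([np.trace(rho(t) @ E) for E in povm])

# Fisher information, Eq. (4), of this measurement via central differences
pt, dp = p(theta), (p(theta + dt) - p(theta - dt)) / (2 * dt)
F = np.sum(dp**2 / pt)

# Quantum Fisher information of a pure state: F_Q = 4(<ψ'|ψ'> - |<ψ|ψ'>|²)
dpsi = (psi(theta + dt) - psi(theta - dt)) / (2 * dt)
FQ = 4 * (dpsi @ dpsi - abs(psi(theta) @ dpsi)**2)

print(F, FQ)  # both ≈ 4: this measurement saturates the bound for every theta
```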
The result of the measurement should improve the estimation, giving an error smaller than $\mu$. Note that, in general, experiments designed to measure a parameter usually work for some expected range of its value, which implies that assumptions were made about the value of the parameter. The Bayesian Cramér-Rao inequality can be used to decide what is the best estimator in the situation where we have partial knowledge of the parameter. We model the parameter as the random variable $\Theta$, with outcomes $\theta$ and probability distribution $\lambda(\theta)$. The outcomes of the experiment are modelled as the random variable $Y$, with outcomes $y$ and probability distribution $p(y|\theta)$. The cost function we want to minimize is the mean squared error

$\Xi^2 = \iint (\hat\theta(y) - \theta)^2\, P(y, \theta)\, dy\, d\theta,$  (11)

where $P(y, \theta) = p(y|\theta)\, \lambda(\theta)$. It can be shown that the error is bounded from below by the Cramér-Rao type inequality [13]

$\Xi^2\, Z \geq 1,$  (12)

where

$Z = \int F(\theta)\, \lambda(\theta)\, d\theta + \int \left( \frac{\partial \ln \lambda(\theta)}{\partial \theta} \right)^2 \lambda(\theta)\, d\theta.$  (13)

The first term of the sum is the expectation value of the Fisher information; the second term is the Fisher information of the probability distribution of the possible values of the parameter. The last term codifies what we already know about the parameter. As can be seen from the previous equation, the generalized Fisher information is larger than the Fisher information due to the knowledge we already have of the parameter. This has a simple interpretation: we can use $\lambda(\theta)$ to estimate the parameter, and measuring the system necessarily diminishes the error in the estimation of the parameter. The best strategy for measuring a parameter, with known information codified in a probability distribution, is given by the estimator $\hat\theta$ that saturates inequality (12). If we want to estimate the outcomes of a random variable, the problem of finding the best strategy is exactly the same as discussed in this section.
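To make Eq. (13) concrete, here is a toy example of ours (not taken from the paper): a model with constant Fisher information $F(\theta) = 1/\sigma^2$ and a Gaussian prior $\lambda(\theta)$ of width $\sigma_p$, for which $Z = 1/\sigma^2 + 1/\sigma_p^2$ exactly. Evaluating both terms on a grid reproduces this value, assuming NumPy:

```python
import numpy as np

sigma, sigma_p = 0.5, 2.0                 # model width and prior width
thetas = np.linspace(-20, 20, 40001)
dth = thetas[1] - thetas[0]
lam = np.exp(-thetas**2 / (2 * sigma_p**2)) / np.sqrt(2 * np.pi * sigma_p**2)

# First term of (13): expectation of F(θ) under the prior; here F(θ) = 1/σ².
term1 = np.sum((1 / sigma**2) * lam) * dth

# Second term of (13): the Fisher information of the prior itself.
dlnlam = np.gradient(np.log(lam), dth)
term2 = np.sum(dlnlam**2 * lam) * dth

Z = term1 + term2
print(Z)  # ≈ 1/σ² + 1/σ_p² = 4.25
```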
In this case the experiment is repeated several times, with the values of the parameter drawn from the probability distribution of the random variable. An example: measuring the velocities of several particles in thermal equilibrium in a dilute gas, where the velocities of the classical particles obey a Maxwell-Boltzmann distribution. In this context we found useful ref. 24, a review of Bayesian inference in physics.

Bayesian quantum Cramér-Rao inequality. We want to estimate the parameter θ of a Hamiltonian in a quantum system where the known information about θ is codified in the probability distribution λ(θ). Given a POVM {E(ξ)}, a statistical model can be built through Eq. (7), and the problem is reduced to the classical one. The POVM {E(ξ)}_max that maximizes the generalized Fisher information (13), together with the appropriate estimator, saturates the Cramér-Rao type inequality

Ξ² Z_Q ≥ 1,  (14)

where

Z_Q = max_{{E(ξ)}} ∫∫ ( ∂ ln p(ξ|θ) / ∂θ )² P(θ, ξ) dθ dξ + ∫ ( ∂ ln λ(θ) / ∂θ )² λ(θ) dθ.  (15)

For equal weights, the convexity of the Fisher information follows from the identity

(1/2) F(p₁) + (1/2) F(p₂) − F((p₁ + p₂)/2) = ∫ (p₁′ p₂ − p₂′ p₁)² / [ 2 p₁ p₂ (p₁ + p₂) ] dx ≥ 0.  (16)

For a combination with other weights, continuity and a recursive procedure imply convexity of F. From these two observations, we infer that the Van Trees information is also convex, as the integral of convex functions is also convex.

Since the maximum of a convex cost function lies on the extremal points of the set of all POVMs, we only need to search in this subset, simplifying greatly the optimization task. A way to randomly sample such a set is presented in the following subsection.

The algorithm. The outline of the algorithm is as follows: we produce a random POVM and decompose it into extremals. We then evaluate the cost function using the extremal POVMs and choose the one which yields the highest value. We repeat the procedure several times and keep the optimal POVM. We provide an implementation in an online repository 26.

Random Sampling Method.
To produce a random POVM, we use the purification algorithm 27 backwards; the algorithm transforms a general POVM into a usual projective measurement in an enlarged space. The first step is to produce a random unitary matrix that acts on both the original space and an ancilla Hilbert space. The dimension of this ancilla space is the number of outputs of the initial POVM. Because the extremal POVMs have at most d² elements, where d is the Hilbert space dimension 14, nothing is gained if the dimension of the ancilla space is larger than d². For example, if we are trying to estimate a parameter of a qubit, we only need to use a dimension-2 Hilbert space and a dimension-4 ancilla space. If the Hilbert space is large or has infinite dimension, we run the algorithm for some arbitrary dimension of the ancilla space and then increase its dimension until stable results are obtained. The aforementioned unitary matrix is chosen with a measure invariant with respect to multiplication by unitary operators, i.e., with the Haar measure. The ensemble induced by this measure is called the circular unitary ensemble (CUE) 28. The easiest way to construct a representative member of the CUE is to construct a member of the Gaussian unitary ensemble (GUE) 28, the ensemble of Hermitian matrices invariant under unitary conjugation, subject to the condition that the ensemble average of the trace of the square of the matrices is fixed. Generating a member H of such an ensemble is simple: we build a matrix A with Gaussian complex numbers, all with equal standard deviation and zero mean. Let H = A + A†; we can then calculate the matrix elements of U H U† for any unitary U and verify that the distribution of all matrix elements of the rotated matrix remains invariant. Thus, the eigenbasis of H has the Haar measure, and the matrix U that diagonalizes H is a member of the CUE with the appropriate measure.
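The GUE-to-CUE recipe above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the repository's CUEMember routine; the extra random column phases are our addition, guarding against the fixed phase convention of the eigensolver.

```python
import numpy as np

def cue_member(dim, rng=None):
    """Sample a Haar-distributed unitary by diagonalizing a GUE matrix,
    following the recipe in the text."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    h = a + a.conj().T           # GUE member: Hermitian, unitarily invariant
    _, u = np.linalg.eigh(h)     # the eigenbasis carries the Haar measure
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=dim))
    return u * phases            # fix the eigenvector phase ambiguity

u = cue_member(4)
print(np.allclose(u.conj().T @ u, np.eye(4)))  # → True
```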
The routine CUEMember calculates such an operator and can be found in 26. It is well known that we can interpret an arbitrary n-output POVM as a projective measurement in a space built from the original one and a coupled n-dimensional ancilla space. If {|μ⟩}_{μ=1,…,n} is an orthonormal basis in the ancilla space, |Ψ⟩ a state in the original Hilbert space, and {Q_m} the operators characterizing the POVM, we can define a unitary operator U such that

U |Ψ⟩|1⟩ = Σ_m √(Q_m) |Ψ⟩|m⟩.  (17)

The projective measurement {1 ⊗ |m⟩⟨m|}_{m=1,…,n} is then equivalent to the measurement defined by {Q_m}, in the sense that the probabilities are the same and the states, after discarding the ancilla space, are also identical, see 27. Equation (17) can also be interpreted as a way to induce a POVM measurement in the original Hilbert space, starting from a unitary operator in an extended Hilbert space. In fact, if we replace |Ψ⟩ by the basis state |j⟩ and premultiply by the bra ⟨i|⟨m|, we obtain

⟨i| √(Q_m) |j⟩ = ⟨i|⟨m| U |j⟩|1⟩.  (18)

Thus, from the random unitary U we can get all matrix elements of each of the Q_m, according to Eq. (18). Since for every POVM one can build a unitary transformation in the extended space such that Eq. (18) holds 27, sampling all unitaries in the extended space guarantees sampling all POVMs with the corresponding number of outcomes. The routine POVM calculates the POVM in this way and can be found in 26. The aforementioned method to sample POVMs will be called the random sampling method, or RSM for short.

Naimark dilation. At this point, we would like to mention another POVM sampling method, inspired by Naimark's dilation theorem 29. We start with a unitary matrix acting on the original space tensored with an ancilla space H_ancilla. The columns |v_i⟩ of this matrix can be interpreted as an orthonormal basis on the extended space of dimension d′. Notice that d′ is a multiple of d, the dimension of the original Hilbert space.
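Equations (17)-(18) suggest the following sketch for inducing a random POVM from a random unitary on system and ancilla. This is an assumption-laden reimplementation, not the repository's POVM routine: for a generic unitary, the block A_m = ⟨m|U · |1⟩ equals √(Q_m) only up to a unitary factor, so the sketch forms the POVM elements as Q_m = A_m† A_m, which still satisfy positivity and completeness.

```python
import numpy as np

def random_povm(dim, n_outcomes, rng=None):
    """Induce an n-outcome POVM on a dim-dimensional system from a random
    unitary on system (x) ancilla, in the spirit of Eq. (18)."""
    rng = np.random.default_rng() if rng is None else rng
    # Haar-style random unitary via QR of a complex Gaussian matrix
    size = dim * n_outcomes
    z = rng.normal(size=(size, size)) + 1j * rng.normal(size=(size, size))
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))   # phase fix for the Haar measure
    # index layout: row = (system i, ancilla m), column = (system j, ancilla mu)
    u4 = q.reshape(dim, n_outcomes, dim, n_outcomes)
    blocks = [u4[:, m, :, 0] for m in range(n_outcomes)]  # A_m = <m|U, ancilla in |1>
    return [a.conj().T @ a for a in blocks]               # Q_m = A_m^dag A_m

qs = random_povm(2, 4)
print(np.allclose(sum(qs), np.eye(2)))  # completeness check → True
```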
Let us define the d′ operators

M_i = Tr_ancilla[ (1_d ⊗ |r⟩⟨r|) |v_i⟩⟨v_i| ],

where |r⟩ is a random state on the ancilla space.

Conversion to a rank-1 POVM. To proceed further, we need a rank-1 POVM, so we must transform the aforementioned POVM accordingly. Recall that a rank-1 POVM is one whose elements are all rank-1 operators; typically, however, the {Q_m} are not rank-1 operators. For any given Q ∈ {Q_m} that is not a rank-1 operator, we perform the spectral decomposition Q = Σ_{i=1}^{l} λ_i Q̃_i using a standard algorithm. Notice that the Q̃_i are projectors and λ_i > 0. We then replace Q with the l operators λ_i Q̃_i, and the completeness relation remains valid since Σ_i λ_i Q̃_i = Q. Notice that the number of elements at the end of the algorithm will be larger than the number of outputs of the initial POVM.

Even though x = a is a solution to the problem, the usual numerical algorithms provide an extremal point. Notice that if an element x_i of the solution is 0, it means that we do not include the corresponding operator in the POVM. The construction of the matrix A and the vector b is done with the routine AConstruction. Let this extremal solution be x_ext. The Mathematica LinearProgramming[c,A,b] implementation solves the following linear program:

minimize c^T x subject to A x ≥ b, x ≥ 0,  (20)

where c is a constant vector. As we are interested only in finding a solution and not in optimizing a given vector, we pass a random vector c to the usual Mathematica LinearProgramming routine. This vector c has random entries between 0 and 1, given by the uniform distribution with the Mathematica routine RandomReal. This process is done by our routine LinearProg. To obtain the extremal POVM we start by defining x′ via

a = p x_ext + (1 − p) x′,  (21)

with p a scalar. Requiring that x′ ≥ 0 can be enforced by letting

p = min_i ( a_i / x_ext,i ),  (22)

which in turn implies that p is a probability and that for some i, x′_i = 0.
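The peeling step of Eqs. (21)-(22) can be illustrated on a toy polytope. In the sketch below (an illustration, not the repository's AConstruction/LinearProg routines) the probability simplex stands in for the POVM constraint set {x : Ax = b, x ≥ 0}; for this special case the linear program of Eq. (20) has a closed-form solution, whereas a real implementation would call an LP solver as the text does with Mathematica's LinearProgramming.

```python
import numpy as np

def extremal_vertex(c):
    """Solve the toy linear program 'minimize c.x subject to sum(x)=1, x>=0'
    in closed form: the optimum sits at the vertex e_k with k = argmin c."""
    x = np.zeros(len(c))
    x[np.argmin(c)] = 1.0
    return x

def peel_extremal(a, rng):
    """One decomposition step, Eqs. (21)-(22): write the interior point a of
    the simplex as a = p*x_ext + (1-p)*x_prime with x_ext extremal."""
    c = rng.uniform(0, 1, size=len(a))       # random objective vector, cf. Eq. (20)
    x_ext = extremal_vertex(c)
    mask = x_ext > 0
    p = np.min(a[mask] / x_ext[mask])        # Eq. (22)
    x_prime = (a - p * x_ext) / (1 - p)      # Eq. (21); one entry becomes 0
    return p, x_ext, x_prime

a = np.array([0.5, 0.3, 0.2])                # stand-in for the POVM trace vector
p, x_ext, x_prime = peel_extremal(a, np.random.default_rng(1))
print(np.allclose(a, p * x_ext + (1 - p) * x_prime))  # → True
```

Iterating on x_prime, which has one fewer nonzero entry at each step, mirrors the recursion described in the text.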
If we denote by Q_ext the POVM built from x_ext and by Q′ the one built from x′, we can write

Q = p Q_ext + (1 − p) Q′.

Indeed, Q_ext is an extremal POVM 17, and since one of the elements of x′ is null, Q′ is an (n − 1)-output POVM (given that Q is an n-output POVM), for which we can iterate the algorithm until a single-output POVM is obtained. Notice that with this algorithm, all POVMs with a given number of outputs can in principle be sampled. The routine BuildExtremal constructs an extremal POVM from the output of the linear program. The routine CalculateP calculates the probability p from Eqs. (21) and (22). Finally, the auxiliary solution Q′ is built with the routine AuxiliarSol.

Examples. In this section we apply the method described in the section "Numerical algorithm" to estimate the quantum Fisher information and the quantum Van Trees information. In the examples we observe a computational speedup when using the algorithm presented here compared with methods that randomly sample the whole POVM space.

Qubit. We use the algorithm to calculate the quantum Fisher information for estimating a parameter encoded in a pure qubit state, and we use known analytical results to benchmark our numerical method. We consider a spin-1/2 particle in the state

|Ψ(θ, η)⟩ = ( e^{−iθ/2} cos(η/2), e^{iθ/2} sin(η/2) )^T,  (23)

where θ ∈ [0, 2π) is the phase between the two basis states and η ∈ [0, π] is a known parameter that characterizes the weight of each element of the superposition. The problem is the following: we want to find the best strategy to estimate the phase θ given that η is known 30. Given a set of states parametrized by ξ, i.e. ρ(ξ), we consider the following family of POVMs:

P(ξ) = { ρ(ξ), 1 − ρ(ξ) }.  (24)

Each of these POVMs has two elements, corresponding to the outcomes 1 and 2. In this subsection we shall consider the particular case

ρ(ξ) = |Ψ(ξ, η)⟩⟨Ψ(ξ, η)|.  (25)

Using Eq.
(7) we find that the probability of measuring outcome 1 or 2 with the POVM ξ is given by

p^{(ξ,η)}(1|θ) = 1 − sin²(η) sin²((θ − ξ)/2),  p^{(ξ,η)}(2|θ) = 1 − p^{(ξ,η)}(1|θ),  (26)

which is a function of the parameter we want to estimate, i.e. θ. Notice that there is a dependence on the initial state, via η, and on the POVM used, via ξ; we make the dependence on the POVM explicit via a superscript, writing the corresponding Fisher information as

F^{(ξ,η)}(θ) = [ ∂_θ p^{(ξ,η)}(1|θ) ]² / [ p^{(ξ,η)}(1|θ) p^{(ξ,η)}(2|θ) ].  (27)

When the state is pure, the maximum quantum Fisher information can be analytically calculated 23. In this example the quantum Fisher information is the maximum of F^{(ξ,η)}(θ) with respect to ξ:

F_Q(θ) = max_ξ F^{(ξ,η)}(θ) = sin²(η).  (28)

In order to evaluate the performance of the proposed algorithm, we apply the RSM and RESM methods and compare their performance with the exact result (28). We define the errors

Δ_RSM = |F_Q(θ) − F_RSM|,  Δ_RESM = |F_Q(θ) − F_RESM|,

where F_RSM and F_RESM are the Fisher information numerically calculated using the RSM and the RESM, respectively. In Fig. 1(b), we plot running time vs. error for the two methods. It is clear from the plot that RESM is better, and the longer the program runs, the better the results using RESM compared with RSM. For this example, we obtain an error two orders of magnitude smaller running the program for the same length of time.

Now we consider that we have some information about θ codified in the probability distribution p(θ); limits to the error in the estimation are given by the Cramér-Rao type inequality (14). We assume that the angle θ has the uniform distribution p(θ) = 1/(2π) in Eq. (29). First we consider the maximization of the generalized Fisher information over the family of POVMs P(ξ) given by Eqs. (24) and (25),

Z̃_Q^P = max_{P(ξ)} ∫ dθ p(θ) F^{(ξ,η)}(θ).  (29)

Because we are using a subset of all the POVMs, Z̃_Q^P ≤ Z_Q; this approach allows us to get an analytic approximation for the quantum Van Trees information. For a uniform superposition (η = π/2), the Fisher information becomes independent of ξ; in fact Z̃_Q^P = Z_Q = 1, see Eq. (27).
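As a small numerical check of Eq. (28), one can evaluate the standard pure-state formula F_Q = 4(⟨∂θΨ|∂θΨ⟩ − |⟨Ψ|∂θΨ⟩|²), one of the analytical formulas alluded to above for pure states. The sketch assumes the relative phases e^{∓iθ/2} in the state (23) and uses a finite-difference derivative.

```python
import numpy as np

def psi(theta, eta):
    """Qubit state of Eq. (23), phases e^{-i theta/2}, e^{+i theta/2} assumed."""
    return np.array([np.exp(-1j * theta / 2) * np.cos(eta / 2),
                     np.exp( 1j * theta / 2) * np.sin(eta / 2)])

def qfi_pure(theta, eta, h=1e-6):
    """Quantum Fisher information of a pure state via
    F_Q = 4(<dpsi|dpsi> - |<psi|dpsi>|^2), derivative by central differences."""
    dpsi = (psi(theta + h, eta) - psi(theta - h, eta)) / (2 * h)
    p = psi(theta, eta)
    return (4 * (np.vdot(dpsi, dpsi) - abs(np.vdot(p, dpsi))**2)).real

# compare against the analytic result F_Q = sin^2(eta) of Eq. (28)
for eta in (0.3, np.pi / 2, 2.0):
    print(round(qfi_pure(0.7, eta), 5), round(np.sin(eta)**2, 5))
```

The two printed columns agree to the displayed precision, and the result is independent of θ, as expected for a phase parameter.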
This implies that any POVM from the family P(ξ) maximizes the Fisher information. In general we obtain

Z̃_Q^P(η) = 1 − |cos(η)|,  (30)

so we can assert that if only POVMs of the family P(ξ) are allowed, the best estimation is obtained in the case η = π/2. Now we apply the RESM to calculate Z_Q and compare it with Z̃_Q^P(η), see Fig. 1(a). The maximum of Z_Q is again obtained when η = π/2. That means that the lowest error in the phase estimation is obtained when the weights of the superposition are the same. The figure suggests that Z_Q = Z̃_Q^P. We observed that, surprisingly, almost any extremal POVM is useful for finding the maximum Z_Q. We require very few samplings (≈10) to observe good agreement between Eq. (30) and the numerical calculations, and we arrive at a reasonable estimate with a single sample for most values of η.

Phase estimation. We calculate the Van Trees information for the phase estimation problem, one of the workhorses of quantum metrology. Since no analytical solutions are known for this case, it is an interesting testbed for our method.

Initial coherent state. We want to estimate the phase difference θ between two paths that light can follow, see 18 for a similar calculation. We probe the system with a coherent state, such that one path yields the state |α⟩ (with α a complex number) and the other

|φ(θ)⟩ = e^{i n̂ θ} |α⟩ = |e^{iθ} α⟩,  (31)

where n̂ is the number operator. Assume that we know, with some error, the size and the refractive index of the object that creates the phase difference. We can make an initial estimation of the phase difference between the two paths because it is proportional to the length travelled inside the object. We can model this situation assuming that θ is a random variable. As an example, we consider a Gaussian distribution centered at π, with standard deviation π/4 and trimmed at the edges (0 and 2π). Using the RESM, we calculate the quantum Van Trees information for different values of |α|, see Fig. 2(a).
The line is obtained using Eq. (24) with ρ(ξ) = |φ(ξ)⟩⟨φ(ξ)| as an ansatz. The figure suggests that the proposed family of POVMs is a good ansatz. When |α| decreases, the Van Trees information decreases, but at |α| = 0 we still have Z_Q > 0, since an estimation of the phase difference can be made with the information we already have about the parameter prior to any measurement. In order to do the calculation we approximate the coherent states in a truncated Hilbert space and limit the number of outcomes of the POVMs. We observed the results as a function of the Hilbert space dimension and of the number of outcomes of the POVMs, and found that a Hilbert space of dimension 7 and a POVM with 10 outcomes give us stable results.

Initial displaced thermal state. As a final example, we consider estimating a parameter, chosen from a given distribution, encoded in a non-pure state. Again, there are no analytical expressions for the quantum Fisher information in this case; we calculate Z_Q in order to bound the error in estimating the parameter. We build upon the last example, considering a thermal state displaced by the same operator that would give the state (31). Let

ρ_T(θ, α) = e^{iθn̂} D(α) ρ(T) D†(α) e^{−iθn̂}  (32)

be the state in which the parameter θ is encoded, where ρ(T) is a thermal state and D(α) the displacement operator. For Fig. 2(b) we used again a Gaussian distribution with mean π and standard deviation π/4. In Fig. 2(b) we show the numerical calculations of Z_Q using the RESM algorithm, compared with the ansatz composed of the two-outcome POVM (24) with ρ(ξ) = |φ(ξ)⟩⟨φ(ξ)|, see Eq. (32). We see that the points calculated with the RESM algorithm, which are a lower bound of Z_Q, beat the ansatz case for most points. We expect such behavior, as the state in consideration (32) is a mixed state.
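The state (32) can be sketched in a truncated Fock basis as below. The truncation dimension and parameter values are illustrative choices of ours, and the check relies on the standard identity that a displaced thermal state has mean photon number |α|² + n̄, with n̄ the thermal occupation.

```python
import numpy as np

DIM = 40  # Fock-space truncation; must greatly exceed |alpha|^2 + nbar

def displaced_thermal(alpha, nbar, theta, dim=DIM):
    """Displaced, phase-rotated thermal state, in the spirit of Eq. (32)."""
    n = np.arange(dim)
    a = np.diag(np.sqrt(n[1:].astype(float)), 1)       # annihilation operator
    pn = (nbar / (1 + nbar))**n / (1 + nbar)           # geometric photon statistics
    rho_th = np.diag(pn / pn.sum())                    # thermal state rho(T)
    k = alpha * a.conj().T - np.conj(alpha) * a        # generator of D(alpha)
    w, v = np.linalg.eigh(1j * k)                      # i*k is Hermitian
    d = v @ np.diag(np.exp(-1j * w)) @ v.conj().T      # D(alpha) = exp(k)
    u = np.diag(np.exp(1j * theta * n))                # exp(i theta n_hat)
    return u @ d @ rho_th @ d.conj().T @ u.conj().T

rho = displaced_thermal(0.8, 0.1, np.pi / 3)
n_op = np.diag(np.arange(DIM).astype(float))
print(round(np.trace(rho).real, 6), round(np.trace(n_op @ rho).real, 4))  # → 1.0 0.74
```

Since the truncated generator k is still anti-Hermitian, its exponential is exactly unitary, so the trace is preserved by construction.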
Discussion. The random extremal sampling method (RESM) can be used to efficiently find the maximum of a cost function over all possible quantum measurements. In particular, it is useful for finding limits on the precision of parameter estimation, through the cost function known as the quantum Fisher information, when the state to be measured is a mixed state. It can also be used to find the optimal measurement strategy for a given convex cost function, by finding the POVM that maximizes it at a considerably lower computational cost.

[Figure 1 caption, continued] The ansatz (29), resulting in Eq. (30), is shown as a blue line. We consider different families of states parametrized by η, see Eq. (23). The number of outcomes of the POVM is fixed to 4, as this is the maximum number of outcomes for an extremal POVM in a 2-dimensional Hilbert space 25. Note that the RESM gives much better results with one sample than the RSM with 1000. (b) Error in the numerical estimation of F_Q, see Eq. (28), sampling directly the whole space of POVMs (RSM) or only its extremal points (RESM) for η = π/2, with respect to the computational time invested. The slope for the RSM case is m_RSM = −0.63 and for the RESM it is m_RESM = −1.84. The error decreases much faster using the RESM rather than the RSM.

Scientific Reports | (2020) 10:9375 | https://doi.org/10.1038/s41598-020-65934-w

Methods. The code implementation can be obtained in the repository 26.

[Figure 2 caption, continued] (we varied r) with a random phase chosen from a Gaussian distribution. It can be seen that the ansatz does not work. For figure (a) we sampled 150 times, considered a POVM with 10 outcomes, and truncated the photon space to the lowest seven states of the harmonic oscillator. For figure (b) we sampled 200 times with the same Hilbert-space dimension and number of outcomes; the dimensionless temperature is k_b T = 10⁻³.
We call Z_Q the quantum Van Trees information. If we want to minimize the error in the parameter estimation, and we codify what we know about the parameter in the probability distribution λ(θ), we have to implement the quantum measurement given by {E(ξ)}_max 18.

Numerical algorithm. The calculation of cost functions such as F_Q or Z_Q is not easy, as it implies an optimization over all POVMs. In this section, we present our main result: an efficient numerical procedure to calculate the maxima, over all POVMs, of convex cost functions.

Figure 1. We study the numerical accuracy (a) and time cost (b) of estimating the quantum Van Trees information, for a qubit encoding parameter θ on its phase, see Section 2.3.1. (a) A numerical estimation was done using the RSM method with n_ens = 10 (green triangles) and (diamonds) samples. The ansatz

To reproduce the results presented in 2.3.1, set the flag -o to Qubit and vary the flag --EtaAngle from 0 to π. For the results in section 2.3.2 (coherent and displaced thermal states), set the flag -o to CohPlusTherGaussian or to DispTherGaussian, respectively. We also set the temperature with -T 0.001, the number of times to sample the space with -s 150 (or 200 for the displaced thermal state), the dimension of the Hilbert space describing the system with --HilbertDim 7, and the number of outcomes of the POVM with --Outcomedim 10. For the pure state, as in 2.3.2, set -o CohPlusTherGaussian --MixConstant 1. The squared norm of α is set with the option --MeanPhotonNumb, which can be varied to reproduce the plots. The whole data set can be obtained with the command make all.

Figure 2. (a) Quantum Van Trees information for estimating a Gaussian-distributed random phase acquired by a coherent state.
(b) Calculation of Z_Q for a displaced thermal state.

|r⟩ is a random state on the ancilla space. Notice that ⟨Ψ|M_i|Ψ⟩ = |⟨v_i| (|Ψ⟩ ⊗ |r⟩)|², the probability of projecting the state |Ψ⟩ ⊗ |r⟩ onto |v_i⟩, so all M_i are semipositive definite operators. Moreover, they inherit the completeness relation Σ_i M_i = 1 from the completeness relation Σ_i |v_i⟩⟨v_i| = 1 of the orthonormal basis in the enlarged space. Thus, the {M_i} form a POVM with d′ outcomes. If we build a POVM with the aforementioned recipe, with the unitary matrix chosen from the CUE and the state |r⟩ with the Haar measure, we say that we are sampling a POVM with the ND method.

This check and the corresponding transformation of the POVMs are done with the routines projector and eivalues.

Convexity. The quantum Van Trees information is convex; this follows directly from noticing that the set of POVMs 14,25 and the Fisher information are convex. The Fisher information can be rewritten as F = ∫ (p′)²/p dx, where the prime indicates the derivative with respect to θ; its convexity for equal weights follows from the identity in Eq. (16), and the Van Trees information is then also convex, as the integral of convex functions is convex.

Random extremal sampling method. Let us define a_i = Tr Q_i and A_ij = Tr(Q_j G_i)/a_j, with {G_j} an orthonormal traceless basis for Hermitian matrices of the appropriate dimension; in our case, we used the Gell-Mann matrices. We also define A_{d²,j} = 1, so that the completeness condition over POVMs reads A a = b, if we define the d²-dimensional vector b = (0, …, 0, d). We now propose the linear program

find x subject to A x = b, x ≥ 0.  (19)
© The Author(s) 2020

Acknowledgements. We would like to thank François Leyvraz for discussions. Support by PASPA-DGAPA, UNAM, projects CONACyT 285754 and UNAM-PAPIIT IG100518, IN-107414. EMV acknowledges support from Spanish MINECO project FIS2016-80681-P with the support of AEI/FEDER, UE funds and Generalitat de Catalunya, project CIRIT 2017-SGR-1127.

Author contributions

Competing interests. The authors declare no competing interests.

Additional information. Correspondence and requests for materials should be addressed to P.B.-B. Reprints and permissions information is available at www.nature.com/reprints. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Escher, B. M., Matos Filho, R. L. & Davidovich, L. Quantum metrology for noisy systems. Braz. J. Phys. 41, 229-247 (2011).
2. Fujiwara, A. Strong consistency and asymptotic efficiency for adaptive quantum estimation problems. J. Phys. A 39, 12489 (2006).
3. Giovannetti, V., Lloyd, S. & Maccone, L. Advances in quantum metrology. Nat. Photonics 5, 222-229 (2011).
4. Tsang, M. Conservative classical and quantum resolution limits for incoherent imaging. J. Mod. Opt. 65, 1385-1391 (2018).
5. Jarzyna, M. & Demkowicz-Dobrzański, R. True precision limits in quantum metrology. New J. Phys. 17, 013010 (2015).
6. Giovannetti, V., Lloyd, S. & Maccone, L. Quantum-enhanced measurements: beating the standard quantum limit. Science 306, 1330-1336 (2004).
7. Tóth, G. & Apellaniz, I. Quantum metrology from a quantum information science perspective. J. Phys. A 47, 424006 (2014).
8. Colangelo, G., Ciurana, F. M., Bianchet, L. C., Sewell, R. J. & Mitchell, M. W. Simultaneous tracking of spin angle and amplitude beyond classical limits. Nature 543, 525-528 (2017).
9. MacFarlane, A. G. J., Dowling, J. P. & Milburn, G. J. Quantum technology: the second quantum revolution. Philos. Trans. R. Soc. A 361, 1655-1674 (2003).
10. Paris, M. G. A. Quantum estimation for quantum technology. Int. J. Quantum Inf. 07, 125-137 (2009).
11. O'Brien, J. L., Furusawa, A. & Vuckovic, J. Photonic quantum technologies. Nat. Photonics 3, 687-695 (2009).
12. Holevo, A. S. Probabilistic and Statistical Aspects of Quantum Theory, 2nd ed., Publications of the Scuola Normale Superiore Monographs (Springer, Dordrecht, 2011).
13. Van Trees, H. L. Detection, Estimation, and Modulation Theory (John Wiley & Sons, 2004).
14. Haapasalo, E., Heinosaari, T. & Pellonpää, J.-P. Quantum measurements on finite dimensional systems: relabeling and mixing. Quantum Inf. Process. 11, 1751-1763 (2012).
15. Rockafellar, R. T. Convex Analysis (Princeton University Press, 1972).
16. Boyd, S. & Vandenberghe, L. Convex Optimization (Cambridge University Press, New York, NY, USA, 2004).
17. Sentís, G., Gendra, B., Bartlett, S. D. & Doherty, A. C. Decomposition of any quantum measurement into extremals. J. Phys. A 46, 375302 (2013).
18. Martínez-Vargas, E., Pineda, C., Leyvraz, F. & Barberis-Blostein, P. Quantum estimation of unknown parameters. Phys. Rev. A 95, 012136 (2017).
19. Holevo, A. Statistical decision theory for quantum systems. J. Multivar. Anal. 3, 337-394 (1973).
20. Cramér, H. Mathematical Methods of Statistics (Princeton University Press, 1945).
21. Rao, C. R. Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 37, 81-91 (1945).
22. Fisher, R. A. On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. A 222, 309-368 (1922).
23. Braunstein, S. & Caves, C. Statistical distance and the geometry of quantum states. Phys. Rev. Lett. 72, 3439-3443 (1994).
24. von Toussaint, U. Bayesian inference in physics. Rev. Mod. Phys. 83, 943-999 (2011).
25. D'Ariano, G. M., Presti, P. L. & Perinotti, P. Classical randomness in quantum measurements. J. Phys. A 38, 5979 (2005).
26. Martínez-Vargas, E. Random sampling of extremal POVMs. GitHub, https://github.com/estebanmv/Random-Sampling-Extremal-POVMs (2019).
27. Nielsen, M. & Chuang, I. Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, 2000).
28. Mehta, M. L. Random Matrices, 2nd edn. (Academic Press, San Diego, California, 1991).
29. Paulsen, V. Completely Bounded Maps and Operator Algebras, Cambridge Studies in Advanced Mathematics (Cambridge University Press, 2003).
30. Barndorff-Nielsen, O. E. & Gill, R. D. Fisher information in quantum statistics. J. Phys. A 33, 4481 (2000).
[ "https://github.com/estebanmv/Random-Sampling-Extremal-" ]
[ "Convex Hull Approximation of Nearly Optimal Lasso Solutions", "Convex Hull Approximation of Nearly Optimal Lasso Solutions" ]
[ "Satoshi Hara \nOsaka University\n\n", "Takanori Maehara [email protected] \nOsaka University\n\n" ]
[ "Osaka University\n", "Osaka University\n" ]
[]
In an ordinary feature selection procedure, a set of important features is obtained by solving an optimization problem such as the Lasso regression problem, and we expect that the obtained features explain the data well. In this study, instead of the single optimal solution, we consider finding a set of diverse yet nearly optimal solutions. To this end, we formulate the problem as finding a small number of solutions such that the convex hull of these solutions approximates the set of nearly optimal solutions. The proposed algorithm consists of two steps: First, we randomly sample the extreme points of the set of nearly optimal solutions. Then, we select a small number of points using a greedy algorithm. The experimental results indicate that the proposed algorithm can approximate the solution set well. The results also indicate that we can obtain Lasso solutions with a large diversity.
10.1007/978-3-030-29911-8_27
[ "https://arxiv.org/pdf/1810.05992v1.pdf" ]
53,189,386
1810.05992
d30aa7146942dc1c74c8b782113d2595febbf346
Convex Hull Approximation of Nearly Optimal Lasso Solutions

Satoshi Hara (Osaka University), Takanori Maehara (Osaka University, [email protected])

Introduction

Background and Motivation

Feature selection is a procedure for finding a small set of relevant features from a dataset. It simplifies the model to make it easier to understand, and it enhances the generalization performance; thus it plays an important role in data mining and machine learning [Guyon and Elisseeff, 2003]. One of the most commonly used feature selection methods is the Lasso regression [Tibshirani, 1996, Chen et al., 2001]. Suppose that we have n observations of p-dimensional vectors x_1, …, x_n ∈ R^p and the corresponding responses y_1, …, y_n ∈ R. Then, the Lasso regression seeks a feature vector β* ∈ R^p by minimizing the ℓ1-penalized squared loss function

L(β) = (1/2n) ‖Xβ − y‖₂² + λ ‖β‖₁.  (1.1)

The Lasso regression and its variants have many desirable properties; in particular, the sparsity of the solution helps users to understand which features are important for their tasks.
Hence, they are considered to be one of the most basic approaches for cases where, e.g., models are used to support user decision making, where the sparsity allows users to check whether or not the models are reliable; and where users are interested in finding interesting mechanisms underlying the data, where the sparsity enables users to identify important features and get insights into the data [Guyon and Elisseeff, 2003]. The Lasso objective is

L(β) = (1/(2n))‖Xβ − y‖₂² + λ‖β‖₁. (1.1)

To further strengthen these advantages of the Lasso, Hara and Maehara [2017] proposed enumerating all (essentially different) Lasso solutions in increasing order of their objective values. With the enumeration, one can find more reliable models from the enumerated solutions, or one can gain more insights into the data [Hara and Maehara, 2017, Hara and Ishihata, 2018]. In this study, we aim at finding diverse solutions instead of an exhaustive enumeration. Hara and Maehara [2017] have observed that in real-world applications, there are too many nearly optimal solutions to enumerate exhaustively. Typically, if there are some highly correlated features, the enumeration algorithm outputs all their combinations as nearly optimal solutions; thus, there are exponentially many nearly optimal solutions. Obviously, checking all those similar solutions is too exhausting for the users, which makes the existing enumeration method less practical. To overcome this practical limitation, we consider finding diverse solutions as representatives of the nearly optimal solutions, which enables users to check an "overview" of the solutions.

Contribution

In this study, we propose a novel formulation to find diverse yet nearly optimal Lasso solutions. Instead of the previous enumeration approach, we directly work on the set of nearly optimal solutions, defined by

B(ν) = {β ∈ R^p : L(β) ≤ ν}, (1.2)

where ν ∈ R is a threshold slightly greater than the optimal objective value ν* = L(β*) of the Lasso regression.
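As a concrete illustration of the ℓ1-penalized objective L(β) = (1/(2n))‖Xβ − y‖₂² + λ‖β‖₁, the following sketch minimizes it with plain proximal gradient descent (ISTA). This is not the solver used in the paper (which relies on scikit-learn's coordinate descent); it is only a minimal, self-contained stand-in, and the synthetic data and seed are illustrative choices.

```python
import numpy as np

def lasso_loss(beta, X, y, lam):
    """Objective (1.1): squared loss scaled by 1/(2n) plus an l1 penalty."""
    n = X.shape[0]
    return 0.5 / n * np.sum((X @ beta - y) ** 2) + lam * np.sum(np.abs(beta))

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=5000):
    """Minimize (1.1) by proximal gradient descent (ISTA), starting from 0."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)  # 1/L for the smooth part
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n     # gradient of the smooth part
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy data: only the first feature is truly active.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_beta = np.zeros(10)
true_beta[0] = 1.0
y = X @ true_beta + 0.01 * rng.normal(size=50)
beta_star = lasso_ista(X, y, lam=0.1)
```

As expected for the Lasso, the recovered `beta_star` is sparse, with the inactive coefficients shrunk to (near) zero and the active one slightly biased toward zero by the penalty.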
We summarize B(ν) by a small number of points Q ⊂ B(ν) in the sense that the convex hull of Q approximates B(ν). We call this approach convex hull approximation. Section 3 describes the mathematical formulation of our approach. We illustrate this approach in the following example with Figure 1.

Example 1.1. Let us consider the two-dimensional Lasso regression problem with the following loss function

L(β₁, β₂) = (1/2)‖[1 1; 1 1+ε](β₁, β₂)ᵀ − (1, 1)ᵀ‖₂² + ‖(β₁, β₂)‖₁, (1.3)

where ε is a sufficiently small parameter, e.g., ε = 1/40. Then, the optimal value ν* is approximately 3/4, and the corresponding optimal solution β* is approximately (0, 1/2), as shown in the green point in Figure 1. Now, we consider the nearly optimal solution set B(ν) for the threshold ν = ν* + ε. The boundary of this set is illustrated in the dashed line in Figure 1. Even if ν − ν* is very small, since the observations X are highly correlated, B(ν) contains an essentially different solution, e.g., β = (1/2, 0). We approximate B(ν) by the convex hull of a few finite points Q ⊂ B(ν). In this case, by taking the corner points of B(ν), we can approximate this set well by the four points as shown by the blue line in Figure 1. We note that the diversity of Q is implicitly enforced because a diverse Q is desirable for a good approximation of B(ν); we therefore do not need to add a diversity constraint such as a DPP [Kulesza et al., 2012] explicitly. This problem will be solved numerically in Section 5.

We propose an algorithm to construct a good convex hull approximation of B(ν). The algorithm consists of two steps. First, it samples sufficiently many extreme points of B(ν) by solving Lasso regressions multiple times. Second, we select a small subset Q from the sampled points to yield a compact summarization. The detailed description of our algorithm is given in Section 4. We conducted numerical experiments to evaluate the effectiveness of the proposed method.
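The claim in Example 1.1 that both (0, 1/2) and the "essentially different" point (1/2, 0) lie inside B(ν) for ν = ν* + ε can be checked numerically with a few lines; this is only a sanity check of the example (with λ = 1, ε = 1/40 as stated), not part of the paper's algorithm.

```python
import numpy as np

eps = 1.0 / 40.0
A = np.array([[1.0, 1.0], [1.0, 1.0 + eps]])
b = np.array([1.0, 1.0])

def loss(beta):
    """Loss (1.3): 0.5 * ||A beta - b||_2^2 + ||beta||_1 (lambda = 1)."""
    return 0.5 * np.sum((A @ beta - b) ** 2) + np.sum(np.abs(beta))

nu_star = loss(np.array([0.0, 0.5]))  # approximately 3/4, as in the example
nu = nu_star + eps                    # threshold defining B(nu)
other = loss(np.array([0.5, 0.0]))    # the essentially different corner solution
```

Both corner solutions satisfy the threshold, so the convex hull approximation must cover both of them, exactly as Figure 1 illustrates.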
Specifically, we evaluated three aspects of the method, namely, the approximation performance, the computational efficiency, and the diversity of the found solutions. The results are shown in Section 5. For simplicity, we describe only the method for Lasso regression in the manuscript; however, it can be easily adapted to other models such as the sparse logistic regression [Lee et al., 2006] and the elastic net [Zou and Hastie, 2005].

Preliminaries

A set C ⊂ R^p is convex if for any β₁, β₂ ∈ C and α ∈ [0, 1], (1 − α)β₁ + αβ₂ ∈ C. For a set P ⊂ R^p, its convex hull, conv(P), is the smallest convex set containing P. Let C be a convex set. A point β ∈ C is an extreme point of C if β = (1 − α)β₁ + αβ₂ for some β₁, β₂ ∈ C and α ∈ (0, 1) implies β = β₁ = β₂. The set of extreme points of C is denoted by ext(C). The Krein-Milman theorem shows the fundamental relation between the extreme points and the convex hull.

Theorem 2.1 (Krein-Milman Theorem; see Barvinok [2002]). Let C be a compact convex set. Then conv(ext(C)) = C.

For two sets C, C′ ⊂ R^p, the Hausdorff distance between these sets is defined by

d(C, C′) = max{ sup_{β∈C} inf_{β′∈C′} ‖β − β′‖₂, sup_{β′∈C′} inf_{β∈C} ‖β − β′‖₂ }. (2.1)

The Hausdorff distance forms a metric on the non-empty compact sets. The computation of the Hausdorff distance is NP-hard (more strongly, it is W[1]-hard) in general [König, 2014]. A function L : R^p → R is convex if the epigraph Epi(L) = {(β, ν) ∈ R^p × R : ν ≥ L(β)} is convex. For a convex function L, the level set B(ν) = {β ∈ R^p : L(β) ≤ ν} is convex for all ν ∈ R.

Formulation

In this section, we formulate our convex hull approximation problem mathematically. We assume that X has no zero column (otherwise, we can remove the zero column and the corresponding feature from the model). Recall that the Lasso loss function L : R^p → R in (1.1) is convex; therefore, the set of nearly optimal solutions B(ν) in (1.2) forms a closed convex set. Moreover, since X has no zero column, B(ν) is compact.
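While the Hausdorff distance (2.1) is hard to compute between general convex sets, for two finite point sets it reduces to a direct max-min computation over pairwise distances. The helper below is a straightforward sketch of that finite case (it measures distance between the point sets themselves, not between their convex hulls).

```python
import numpy as np

def hausdorff(C1, C2):
    """Hausdorff distance (2.1) between two finite point sets.
    C1, C2: arrays of shape (m, p) and (k, p) whose rows are points."""
    # Pairwise Euclidean distances, shape (m, k).
    D = np.linalg.norm(C1[:, None, :] - C2[None, :, :], axis=-1)
    # max over C1 of the distance to the nearest point of C2, and vice versa.
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

For example, for C1 = {(0,0), (1,0)} and C2 = {(0,0)}, the distance is 1: every point of C2 is in C1, but (1,0) is at distance 1 from C2.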
Our goal is to summarize B(ν). By the Krein-Milman theorem (Theorem 2.1), B(ν) can be reconstructed from the extreme points of B(ν) as B(ν) = conv(ext(B(ν))); therefore, it is natural to output the extreme points ext(B(ν)) as a summary of B(ν). If ν = ν*, this approach corresponds to enumerating the vertices of B(ν*), which forms a polyhedron [Tibshirani and others, 2013]; therefore, we can use the existing algorithms to enumerate the vertices of a polyhedron developed in computational geometry [Fukuda et al., 1997], as in Pantazis et al. [2017]. However, if ν > ν*, B(ν) has a piecewise smooth boundary, as shown in Figure 1; therefore, there are continuously many extreme points of B(ν), which cannot be enumerated.¹ Therefore, we select a finite number of points Q ⊂ ext(B(ν)) as a "representative" of the extreme points such that conv(Q) well approximates B(ν). We measure the quality of the approximation by the Hausdorff distance (2.1). To summarize the above discussion, we pose the following problem.

Algorithm 1 Proposed algorithm
1: Sample sufficiently many points S ⊂ B(ν) by Algorithm 2.
2: Select a few points Q ⊆ S by Algorithm 3.

Problem 3.1. We are given a loss function L : R^p → R and a threshold ν ∈ R. Let B(ν) = {β ∈ R^p : L(β) ≤ ν}. Find a point set Q ⊂ B(ν) such that (1) d(conv(Q), B(ν)) is small, and (2) |Q| is small.

The problem of approximating a convex set by a polyhedron has a long history in convex geometry (see Bronstein [2008] for a recent survey). Asymptotically, for any compact convex set with a smooth boundary, the required number of points to obtain an ε-approximation is Θ(√p/ε^((p−1)/2)) [Bronstein and Ivanov, 1975, Gruber, 1993]. Therefore, in the worst case, we may need exponentially many points to have a reasonable approximation. On the other hand, if we focus on the non-asymptotic ε, we have a chance to obtain a simple representation. One intuitive situation is that the polytope B(ν*) has a small number of vertices, as in Figure 1.
In such a case, by taking the vertices as Q, we can obtain an O(ε) approximation for B(ν) when ν = ν* + O(ε). Therefore, below, we assume that B(ν) admits a small number of representatives, and we construct an algorithm to find such representatives.

Algorithm

In this section, we propose a method to compute a convex hull approximation conv(Q) of B(ν). Since conv(Q) ⊆ B(ν) and the sets are compact, the Hausdorff distance between conv(Q) and B(ν) is given by

d(conv(Q), B(ν)) = max_{β∈B(ν)} min_{β′∈conv(Q)} ‖β − β′‖₂. (4.1)

A natural approach is to minimize this quantity by a greedy algorithm that successively selects the maximizer β ∈ B(ν) of (4.1) and then adds it to Q. However, this approach is impractical, because the optimization problem (4.1) is a convex maximization problem.² To overcome this difficulty, we use a random sampling approximation for B(ν). We first sample sufficiently many points S from ext(B(ν)) and then regard conv(S) as an approximation of B(ν). Once this approximation is constructed, the maximum in (4.1) can be obtained by a simple linear search. Therefore, this reduces our problem to a simple subset selection problem. The overall procedure of our algorithm is shown in Algorithm 1. It consists of two steps: a random sampling step and a subset selection step. Below, we describe each step.

Algorithm 2 Sample extreme points
1: S ← ∅
2: for i = 1, ..., M do
3: Sample a uniformly random direction d ∈ R^p
4: Solve (4.2) to obtain an extreme point β(d) and add it to S
5: end for
6: Return S

4.1 Sampling Extreme Points (Algorithm 2)

Figure 1 suggests that selecting corner points of B(ν) as Q is desirable to obtain a good approximation of B(ν). Here, to obtain good candidates for Q, we consider a sampling algorithm that samples the corner points of B(ν). First, we select a uniformly random direction d ∈ R^p. Then, we find the extreme point β ∈ B(ν) by solving the following problem:

max{dᵀβ : β ∈ B(ν)}. (4.2)

We solve this problem by using the Lagrange dual with binary search as follows.
With the Lagrange duality, we obtain the following equivalent problem:

min_{τ≥0} D(τ|d) := max_β [dᵀβ − τ(L(β) − ν)]. (4.3)

Since the optimal solution β(d) of (4.2) satisfies L(β(d)) = ν, we seek τ by using a binary search³ so that L(β(d)) = ν holds. It should be noted that the proposed sampling algorithm can be completely parallelized.

Properties of Sampling

The solution to the problem (4.2) tends to be sparse because of the ℓ1 term in L(β), which indicates that we can sample a corner point of B(ν) in the direction of d, such as the ones in Figure 1. More precisely, the proposed algorithm samples each extreme point with probability proportional to the volume of the normal cone of each point. Because the corner points have positive volumes, the algorithm samples corner points with high probabilities.

4.2 Greedy Subset Selection (Algorithm 3)

Next, we select a small subset Q ⊆ S from the sampled points S ⊂ R^p that does not lose the approximation quality. We use the farthest point selection method proposed in Blum et al. [2016].⁴ In this procedure, we start from any point β₁ ∈ S. Then, we iteratively select the point β_j ∈ S by solving the sample-approximated version of (4.1), i.e., the farthest point from the current convex hull:

β_j ∈ argmax_{β∈S} min_{β′∈conv({β₁,...,β_{j−1}})} ‖β − β′‖₂. (4.4)

This procedure has the theoretical guarantee stated in Theorem 4.1. Thus, if the number of samples |S| is sufficiently large such that d(conv(S), B(ν)) ≤ ε, the algorithm finds a convex hull approximation with O(ε^(1/3)) error. Below, we describe how to implement this procedure. First, the distance from β to the convex hull of β₁, ..., β_k is computed by solving the following problem:

min_α ‖β − Σ_j α_j β_j‖₂ s.t. Σ_j α_j = 1, α_j ≥ 0. (4.5)

This problem is a convex quadratic programming problem, which can be solved efficiently by using the interior point method [Achache, 2006]. To implement the greedy algorithm, we have to evaluate the distance from each point to the current convex hull.
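The projection subproblem (4.5) can also be solved without a QP solver. The sketch below uses the Frank-Wolfe method with exact line search over the simplex — a deliberate substitute for the interior-point method cited in the text, chosen only to keep the example dependency-free.

```python
import numpy as np

def dist_to_hull(beta, points, n_iter=200):
    """Distance from beta to conv(points) via Frank-Wolfe on problem (4.5).
    points: (k, p) array whose rows span the hull; alpha is on the simplex."""
    k = points.shape[0]
    alpha = np.full(k, 1.0 / k)                # start at the centroid
    for _ in range(n_iter):
        r = beta - points.T @ alpha            # residual at the current alpha
        grad = -points @ r                     # gradient of 0.5*||r||^2 in alpha
        j = int(np.argmin(grad))               # linear minimizer: a simplex vertex
        d = -alpha.copy()
        d[j] += 1.0                            # direction e_j - alpha
        Bd = points.T @ d
        denom = float(Bd @ Bd)
        if denom < 1e-18:
            break
        gamma = float(np.clip((r @ Bd) / denom, 0.0, 1.0))  # exact line search
        alpha = alpha + gamma * d
    return float(np.linalg.norm(beta - points.T @ alpha))

# Corners of the unit square as the hull generators.
square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

For instance, the distance from (2, 0.5) to the unit square is 1 (the nearest hull point is (1, 0.5), a convex combination of two corners), and any interior point has distance 0.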
However, this procedure can be expensive when |S| is large, as we need to solve the problem (4.5) many times. For efficient computation, we need to avoid evaluating the distance as much as possible. We observe that, if we add a new point β_j to the current convex hull, the distances from the other points to the convex hull decrease monotonically. Therefore, we can use the lazy update technique [Minoux, 1978] to accelerate the procedure as follows. We maintain the points S in a heap data structure whose keys are upper bounds of the distance to the convex hull. First, we select an arbitrary point β₁, and then initialize the key of each β ∈ S by d(β₁, β). In each step, we select the point β_j ∈ S from the heap such that β_j has the largest distance upper bound. Then, we recompute the distance d(β_j, conv(Q)) by solving the quadratic program (4.5) and update the key of β_j. If it still has the largest distance upper bound, it is the farthest point; therefore, we select it as the j-th point β_j. Otherwise, we repeat this procedure until we find the farthest point. See Algorithm 3 for the details.

Experiments

We evaluate the three aspects of the proposed algorithm, namely, the approximation performance, the computational efficiency, and the diversity of the found solutions. First, we visualize the results of the algorithm by using low-dimensional synthetic data (Section 5.1). Then, we evaluate the approximation performance and computational efficiency by using higher-dimensional synthetic data (Sections 5.2 and 5.3). Finally, we evaluate the diversity of the obtained solutions by using real-world datasets (Sections 5.4 and 5.5).

Sample Approximation of Hausdorff Distance for Evaluation

We evaluate the approximation performance by the Hausdorff distance between the obtained convex hull and B(ν). However, the exact Hausdorff distance cannot be computed, since it requires solving a convex maximization problem.
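The lazy update just described can be sketched with Python's `heapq`. To keep the example self-contained, this sketch uses the distance to the nearest already-selected point as a stand-in for the distance to the convex hull d(β, conv(Q)) — a simplification of the paper's Algorithm 3, but one that shares the key property the lazy update relies on: distances only decrease as points are added, so stale heap keys are valid upper bounds.

```python
import heapq
import numpy as np

def lazy_greedy_select(S, K):
    """Farthest-point selection with lazily updated distance upper bounds.
    S: (m, p) array of candidate points. Returns the indices of K selections."""
    m = S.shape[0]
    start = 0
    chosen = [start]
    # Max-heap via negated keys: (-upper bound of distance to chosen set, index).
    heap = [(-np.linalg.norm(S[i] - S[start]), i) for i in range(m) if i != start]
    heapq.heapify(heap)
    while len(chosen) < K and heap:
        neg_ub, i = heapq.heappop(heap)
        # Recompute the true (possibly smaller) distance to the chosen set.
        d = min(np.linalg.norm(S[i] - S[j]) for j in chosen)
        if heap and d < -heap[0][0]:
            heapq.heappush(heap, (-d, i))  # bound was stale: reinsert and retry
        else:
            chosen.append(i)               # still the farthest: select it
    return chosen

pts = np.array([[0.0], [0.1], [0.2], [5.0], [10.0]])
sel = lazy_greedy_select(pts, 3)
```

Starting from the first point, the selection picks the two points farthest from everything already chosen (10.0, then 5.0), while the clustered points near the start are never re-evaluated more than once.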
We therefore adopt the sample approximation of the Hausdorff distance, which is derived as follows:
1. Sample M extreme points Q* by using Algorithm 2.
2. Define the sample approximation of B(ν) by the convex hull conv(Q*).
3. Measure the Hausdorff distance d(conv(Q), conv(Q*)) as an approximation of d(conv(Q), B(ν)).

Implementations

The codes were implemented in Python 3.6. In Algorithm 2, to solve the problem (4.3), we used the enet_coordinate_descent_gram function in scikit-learn. In Algorithm 3, we selected the first point β₁ ∈ S as β₁ = argmax_{β∈S} ‖β − β*‖₂. To compute the projection (4.5), we used the CVXOPT library. The experiments were conducted on a system with an Intel Xeon E5-1650 3.6GHz CPU and 64GB RAM, running 64-bit Ubuntu 16.04.

Visual Demonstration

For visual demonstration, we consider two examples: Example 1.1 in Section 1 and Example 5.1 defined below.

Example 5.1. Consider the three-dimensional Lasso regression problem with the following loss function

L(β) = (1/2)‖[1 1 1; 1 1+ε 1; 1 1 1+2ε](β₁, β₂, β₃)ᵀ − (1, 1, 1)ᵀ‖₂² + ‖β‖₁,

where ε is a sufficiently small parameter. Then, the optimal value ν* is approximately 5/6, and the corresponding optimal solution β* is approximately (0, 0, 2/3). We set ε = 1/40, and define the set B(ν) by ν = 5/6 + ε.

In Example 1.1, because of the correlation between the two features, there exists a nearly optimal solution β = (1/2, 0) apart from the optimal solution β* = (0, 1/2). The objective of the proposed method is therefore to find a convex hull that covers these solutions. Similarly, in Example 5.1, the three features are highly correlated. The objective is to find a convex hull that covers nearly optimal solutions such as β = (2/3, 0, 0), (0, 2/3, 0), and (0, 0, 2/3). Figures 2(a) and 2(b) show that the proposed method successfully approximated B(ν) by using a few points. Indeed, as shown in Figure 3, the approximation errors converged to almost zero, indicating that B(ν) is well approximated with convex hulls.
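The dual bisection of (4.2)-(4.3) can be sketched end to end in a few dozen lines. The sketch below replaces the paper's scikit-learn/CVXOPT machinery with a plain ISTA inner solver (the inner maximization is a Lasso problem with an extra linear term), and uses geometric bisection on τ rather than the exponential search of footnote 3; the toy data, ranges, and iteration counts are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_loss(beta, X, y, lam):
    n = X.shape[0]
    return 0.5 / n * np.sum((X @ beta - y) ** 2) + lam * np.sum(np.abs(beta))

def inner_max(d, tau, X, y, lam, beta0=None, n_iter=2000):
    """argmax_beta [d.beta - tau*L(beta)]: a Lasso with a linear term (ISTA)."""
    n, p = X.shape
    step = n / (tau * np.linalg.norm(X, 2) ** 2)   # 1/Lipschitz of smooth part
    beta = np.zeros(p) if beta0 is None else beta0.copy()
    for _ in range(n_iter):
        grad = tau * (X.T @ (X @ beta - y)) / n - d
        beta = soft_threshold(beta - step * grad, step * tau * lam)
    return beta

def sample_extreme_point(d, beta_star, X, y, lam, nu,
                         lo=1e-2, hi=1e2, n_bisect=30):
    """Bisection on the multiplier tau of (4.3): L(beta(tau)) decreases in tau,
    so shrink [lo, hi] geometrically until L(beta(tau)) is close to nu."""
    beta_best = beta_star.copy()   # beta* is always feasible: L(beta*) <= nu
    beta = beta_star.copy()
    for _ in range(n_bisect):
        tau = np.sqrt(lo * hi)
        beta = inner_max(d, tau, X, y, lam, beta0=beta)
        if lasso_loss(beta, X, y, lam) > nu:
            lo = tau               # loss too large: tau must grow
        else:
            hi = tau
            if d @ beta > d @ beta_best:
                beta_best = beta.copy()
    return beta_best

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
y = X[:, 0] + 0.01 * rng.normal(size=30)
lam = 0.1
beta_star = inner_max(np.zeros(5), 1.0, X, y, lam)   # d = 0, tau = 1: plain Lasso
nu = 1.05 * lasso_loss(beta_star, X, y, lam)
d = rng.normal(size=5)
beta_ext = sample_extreme_point(d, beta_star, X, y, lam, nu)
```

By construction the returned point stays inside B(ν) while pushing the linear objective dᵀβ at least as far as the Lasso optimum does, which is exactly what the sampler needs from a solution of (4.2).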
Approximation Performance

We now turn to exhaustive experiments to verify the performance of the proposed algorithm in general settings. Specifically, we show that the proposed algorithm can approximate B(ν) well, even in higher dimensions. We generate higher-dimensional data by

x ∼ N(0_p, Σ), y = xᵀβ + ε, ε ∼ N(0, 0.01), (5.2)

where Σ_ij = exp(−0.1|i − j|), and β_i = 10/p if (i − 1) mod 10 = 0 and β_i = 0 otherwise. Because of the correlations induced by Σ, the neighboring features in x are highly correlated, which indicates that there may exist several nearly optimal β. We set the number of observations n so that p = 2n, and the regularization parameter λ to 0.1. We also define the set B(ν) by setting ν = 1.01 L(β*).

Figure 4 shows the result for p = 100. The figure shows that the proposed algorithm can approximate B(ν) well. In the figure, there are two important observations. First, as the number of samplings M increases, the Hausdorff distance decreases, indicating that the approximation performance improves. This result is intuitive in that a greater number of candidate points leads to a better approximation. Second, the choice of the number of samplings M is not that critical in practice. The result shows that the difference between the Hausdorff distances for M = 200 and for M = 10,000 is subtle. It also shows that the Hausdorff distances for M = 1,000 and for M = 10,000 are almost identical for larger K. This indicates that we do not have to sample many points in practice.

Computational Efficiency

Next, we evaluate the computational efficiency by using the same setting (5.2) used in the previous section. Table 1 shows the runtimes of the proposed method for p = 100 and p = 1,000 with K fixed to 100. The computational time for the sampling step increases as the number of samples M and the dimension p increase.
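The data-generation scheme (5.2) above can be reproduced directly; the sketch below follows the stated covariance and sparsity pattern (the seed is an arbitrary choice, and the index test is the 0-indexed equivalent of the paper's 1-indexed condition (i − 1) mod 10 = 0).

```python
import numpy as np

def make_synthetic(p, seed=0):
    """Synthetic data of (5.2): exponentially decaying covariance and sparse beta."""
    rng = np.random.default_rng(seed)
    n = p // 2                                   # p = 2n as in the experiment
    idx = np.arange(p)
    Sigma = np.exp(-0.1 * np.abs(idx[:, None] - idx[None, :]))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    # beta_i = 10/p on every 10th coordinate (0-indexed: i % 10 == 0), else 0.
    beta = np.where(idx % 10 == 0, 10.0 / p, 0.0)
    # Noise variance 0.01, i.e., standard deviation 0.1.
    y = X @ beta + rng.normal(0.0, 0.1, size=n)
    return X, y, beta

X, y, beta = make_synthetic(100)
```

For p = 100 this yields n = 50 observations and exactly ten active coefficients of size 0.1, with neighboring columns of X strongly correlated, as the text describes.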
Since the approximation performance does not improve much as the number of samples M increases, as observed in the previous experiment, it is helpful in practice to use a moderate number of samples. The computational time for the greedy selection step also increases as the number of samples M increases; however, interestingly, it decreases as the dimension p increases. The reason can be understood by observing the number of distance evaluations as follows. Figure 5 shows the number of distance evaluations in the greedy selection step for each K. This shows that the redundant distance computation is significantly reduced by the lazy update technique; therefore, Algorithm 3 only performs a few distance evaluations in each iteration. In particular, for p = 1,000, the saturation is very sharp; thus the computational cost is significantly reduced. This may be because, in high-dimensional problems, adding one point to the current convex hull barely changes the distances to the remaining points, and hence the lazy update helps to avoid most of the distance evaluations.

Diversity of Solutions

One of the practical advantages of the proposed method is that it can find nearly optimal solutions with a large diversity. This is a favorable property when one is interested in finding several possible explanations for given data, which is usually the case in data mining.

Setup

Here, we verify the diversity of the found solutions on the 20 Newsgroups data.⁵ The results on other datasets can be found in the next section. In this experiment, we consider classifying the documents between the two categories ibm.pc.hardware and mac.hardware. As a feature vector x, we used a tf-idf weighted bag-of-words representation, with stop words removed. The dataset comprised n = 1,168 samples with p = 11,648 words. Our objective is to find discriminative words that are relevant to the classification of the documents.
Because the task is binary classification with y ∈ {−1, +1}, instead of the squared objective, we use the Lasso logistic regression with the objective function given as

L(β) = (1/n) Σ_{i=1}^{n} log(1 + exp(−y_i x_iᵀβ)) + λ‖β‖₁. (5.3)

We implemented the solver for the problem (4.3) by modifying liblinear [Fan et al., 2008]. In the experiment, we set the regularization parameter λ to 0.001.

Baseline Methods

We compared the solution diversity of the proposed method with the two baselines in Hara and Maehara [2017]. The first baseline simply enumerates the optimal solutions with different supports in ascending order of the objective function value (5.3). We refer to this method as Enumeration. The second baseline employs a heuristic to skip similar solutions during the enumeration. It can therefore improve the diversity of the enumerated solutions. We refer to this heuristic method as Heuristic. Note that we did not adopt the method of Pantazis et al. [2017] as a baseline because it enumerates only the sub-support of the Lasso global solution: it cannot find solutions apart from the global solution.

Result

With each method, we found 500 nearly optimal β, and summarized the result in Figure 6. For the proposed method, we defined B(ν) by ν = 1.05 L(β*), and set the number of samplings M to 10,000. To draw the figure, we used PCA and projected the found solutions onto the subspace where the variance of the solutions of Enumeration is maximum. The figure shows the clear advantage of the proposed method in that it covers a large solution region compared to the other two baselines. While the result indicates that Heuristic successfully improved the diversity of the found solutions compared to Enumeration, its diversity is still inferior to that of the proposed method. We also note that the proposed method found 889 words in total within the 500 models.
This contrasts with Enumeration and Heuristic, which found only 39 and 63 words, respectively, more than ten times fewer than the proposed method. Table 2 shows some representative words found in the 20 Newsgroups data. As the word "apple" is strongly related to the documents in mac.hardware, it is found by all the methods. However, although "macs" and "macintosh" are also relevant to mac.hardware, "macs" is overlooked by Enumeration, and "macintosh" is found only by the proposed method. This result also suggests that the proposed method can induce a large diversity, and that it can avoid overlooking informative features. We note that the Lasso global solution attained 81% test accuracy, while the found 500 solutions attained from 77% to 83% test accuracy. This result indicates that the proposed method could find solutions of almost equal quality while inducing solution diversity.

Results on Other Datasets

Here, we present the results on some of the libsvm datasets⁶: we used cputime for regression (1.1), and australian, german.numer, ionosphere, sonar, and splice for binary classification (5.3). In the experiments, we searched for 50 nearly optimal solutions using the proposed method, where we set ν = 1.05 L(β*) and the number of samplings M = 1,000. We also enumerated 50 solutions using the two baselines, Enumeration and Heuristic. The results are shown in Figure 7. For regression, we set ρ = 0.1, and for binary classification, we set ρ = 0.01, so that the solutions are sufficiently sparse. In the figures, similar to the results in Section 5.4, we have projected the solutions into a two-dimensional space using PCA. The figures show the clear advantage of the proposed method in that it can find solutions with large diversities compared to the exhaustive enumerations.
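The Lasso logistic objective (5.3) used in these classification experiments is easy to evaluate directly; the sketch below is a numerically stable stand-alone implementation (using `logaddexp` for log(1 + exp(·))), with a tiny hand-made two-point dataset for illustration only.

```python
import numpy as np

def logistic_lasso_loss(beta, X, y, lam):
    """Objective (5.3): mean logistic loss for labels y in {-1,+1} plus l1 penalty."""
    margins = y * (X @ beta)
    # logaddexp(0, -m) = log(1 + exp(-m)), computed without overflow.
    return float(np.mean(np.logaddexp(0.0, -margins)) + lam * np.sum(np.abs(beta)))

X_demo = np.array([[1.0, 0.0], [-1.0, 0.0]])
y_demo = np.array([1.0, -1.0])
loss_at_zero = logistic_lasso_loss(np.zeros(2), X_demo, y_demo, lam=0.001)
```

At β = 0 every margin is zero, so the loss equals log 2, and any β with positive margins on this toy data lowers it, which is a convenient sanity check for an implementation of (5.3).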
Conclusion

In this study, we considered a convex hull approximation problem that seeks a small number of points such that their convex hull approximates the nearly optimal solution set of the Lasso regression problem. We propose an algorithm to solve this problem. The algorithm first approximates the nearly optimal solution set by using the convex hull of sufficiently many points. Then, it selects a few relevant points to approximate the convex hull. The experimental results indicate that the proposed method can find diverse yet nearly optimal solutions efficiently.

Figure 1: Illustration of our approach. The triangle shows the optimal solution B(ν*) = {β*}, the dashed line shows the boundary of the nearly optimal solutions B(ν), and the squares with the solid line show the convex hull approximation conv(Q) of B(ν).

Theorem 4.1 (Blum et al. [2016]). Let S ⊂ R^p be a finite set enclosed in the unit ball. Suppose that there exists a finite set Q* ⊂ S of size k* such that d(conv(Q*), conv(S)) ≤ ε. Then, the greedy algorithm finds a set Q ⊂ S of size k*/ε^(2/3) with d(conv(Q), conv(S)) = O(ε^(1/3)).

Figure 2: Visual demonstrations of the proposed method in low dimensions. (a) Result for Example 1.1, where B(ν) is indicated by the shaded regions. (b) Results for Example 5.1.

Figures 2 and 3 show the results of the proposed method for the examples. Here, we set the number of samples M in Algorithm 2 to 50, and the number of greedily selected points K = |Q| in Algorithm 3 to four and six, respectively.

Figure 3: Approximation errors of B(ν), measured by the Hausdorff distance (M = 1,000).

Figure 4: Number of selected points K vs. Hausdorff distance (M = 100,000).

Figure 5: Number of distance evaluations in Algorithm 3.

Figure 6: Found 500 solutions in 20 Newsgroups data, shown in 2D using PCA.
Figure 7: Found 50 solutions, shown in 2D using PCA.

Algorithm 3 Select points
1: Select β₁ ∈ S arbitrarily and let Q = {β₁}
2: Initialize a heap data structure H by H[β] ← d(β, Q) for all β ∈ S \ Q
3: while |Q| < K do
4: Let β ∈ H be the point that has the largest H[β]
5: Update H[β] ← d(β, Q)
6: if H[β] is still the largest then
7: Add β to Q and remove β from the heap
8: end if
9: end while
10: Output Q

Table 1: Runtime (in sec.) of the proposed algorithm (Sampling: Algorithm 2, Greedy: Algorithm 3) for selecting K = 100 vertices over the numbers of samplings M = 1,000, 10,000, and 100,000.

M       | p = 100: Sampling | p = 100: Greedy | p = 1,000: Sampling | p = 1,000: Greedy
1,000   | 2.891             | 34.87           | 46.54               | 17.99
10,000  | 27.80             | 178.5           | 2466                | 66.95
100,000 | 279.1             | 1548            | 4586                | 379.9

Table 2: Representative words found in 20 Newsgroups data.

Enumeration | Heuristic | Proposed
"apple"     | "macs"    | "macintosh"

Footnotes:
1. Pantazis et al. [2017] also consider the near optimal solutions. However, they focused only on the subset of B(ν) spanned by the support of the Lasso global solution. We do not take this approach since it cannot handle a global structure of B(ν).
2. In our preliminary study, we implemented the projected gradient method to find the farthest point β ∈ B(ν). However, it was slow, and often converged to poor local maximal solutions.
3. Since we have no upper bound of the search range, we actually use the exponential search that successively doubles the search range [Bentley and Yao, 1976].
4. Blum et al. [2016] called this procedure Greedy Clustering.
5. http://qwone.com/~jason/20Newsgroups/
6. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/

References

Mohamed Achache. A new primal-dual path-following method for convex quadratic programming. Computational & Applied Mathematics, 25(1):97-110, 2006.

Alexander Barvinok. A course in convexity, volume 54. American Mathematical Society, Providence, RI, 2002.

Jon Louis Bentley and Andrew Chi-Chih Yao. An almost optimal algorithm for unbounded searching. Information Processing Letters, 5(3):82-87, 1976.

Avrim Blum, Sariel Har-Peled, and Benjamin Raichel. Sparse approximation via generating point sets. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 548-557. Society for Industrial and Applied Mathematics, 2016.

Efim M. Bronstein and L. D. Ivanov. The approximation of convex sets by polyhedra. Siberian Mathematical Journal, 16(5):852-853, 1975.

Efim M. Bronstein. Approximation of convex sets by polytopes. Journal of Mathematical Sciences, 153(6):727-762, 2008.

Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129-159, 2001.

Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874, 2008.

Komei Fukuda, Thomas M. Liebling, and Francois Margot. Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron. Computational Geometry, 8(1):1-12, 1997.

Peter M. Gruber. Aspects of approximation of convex bodies. In Handbook of Convex Geometry, Part A, pages 319-345. Elsevier, 1993.

Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar):1157-1182, 2003.

Satoshi Hara and Masakazu Ishihata. Approximate and exact enumeration of rule models. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 3157-3164, 2018.

Satoshi Hara and Takanori Maehara. Enumerate lasso solutions for feature selection. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 1985-1991, 2017.

Stefan König. Computational aspects of the Hausdorff distance in unbounded dimension. arXiv preprint arXiv:1401.1434, 2014.

Alex Kulesza, Ben Taskar, et al. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2-3):123-286, 2012.

Su-In Lee, Honglak Lee, Pieter Abbeel, and Andrew Y. Ng. Efficient L1 regularized logistic regression. In Proceedings of the 21st National Conference on Artificial Intelligence, pages 1-9, 2006.

Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, pages 234-243. Springer, 1978.

Yannis Pantazis, Vincenzo Lagani, and Ioannis Tsamardinos. Enumerating multiple equivalent lasso solutions. arXiv preprint arXiv:1710.04995, 2017.

Ryan J. Tibshirani et al. The lasso problem and uniqueness. Electronic Journal of Statistics, 7:1456-1490, 2013.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267-288, 1996.

Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320, 2005.
[]
[ "Bright solitary waves in a Bose-Einstein condensate and their interactions", "Bright solitary waves in a Bose-Einstein condensate and their interactions" ]
[ "K Kärkkäinen \nMathematical Physics\nLund Institute of Technology\nP.O. Box 118SE-22100LundSweden\n", "A D Jackson \nNiels Bohr Institute\nBlegdamsvej 17DK-2100Copenhagen ØDenmark\n", "G M Kavoulakis \nTechnological Education Institute of Crete\nP.O. Box 1939GR-71004HeraklionGreece\n" ]
[ "Mathematical Physics\nLund Institute of Technology\nP.O. Box 118SE-22100LundSweden", "Niels Bohr Institute\nBlegdamsvej 17DK-2100Copenhagen ØDenmark", "Technological Education Institute of Crete\nP.O. Box 1939GR-71004HeraklionGreece" ]
[]
We examine the dynamics of two bright solitary waves with a negative nonlinear term. The observed repulsion between two solitary waves - when these are in an antisymmetric combination - is attributed to conservation laws. Slight breaking of parity, in combination with weak relaxation of energy, leads the two solitary waves to merge. The effective repulsion between solitary waves requires certain nearly ideal conditions and is thus fragile.
10.1103/physreva.78.033610
[ "https://arxiv.org/pdf/0801.2364v1.pdf" ]
14,375,828
0801.2364
c313d4ce3594321e2e993d25f9785e05b663b198
Bright solitary waves in a Bose-Einstein condensate and their interactions
15 Jan 2008 (Dated: February 2, 2008)
K. Kärkkäinen, Mathematical Physics, Lund Institute of Technology, P.O. Box 118, SE-22100 Lund, Sweden
A. D. Jackson, Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark
G. M. Kavoulakis, Technological Education Institute of Crete, P.O. Box 1939, GR-71004 Heraklion, Greece
PACS numbers: 03.75.-b, 03.75.Lm

We examine the dynamics of two bright solitary waves with a negative nonlinear term. The observed repulsion between two solitary waves - when these are in an antisymmetric combination - is attributed to conservation laws. Slight breaking of parity, in combination with weak relaxation of energy, leads the two solitary waves to merge. The effective repulsion between solitary waves requires certain nearly ideal conditions and is thus fragile.

I. INTRODUCTION

One of the many interesting features of Bose-Einstein condensed atoms is that they can support solitary waves, in particular when these are confined in elongated traps. Under typical conditions, these gases are very dilute and are described by the familiar Gross-Pitaevskii equation, a nonlinear Schrödinger equation with an additional term to describe the external trapping potential. It is well known that the nonlinear Schrödinger equation (with no external potential) supports solitonic solutions through the interplay between the nonlinear term and dispersion. In the presence of an external trapping potential, the Gross-Pitaevskii equation becomes nonintegrable. In elongated quasi-one-dimensional traps, it is reasonable to approximate the three-dimensional solution of the Gross-Pitaevskii equation by separating longitudinal and transverse degrees of freedom [1]. The resulting effective one-dimensional nonlinear equation has a nonlinear term that is not necessarily quadratic [1,2].
Still, such nonlinear equations support solitary-wave solutions, which must be found numerically. Solitary waves have been created and observed in trapped gases of atoms [3,4,5,6]. In the initial experiments [3,4] the effective interaction between the atoms was repulsive. In this case, the solitary waves are localized depressions in the density, which are known as "grey" solitary waves. These waves move with a velocity less than the speed of sound. When the minimum of the density (at the center of the wave) becomes zero, they do not move at all and thus become "dark". More recently, the two experiments of Refs. [5,6] considered the case of an effective attraction between the atoms and observed "bright" solitary waves, i.e., blob(s) of atoms which preserve their shape and distinct identity. Strecker et al. [5] created an initial state of many separate solitary waves. While these independent waves were seen to oscillate in the weak harmonic potential in the longitudinal direction, they did not merge to form one solitary wave. In other words, they behaved as if the effective interaction between two of these waves were repulsive. Numerous theoretical studies have been motivated by the experiments of Refs. [5,6], see e.g., Refs. [7,8,9]. Reference [7] offered an explanation for the observed effective repulsion between solitary waves. As argued there, the experiments had been performed in a manner that gave rise to a phase difference of π between adjacent solitary waves. According to an older study [10], solitary waves with a phase difference equal to π indeed repel each other. In the present study we use a toroidal trap [11] as a model for examining the time evolution of a system that initially has two solitary waves using numerical solutions to the corresponding time-dependent one-dimensional Gross-Pitaevskii equation. Remarkably, such toroidal traps have been designed [12], and very recently persistent currents have been created and observed in such traps [13]. 
The basic conclusion of our study is that the effective repulsion between solitary waves is due to conservation laws and thus fragile. In what follows, we first present our model in Sec. II. In Sec. III we examine the dynamics of the gas in the case of weak dissipation, starting with perfectly symmetric/antisymmetric initial conditions and with no external potential along the torus. We observe that the symmetric configuration of two blobs merges on a short time scale; the blobs in the initially antisymmetric configuration remain distinct and separated. Using these results as "reference" plots, we examine in Sec. IV the effect of a weak random potential on perfectly symmetric/antisymmetric initial conditions. We also examine in Sec. V the time evolution of states that deviate slightly from perfect symmetry/antisymmetry in the absence of any random potential. In both cases, the symmetric (or nearly symmetric) initial configuration shows essentially the same behavior as the reference symmetric system. On the other hand, the antisymmetric configuration with the addition of an extra weak random potential and the nearly antisymmetric configuration with no external potential both lead to a merger of the two blobs after a moderate transient time. In the antisymmetric case, the final state is strongly influenced by weak deviations from the "ideal" case. Finally, in Sec. VI we discuss our results.

II. MODEL

We consider a tight toroidal trap and use the mean-field approximation. Tight confinement along the cross section of the torus allows us to assume that the transverse degrees of freedom are frozen, and thus the corresponding time-dependent order parameter Ψ(θ, t) satisfies the (one-dimensional) equation

i\hbar \frac{\partial \Psi}{\partial t} = \frac{\hbar^2}{2MR^2} \left[ -\frac{\partial^2 \Psi}{\partial \theta^2} + g|\Psi|^2 \Psi + V(\theta)\Psi \right],  (1)

where g = 8πNaR/S, and V(θ) is the external potential measured in units of E_0 = \hbar^2/(2MR^2).
Here, M is the atomic mass, R is the radius of the torus, N is the atom number, a is the scattering length (which is taken to be negative), and S is the cross section of the torus. The total length of the torus is chosen to be 16π in our simulations. As shown in Refs. [14,15], below a critical (negative) value of the parameter g, there is an instability from a state of homogeneous density to a state with localized density that breaks the rotational invariance of the Hamiltonian. This localized state corresponds to a solitary wave, and the critical value of g is g_c = −π for the parameters chosen here.

[FIG. 2: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)|e^{iφ(θ,t)}, for the symmetric initial configuration, α = 1, for t/t_0 = 0, 10, 50, 100, and 150. The axes are the same as in Fig. 1. In all the above graphs there is no external potential, V = 0.]

We add an extra term on the left side of the above equation to model dissipation and write

(i − γ)\hbar \frac{\partial \Psi}{\partial t} = \frac{\hbar^2}{2MR^2} \left[ -\frac{\partial^2 \Psi}{\partial \theta^2} + g|\Psi|^2 \Psi + V(\theta)\Psi \right].  (2)

The real positive dimensionless parameter γ describes the "strength" of dissipation. Since we solve an initial value problem, we also need to specify the initial condition. This is

Ψ(θ, t = 0) = \frac{1}{\sqrt{1 + α^2}} \left[ ψ(θ − θ_0) + α ψ(θ + θ_0) \right].  (3)

Here ψ(θ) = λ/cosh(λθ), with λ = 3/2, is a static, well localized blob. We choose θ_0 = 2π/5 so that the two blobs are reasonably distinct but still have a small overlap, as shown in the graphs of Fig. 1, for α = ±1.

[Displaced caption fragment of FIG. 4: ..., for the symmetric initial condition, for t/t_0 = 0, 10, 50, 100, and 150, for a weak random potential (shown in Fig. 14), with a symmetric initial configuration, α = 1. The axes are the same as in Fig. 1.]
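The dynamics of Eqs. (2) and (3) is easy to reproduce numerically. The sketch below is not the authors' code: it is a minimal split-step (Fourier) integrator for Eq. (2) on a periodic ring, in the dimensionless units of the paper (ring of total length 16π, energies in E_0, times in t_0). The grid size, time step, and the choice of holding the atom number fixed by renormalizing after each step are our assumptions.

```python
import numpy as np

def evolve_gpe_ring(psi0, g=-4.0, gamma=0.05, V=None, L=16*np.pi, dt=1e-3, steps=200):
    """Split-step integration of (i - gamma) dPsi/dt = [-d^2/dtheta^2 + g|Psi|^2 + V] Psi
    on a ring of length L with periodic boundary conditions (dimensionless units)."""
    n = psi0.size
    dth = L / n
    k = 2*np.pi * np.fft.fftfreq(n, d=dth)          # angular wavenumbers
    V = np.zeros(n) if V is None else V
    pref = -(1j + gamma) / (1.0 + gamma**2)         # 1/(i - gamma)
    half_kin = np.exp(pref * k**2 * dt/2)           # kinetic half step (-d^2/dth^2 -> +k^2)
    psi = psi0.astype(complex)
    n_atoms = np.sum(np.abs(psi)**2) * dth          # atom number, held fixed below
    for _ in range(steps):
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
        psi *= np.exp(pref * (g*np.abs(psi)**2 + V) * dt)         # potential full step
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
        psi *= np.sqrt(n_atoms / (np.sum(np.abs(psi)**2) * dth))  # renormalize
    return psi

# Two sech blobs at theta = L/2 -+ theta_0, antisymmetric combination (alpha = -1):
L = 16*np.pi
th = np.linspace(0.0, L, 256, endpoint=False)
lam, th0 = 1.5, 2*np.pi/5
blob = lambda c: lam / np.cosh(lam*(th - L/2 - c))
psi0 = (blob(-th0) - blob(th0)) / np.sqrt(2)
psi = evolve_gpe_ring(psi0)
```

With this antisymmetric pair, the node at the ring midpoint survives the evolution, which is a numerical restatement of the parity-conservation argument made in the text.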
III. TIME EVOLUTION OF THE "IDEAL" SITUATION

To understand the effects of a weak random potential and the effect of slight asymmetries in the initial condition (to be considered in Secs. IV and V), it is instructive to start with the situation where there is no external potential, V(θ) = 0, and an initial configuration which is either perfectly symmetric (α = 1) or perfectly antisymmetric (α = −1), i.e.,

Ψ(θ, t = 0) = \frac{1}{\sqrt{2}} \left[ ψ(θ − θ_0) ± ψ(θ + θ_0) \right].  (4)

We also fix the value of the dissipative parameter equal to γ = 0.05. Figures 2 and 8 show snapshots of |Ψ(θ, t)| as well as the phase φ(θ, t) of the order parameter Ψ(θ, t) for the symmetric and the antisymmetric case, respectively. The snapshots shown in Fig. 2 correspond to t/t_0 = 0, 10, 50, 100, and 150, and in Fig. 8 to t/t_0 = 0, 10, 100, 300, and 400. Here t_0 = \hbar/E_0 = [\hbar/(2MR^2)]^{-1} is the unit of time. Figures 3 and 9 show the energy of the system as a function of time for 0 ≤ t/t_0 ≤ 150, and for 0 ≤ t/t_0 ≤ 400, respectively. As seen in these graphs, the symmetric configuration (Fig. 2) merges quickly into one soliton and, as time increases, it eventually approaches the equilibrium solution. On the other hand, the two blobs do not merge in the antisymmetric case (Fig. 8). This is a direct consequence of the fact that the initial configuration has a node at θ = 0. Because of the symmetry between θ and −θ, Ψ(θ = 0, t) must be zero for all times t > 0. As a result, the two blobs never merge, as a simple consequence of parity conservation. The parity operator commutes with the Hamiltonian, and parity is therefore a conserved quantity. Only numerical errors can eventually lead to a single-soliton profile (with lower energy). That this does not happen provides a check on the accuracy of our numerics.

[Displaced caption fragment of FIG. 8: ..., for the antisymmetric initial configuration, α = −1, for t/t_0 = 0, 10, 100, 300, and 400. The axes are the same as in Fig. 1. In all the above graphs there is no external potential, V = 0.]
IV. EFFECT OF THE RANDOM POTENTIAL ON THE TIME EVOLUTION

Using Figs. 2 and 8 as "reference plots", we may now examine the effect of a weak, symmetry breaking random potential V(θ). This potential is chosen to consist of ten steps of equal widths with a height that is a (uniformly distributed) random number and varies between 0 and 0.01. Figure 14 shows the specific random potential chosen. In this case we start with perfectly symmetric/antisymmetric configurations, α = ±1. The time evolution of the symmetric configuration shown in Figs. 4 and 5 is almost identical with that of Figs. 2 and 3, i.e., the case considered in the previous section with V = 0. The two blobs merge rather rapidly. The antisymmetric case shown in Figs. 10 and 11 is of greater interest. Here, after a relatively short time, the system passes through a "quasi-equilibrium" configuration, seen as the plateau in the plot of energy versus time in Fig. 11. During this time interval, there are two localized blobs. However, parity is no longer a conserved quantity in this case.

[FIG. 10: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)|e^{iφ(θ,t)}, for the antisymmetric initial condition, for t/t_0 = 0, 10, 100, 300, and 400, for a weak random potential (shown in Fig. 14), with an antisymmetric initial condition, α = −1. The axes are the same as in Fig. 1.]

There is no symmetry in the system to preserve the node that was built into the initial conditions. As a result, the two blobs eventually merge into one, in contrast to the results of Fig. 8. In other words, the apparent repulsion of the two solitary waves is not present at sufficiently large t.

V. EFFECT OF SLIGHT ASYMMETRIES IN THE INITIAL CONFIGURATION

In another set of runs, we set the random potential to zero and select a slightly asymmetric initial configuration, with α = ±1.01.
Our initial condition is thus not a parity eigenstate, and our calculations show that again the two initially distinct blobs merge after a characteristic timescale. The qualitative features of this calculation are the same as in the case of a random potential described in the previous section. In the case of an almost symmetric initial configuration, α = 1.01, shown in Fig. 6, the two separate blobs merge rapidly, very much as in Figs. 2 and 4. On the other hand, the almost antisymmetric case, α = −1.01, shown in Figs. 12 and 13, shows a plateau in the energy and a period of "quasi-equilibrium" during which the two blobs have relatively well-determined shape and location. Eventually, however, the two blobs merge again into a single solitary wave, much as in Fig. 10 but unlike Fig. 8.

VI. DISCUSSION AND CONCLUSIONS

According to the results of our study, the observed repulsion between bright solitary waves in the experiment of Ref. [5] implies that the conservation laws were not substantially violated during the time interval investigated. More precisely, it suggests that deviations from axial symmetry in the trapping potential must have been small, the initial configuration was very close to a (negative) parity eigenstate, and that dissipation must have been weak. It is instructive to estimate the timescale, t_0, for our study. If one considers a value of R equal to the longitudinal size of an elongated trap, R ∼ 0.1 mm, then t_0 ∼ 10 sec, which is a rather long timescale for these experiments. Therefore, it seems likely that the characteristic timescale over which the experiment of Ref. [5] was performed was significantly smaller than the timescale required to see the separate blobs merge. Higher temperatures would enhance the dissipation in the gas and would decrease the characteristic time that is required for the blobs to merge.
To the extent that the deviations from axial symmetry in the trapping potential and the antisymmetry in the initial configuration considered here are representative of the actual experimental situation, our results support the explanation offered in Ref. [7]. Direct experimental determinations of these quantities and of the strength of dissipation would thus be welcome. It would also be of interest to investigate the long time stability (or instability) of the configurations observed in Ref. [5]. The questions examined here may also have important consequences for possible technological applications. For example, propagation of such solitary waves in waveguides may serve as signals that transfer energy or information. Therefore, understanding and possibly controlling the way that such waves interact with each other may be important. Recent experimental progress in building quasi-one-dimensional and toroidal traps should make such experiments easier to perform and worth investigating.

We adopt the value of g = −4 < g_c in all simulations below. Therefore, the lowest energy state is a single solitary wave. Figures 2, 4, 6, 8, 10 and 12 show snapshots of |Ψ(θ, t)| and of φ(θ, t) of the order parameter Ψ(θ, t) = |Ψ(θ, t)|e^{iφ(θ,t)}. Figs. 3, 5, 7, 9, 11, and 13 show the corresponding energy of the gas as a function of time.

[Figure captions displaced by extraction:]
[FIG. 1: Plot of |Ψ(θ, t = 0)| and of φ(θ, t = 0) for the symmetric (higher) and for the antisymmetric (lower) case, α = ±1 in Eq. (3).]
[FIG. 3: The energy of the system in units of E_0 = \hbar^2/(2MR^2), versus time for 0 ≤ t/t_0 ≤ 150, corresponding to Fig. 2.]
[FIG. 4: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)|e^{iφ(θ,t)}; caption truncated in extraction, its continuation appears displaced earlier in the text.]
[FIG. 5: The energy of the system in units of E_0 = \hbar^2/(2MR^2), versus time for 0 ≤ t/t_0 ≤ 150, corresponding to Fig. 4.]
[FIG. 6: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)|e^{iφ(θ,t)}, for the almost symmetric initial condition, α = 1.01, for t/t_0 = 0, 10, 50, 100, and 150. The axes are the same as in Fig. 1. In all the above graphs there is no external potential, V = 0.]
[FIG. 7: The energy of the system in units of E_0 = \hbar^2/(2MR^2), versus time for 0 ≤ t/t_0 ≤ 150, corresponding to Fig. 6.]
[FIG. 8: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)|e^{iφ(θ,t)}; caption truncated in extraction, its continuation appears displaced earlier in the text.]
[FIG. 9: The energy of the system in units of E_0 = \hbar^2/(2MR^2), versus time for 0 ≤ t/t_0 ≤ 400, corresponding to Fig. 8.]
[FIG. 11: The energy of the system in units of E_0 = \hbar^2/(2MR^2), versus time for 0 ≤ t/t_0 ≤ 400, corresponding to Fig. 10.]
[FIG. 12: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)|e^{iφ(θ,t)}, for the almost antisymmetric initial condition, α = −1.01, for t/t_0 = 0, 10, 100, 300, and 400. The axes are the same as in Fig. 1. In all the above graphs there is no external potential, V = 0.]
[FIG. 13: The energy of the system in units of E_0 = \hbar^2/(2MR^2), versus time for 0 ≤ t/t_0 ≤ 400, corresponding to Fig. 12.]
[FIG. 14: The random potential in units of E_0 = \hbar^2/(2MR^2) as a function of the angle θ that is used in the specific calculations shown in Figs. 4 and 10.]

Acknowledgments

We thank Magnus Ögren for useful discussions.

A. D. Jackson, G. M. Kavoulakis, and C. J. Pethick, Phys. Rev. A 58, 2417 (1998).
L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A 65, 043614 (2002).
S. Burger, K. Bongs, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera, G. V.
Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. 83, 5198 (1999).
J. Denschlag, J. E. Simsarian, D. L. Feder, Charles W. Clark, L. A. Collins, J. Cubizolles, L. Deng, E. W. Hagley, K. Helmerson, W. P. Reinhardt, S. L. Rolston, B. I. Schneider, and W. D. Phillips, Science 287, 97 (2000).
K. E. Strecker, G. B. Partridge, A. G. Truscott, and R. G. Hulet, Nature 417, 150 (2002).
L. Khaykovich, F. Schreck, G. Ferrari, T. Bourdel, J. Cubizolles, L. D. Carr, Y. Castin, and C. Salomon, Science 296, 1290 (2002).
U. Al Khawaja, H. T. C. Stoof, R. G. Hulet, K. E. Strecker, and G. B. Partridge, Phys. Rev. Lett. 89, 200404 (2002).
L. D. Carr and Y. Castin, Phys. Rev. A 66, 063602 (2002).
L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A 66, 043603 (2002).
J. P. Gordon, Optics Lett. 8, 596 (1983).
L. D. Carr, C. W. Clark, and W. P. Reinhardt, Phys. Rev. A 62, 063611 (2000).
S. Gupta, K. W. Murch, K. L. Moore, T. P. Purdy, and D. M. Stamper-Kurn, Phys. Rev. Lett. 95, 143201 (2005).
C. Ryu, M. F. Andersen, P. Clade, Vasant Natarajan, K. Helmerson, and W. D. Phillips, Phys. Rev. Lett. 99, 260401 (2007).
Rina Kanamoto, Hiroki Saito, and Masahito Ueda, Phys. Rev. A 68, 043619 (2003).
G. M. Kavoulakis, Phys. Rev. A 67, 011601(R) (2003).
[]
[ "Thermodynamics of Black Holes in Rastall Gravity", "Thermodynamics of Black Holes in Rastall Gravity" ]
[ "Iarley P Lobo [email protected]§[email protected] \nDepartamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal\n5008, 58051-970João PessoaCEP, PBBrazil\n", "‡ ", "H Moradpour \nResearch Institute for Astronomy and Astrophysics of Maragha (RIAAM)\nP.O. Box 55134-441MaraghaIran\n", "§ ", "J P Morais Graça \nDepartamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal\n5008, 58051-970João PessoaCEP, PBBrazil\n", "I G Salako \nInstitut de Mathématiques et de Sciences Physiques (IMSP)\n01 BP 613Porto-NovoBénin\n" ]
[ "Departamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal\n5008, 58051-970João PessoaCEP, PBBrazil", "Research Institute for Astronomy and Astrophysics of Maragha (RIAAM)\nP.O. Box 55134-441MaraghaIran", "Departamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal\n5008, 58051-970João PessoaCEP, PBBrazil", "Institut de Mathématiques et de Sciences Physiques (IMSP)\n01 BP 613Porto-NovoBénin" ]
[]
A promising theory in modifying general relativity by violating the ordinary energy-momentum conservation law in curved spacetime is the Rastall theory of gravity. In this theory, geometry and matter fields are coupled to each other in a non-minimal way. Here, we study thermodynamic properties of some black hole solutions in this framework, and compare our results with those of general relativity. We demonstrate how the presence of these matter sources amplifies effects caused by the Rastall parameter in thermodynamic quantities. Our investigation also shows that black holes with radius smaller than a certain amount (≡ r_0) have negative heat capacity in the Rastall framework. In fact, it is a lower bound for the possible values of horizon radius satisfied by stable black holes.
10.1142/s0218271818500694
[ "https://arxiv.org/pdf/1710.04612v2.pdf" ]
119,486,835
1710.04612
788ab78ff932e8b2bd55ebe2cf46626f7016ff40
Thermodynamics of Black Holes in Rastall Gravity
9 Mar 2018
Iarley P. Lobo, Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, CEP 58051-970, João Pessoa, PB, Brazil
H. Moradpour, Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), P.O. Box 55134-441, Maragha, Iran
J. P. Morais Graça, Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, CEP 58051-970, João Pessoa, PB, Brazil
I. G. Salako, Institut de Mathématiques et de Sciences Physiques (IMSP), 01 BP 613, Porto-Novo, Bénin

A promising theory in modifying general relativity by violating the ordinary energy-momentum conservation law in curved spacetime is the Rastall theory of gravity. In this theory, geometry and matter fields are coupled to each other in a non-minimal way. Here, we study thermodynamic properties of some black hole solutions in this framework, and compare our results with those of general relativity. We demonstrate how the presence of these matter sources amplifies effects caused by the Rastall parameter in thermodynamic quantities. Our investigation also shows that black holes with radius smaller than a certain amount (≡ r_0) have negative heat capacity in the Rastall framework. In fact, it is a lower bound for the possible values of horizon radius satisfied by stable black holes.

Introduction

One of the big puzzles in science is the fact that our universe is going through a phase of accelerated expansion. A possible explanation for this is the presence of a dark energy field, one with constant positive energy density and negative pressure, that can provide a kind of matter that allows such acceleration.
Despite the fact that the nature of the dark sectors of the cosmos is currently unknown, standard cosmology is very successful in describing the history of the cosmos. One of these models, based on the presence of a cosmological constant term in Einstein's gravity, is called ΛCDM, or Λ Cold Dark Matter, where Λ is the cosmological constant and plays the role of dark energy. The most promising explanation for the existence of a cosmological constant is the vacuum energy of elementary particles, but calculations tell us that such vacuum energy is more than one hundred orders of magnitude higher than the measured value of Λ. Another possibility is that the negative pressure is generated by some peculiar kind of perfect fluid, whose ratio of pressure to energy density lies between −1 and −1/3. If this perfect fluid is generated by a scalar field, it is generally called quintessence, and it is considered a hypothetical form of dark energy. Therefore, if this is the case, we must consider that our universe is pervaded by such a fluid, and we must study strong gravity objects, such as black holes, in contact with it. This is the idea presented by Kiselev in [1], and it has been generalized to the Rastall model of gravity [2] in Ref. [3]. The idea behind the Rastall model is that our laws of conservation, such as conservation of mass/energy, have been probed only in the flat or weak-field arena of spacetime [2]. A new generalization of this theory has recently been proposed, introducing the coupling between matter and gravitational fields in a non-minimal way as an origin for the accelerating phase of the universe [4]. Based on Rastall's argument, the requirement that the covariant derivative of the energy-momentum tensor be zero can be relaxed, allowing one to add new terms to Einstein's equation. In fact, it has recently been shown that the divergence of the energy-momentum tensor can be non-zero in a curved spacetime [5].
For this theory, several exact solutions have been obtained, both for astrophysical [3, 6-13] and cosmological scenarios [14-20]. Comparing thermodynamic quantities and properties of black holes in Rastall gravity with their counterparts in general relativity helps us become more familiar with the nature of the non-minimal coupling between geometry and matter fields introduced in the Rastall hypothesis. Besides, thermodynamic properties of Kiselev solutions in the general relativity framework have been studied by some authors [21-26]. In fact, thermodynamic properties of diverse black holes have extensively been studied in various theories of gravity. Hence, our aim in this paper is to study the thermodynamic quantities and properties of black holes surrounded by a perfect fluid in the Rastall framework. An appealing property that we found concerns the coupling of the Rastall parameter with the energy density of the surrounding fields: they couple in such a way that the field densities work as an amplifier of Rastall-like deformations, which could help us to experimentally distinguish between this formalism and general relativity. This paper is organized as follows. In the next section, we address general remarks on black hole solutions surrounded by a perfect fluid in general relativity and the Rastall theory, as well as the thermodynamic quantities of black holes in the Rastall framework. In sections 3 and 4, the energy and pressure of solutions surrounded by a quintessence field and a cosmological constant, respectively, as promising approaches to describe the current accelerating universe, are studied in detail in the Einstein and Rastall frameworks. The case of a phantom field is also investigated in section 5. In section 6, we study the possibility of the occurrence of phase transitions in Rastall black holes. The last section is devoted to a summary and concluding remarks.
A black hole surrounded by a perfect fluid in Rastall gravity; general remarks

In this paper we will work with a Schwarzschild-like metric, given by

ds^2 = -f(r)\,dt^2 + \frac{1}{f(r)}\,dr^2 + r^2\,d\Omega^2,  (1)

with dΩ^2 = dθ^2 + sin^2(θ)\,dφ^2, along with a general spherically symmetric energy-momentum tensor. The general expression for the time and spatial components of such a tensor is given by

T^t{}_t = A(r),  T^t{}_i = 0,  T^i{}_j = C(r)\,r^i r_j + B(r)\,\delta^i_j.  (2)

For general relativity, Kiselev has shown [1] that if the background is filled by a source with pressure p(r) and energy density ρ(r), related to each other by a state parameter ω_q ≡ p(r)/ρ(r), then

f(r) = 1 - \frac{2M_K}{r} - N_K\, r^{\eta_K},  (3)

in which M_K and N_K are constants of integration and

\eta_K = -1 - 3\omega_q.  (4)

Black holes surrounded by a perfect fluid in Rastall gravity

Rastall gravity is a theory where the total energy-momentum tensor is not conserved, but its covariant derivative is proportional to the derivative of the Ricci scalar [2]. This means that, in a flat spacetime or as a first approximation of a weak gravitational field, all the known laws of conservation are valid. We should stress that such laws have been tested only in the weak regime of gravity, and it is known that gravity can produce particles via quantum effects, thus breaking some of these laws [2]. The Rastall hypothesis can be written as

\nabla_\mu T^{\mu\nu} = \lambda\, \nabla^\nu R,  (5)

where λ is the Rastall parameter, and general relativity is recovered in the limit λ → 0. From (5), we can write the modified equations of gravity as

H^\mu{}_\nu \equiv G^\mu{}_\nu + \lambda\kappa\,\delta^\mu_\nu\, R = \kappa\, T^\mu{}_\nu,  (6)

where κ is Rastall's gravitational constant. To find solutions to these field equations, one should solve the set of equations (6) for some energy-momentum tensor. Taking the trace of equation (6), we have

R\,(4\lambda\kappa - 1) = \kappa T,  (7)

which means that in vacuum one must have R = 0 or κλ = 1/4.
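Spelling out the trace step behind Eq. (7): in four dimensions δ^μ_μ = 4 and the trace of the Einstein tensor is G^μ{}_μ = −R, so taking the trace of Eq. (6) gives

```latex
H^{\mu}{}_{\mu}
  = G^{\mu}{}_{\mu} + \lambda\kappa\,\delta^{\mu}_{\mu}\,R
  = -R + 4\lambda\kappa R
  = R\,(4\lambda\kappa - 1)
  = \kappa\,T^{\mu}{}_{\mu} \equiv \kappa T ,
```

so in vacuum (T = 0) either R = 0 or 4λκ = 1.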
As the latter option is not allowed (see [2,52]), the former should be the case, and we must have that all vacuum solutions in GR are also solutions of Rastall gravity. Applying Kiselev's approach [1] to the field equations, and considering ρ(r) = A r^β, where both A and β are constants, Heydarzade and Darabi [3] found

\beta = -\frac{3(1+\omega_q) - 12\kappa\lambda(1+\omega_q)}{1 - 3\kappa\lambda(1+\omega_q)},  (8)

and

A = \frac{3N(1-4\kappa\lambda)\,[\kappa\lambda(1+\omega_q) - \omega_q]}{\kappa\,[1 - 3\kappa\lambda(1+\omega_q)]^2},  (9)

where N is an integration constant, related to the surrounding field. Hence, the metric function is given by

f(r) = 1 - \frac{2M}{r} - N r^{\eta},  (10)

with

\eta = -\frac{1 + 3\omega_q - 6\kappa\lambda(1+\omega_q)}{1 - 3\kappa\lambda(1+\omega_q)},  (11)

where M is another integration constant, representing the black hole mass. For λ → 0 we recover the solution found by Kiselev [1] in the framework of general relativity. It is worthwhile mentioning that similar solutions can also be obtained in the Rastall framework for other situations [10,11]. For each choice of equation of state, we can find the metric, as given by (10), along with the constant A, given by (9). To preserve the weak energy condition, i.e., ρ > 0, we must have A > 0, and this will define the sign of the parameter N as dependent on the parameter κλ. For a full discussion of each case, see [3].

Thermodynamic quantities of black holes in Rastall gravity

To work on the thermodynamic aspects of the Rastall model of gravity, we must define classical thermodynamic quantities, such as energy and entropy. These are local quantities, but in general relativity there is no straightforward way (it is even senseless) to define the local energy of a gravitational field configuration. For the total energy, the most accepted definitions are the ADM energy at spatial infinity [53] and the Bondi-Sachs energy at null infinity [54,55], both describing an isolated system in an asymptotically flat spacetime.
But its local counterpart should be chosen as a useful quantity, defined for the interior of some well-defined boundary, that goes to one of the well-accepted values of energy as the boundary goes to infinity. Some useful definitions for this quasi-local notion of energy exist in the literature [66]; in this paper we will use the results obtained in [52], which rely on a generalized Misner-Sharp definition of energy [56] as a suitable definition of energy [67-79] to find the corresponding entropy related to the geometry of the spacetime in Rastall gravity. The unified first law of thermodynamics (UFL) is defined as [80]

dE ≡ A Ψ_a dx^a + W dV,   (12)

in which Ψ_a = T^b_a ∂_b r + W ∂_a r is the energy supply vector, and W = -h^{ab} T_{ab}/2 denotes the work density. Moreover, h_{ab} = diag(-f(r), 1/f(r)) for metric (1); in fact, it is the metric on the two-dimensional (t, r) hypersurface. The UFL is compatible with the generalization of the Misner-Sharp mass [80], and therefore one can use the above equation in order to find the generalized Misner-Sharp mass in the gravitational theory under investigation [52,80]. Defining γ ≡ λκ, the Newtonian limit leads to [52]

κ = [(4γ - 1)/(6γ - 1)] 8π,   (13)

and applying the unified first law of thermodynamics to the horizon of metric (1), one can get the Misner-Sharp mass content of black holes in Rastall gravity as

E = (6γ - 1)/(2(4γ - 1)) [(1 - 2γ) r_H + γ r_H^2 f'(r_H)],   (14)

where ' means derivative with respect to the coordinate r, and r_H denotes the horizon radius. In order to obtain the system pressure, one can use the r-r component of the Rastall field equations (6) to obtain [52]

P(r_H) = (6γ - 1)/((4γ - 1) 8π) { (1/r_H)[r_H f'(r_H) - 1] - (γ/r_H^2)[r_H^2 f''(r_H) + 4 r_H f'(r_H) - 2] }.   (15)

In addition, bearing the first law of thermodynamics (dE = T dS - P dV) in mind, and using Eqs.
(14) and (15), it has been shown that the entropy of the black hole is [52]

S = [1 + 2γ/(4γ - 1)] S_o.   (16)

For a general derivation of thermodynamic quantities in Rastall gravity, see [52]. Here, S_o = A/4 is the well-known Bekenstein entropy. We can note that, as γ → 0, one recovers the formulas valid in general relativity [52]. It is useful to note here that since the γ = 1/4 case is not allowed in this theory [2,52], the singularity of the above relations at this value of γ is not worrying.

Thermodynamics of a black hole surrounded by a quintessence field in Rastall gravity

A quintessence field, with state parameter ω_q ranging between -1 < ω_q < -1/3, may be responsible for the observed accelerated expansion of the universe [1,57] (for a review, see [58]). Here, we will consider the case of a black hole surrounded by a quintessence field with ω_q = -2/3, such that

f(r) = 1 - 2M/r - N_q r^{(1+2γ)/(1-γ)},   (17)

which reproduces the solution of Ref. [1] for γ = 0. Applying this state parameter to Eq. (14), the Misner-Sharp mass content confined in the horizon is

E_q = [(1 - 6γ)/8] [ (1 - γ) r_H/(1/4 - γ) - N_q γ(2 + γ) r_H^{(2+γ)/(1-γ)} / ((γ - 1)(γ - 1/4)) ],   (18)

which is equal to the Schwarzschild case E = r_H/2 for γ = N_q = 0. We plotted E_q as a function of the horizon radius r_H for different values of the Rastall parameter γ in Fig. 1. One can see that for small positive values of γ, the energy grows until a maximum value is reached, then diminishes indefinitely and eventually becomes negative beyond a finite value of r_H. Such behavior is absent in general relativity, where a linear energy growth is observed. The coupling between the Rastall parameter γ and the quintessence energy density parameter N_q is essential for this kind of behavior. Instead, for small negative values of γ, Rastall gravity furnishes an ever positive contribution to the Misner-Sharp mass.
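As an external cross-check of the limits quoted above (not part of the original paper), one can apply Eq. (14) to the quintessence metric (17) with SymPy and verify that the Misner-Sharp mass reduces to the Schwarzschild value E = r_H/2 when γ = N_q = 0, and that for N_q = 0 the mass stays linear in r_H, so the non-monotonic behavior of Fig. 1 is indeed produced by the γ-N_q coupling:

```python
import sympy as sp

g, rH, Nq = sp.symbols('gamma r_H N_q', positive=True)
r, M = sp.symbols('r M', positive=True)

k = (1 + 2*g)/(1 - g)                              # quintessence exponent of Eq. (17)
f = 1 - 2*M/r - Nq*r**k                            # metric function
Mhor = sp.solve(sp.Eq(f.subs(r, rH), 0), M)[0]     # horizon condition f(r_H) = 0

# Misner-Sharp mass, Eq. (14), evaluated on the horizon
E = (6*g - 1)/(2*(4*g - 1)) * ((1 - 2*g)*rH
    + g*rH**2*sp.diff(f, r).subs(r, rH).subs(M, Mhor))
E = sp.simplify(E)

# Schwarzschild limit gamma = N_q = 0: E = r_H/2, as stated in the text
assert sp.simplify(E.subs(g, 0).subs(Nq, 0) - rH/2) == 0

# Without quintessence (N_q = 0) the mass is linear in r_H for any gamma
assert sp.simplify(sp.diff(E.subs(Nq, 0), rH, 2)) == 0
print("E_q checks passed")
```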
Considering deviations from GR like γ = ±0.003, γ = ±0.005 and N_q = 0.01, we plotted it just up to r_H ∼ 10 km, because this is the order of magnitude of the BHs detected by LIGO [59-62]. In these cases, the energy content within the horizon is increased in comparison to GR for a fixed value of the horizon. The energy density of the quintessence fluid is proportional to N_q (as can be seen in [3]), and in GR the formula for the Misner-Sharp mass does not depend on N_q; therefore, the higher the quintessence energy density, the higher the difference between GR and Rastall gravity. The pressure at the horizon is found from (15) as

P_q = -(N_q/8π) (2 + γ)(1 - 6γ)/(1 - γ)^2 r_H^{(4γ-1)/(1-γ)}.   (19)

The quintessence fluid is characterized by a negative pressure (for a positive energy density). This way, it is expected that its presence might be signalled by the negativity of the system's pressure. This feature is present both in GR and in Rastall gravity; however, its dependence on the horizon r_H is affected by the Rastall parameter. For the cases that we are considering, its qualitative behavior is preserved (as can be seen in Fig. 2), i.e., the pressure gets suppressed by the BH horizon's size. It should be noted that if the Rastall parameter is such that (2 + γ)(1 - 6γ) < 0, its coupling with N_q inverts the sign of the pressure to positive. The pressure is negative for -2 < γ < 1/6 and positive for γ < -2 ∪ γ > 1/6. Also, the modulus of the pressure grows with the horizon for 1/4 < γ < 1 and decreases for γ < 1/4 ∪ γ > 1 (which is the present case).

Thermodynamics of a black hole surrounded by a cosmological constant in Rastall gravity

Consider a cosmological constant surrounding the BH, i.e., ω_c = -1, which may also presumably drive the accelerated expansion of the universe [63].
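The sign and monotonicity claims above can be checked numerically from Eq. (19) (a quick sanity check, not part of the original paper; the sample γ values are arbitrary points inside each interval):

```python
from math import pi

# Horizon pressure of the quintessence case, Eq. (19)
def P_q(gamma, rH, Nq=0.01):
    return (-Nq/(8*pi) * (2 + gamma)*(1 - 6*gamma)/(1 - gamma)**2
            * rH**((4*gamma - 1)/(1 - gamma)))

# Negative for -2 < gamma < 1/6, positive for gamma < -2 or gamma > 1/6
assert P_q(0.0, 2.0) < 0 and P_q(-1.0, 2.0) < 0
assert P_q(0.5, 2.0) > 0 and P_q(-3.0, 2.0) > 0

# |P_q| decreases with the horizon for gamma < 1/4 (the plotted cases of Fig. 2) ...
assert abs(P_q(0.005, 10.0)) < abs(P_q(0.005, 1.0))
# ... and grows with it for 1/4 < gamma < 1
assert abs(P_q(0.5, 10.0)) > abs(P_q(0.5, 1.0))
print("P_q sign checks passed")
```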
The metric function is the same in GR and Rastall gravity (it does not depend on γ), and is given by

f(r) = 1 - 2M/r - N_c r^2.   (20)

In fact, it is a solution obtained from other considerations as well [10,11]. The Misner-Sharp mass confined in the horizon is

E_c = (1 - 6γ)/(2 - 8γ) [1 - γ - 3γ N_c r_H^2] r_H.   (21)

As can be seen from Eq. (21), even though the metric does not depend on γ, the Misner-Sharp mass is deformed due to the modified field equations of this theory. The preservation of the metric function is responsible for avoiding a possible γ-dependence in the power of r_H. However, as in the previous case, an important extra contribution arises from the coupling between the energy density N_c and γ. For the same reasons as in the previous section, and for the same set of parameters, we depict E_c as a function of r_H in Fig. 3. As can be seen, a similar qualitative behavior is found independently of the fluid under consideration. From Eq. (21), for N_c > 0, the Misner-Sharp mass is a concave function of r_H for 0 < γ < 1/6 ∪ γ > 1/4, and a convex one for γ < 0 ∪ 1/6 < γ < 1/4. The pressure at the horizon is a constant,

P_c = -(3/8π)(1 - 6γ) N_c.   (22)

It becomes positive for γ > 1/6 whenever N_c > 0. It is also apparent that a negative pressure is obtainable for γ > 1/6 if N_c < 0.

Thermodynamics of a black hole surrounded by a phantom field in Rastall gravity

Another interesting fluid that we analyze consists of the so-called phantom field [57,64] with a super-negative equation of state ω_p < -1. For our purposes, we consider ω_p = -4/3.
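Again as an external sanity check (not in the original paper), the concavity and pressure-sign statements for the cosmological-constant case, Eqs. (21) and (22), can be verified with SymPy at sample points inside each γ interval:

```python
import sympy as sp

g, rH, Nc = sp.symbols('gamma r_H N_c', real=True)

Ec = (1 - 6*g)/(2 - 8*g) * (1 - g - 3*g*Nc*rH**2) * rH   # Eq. (21)
d2 = sp.simplify(sp.diff(Ec, rH, 2))                     # curvature in r_H

# For N_c > 0: concave for 0 < gamma < 1/6 or gamma > 1/4,
# convex for gamma < 0 or 1/6 < gamma < 1/4
pt = {Nc: 1, rH: 1}
assert d2.subs({g: 0.1, **pt}) < 0 and d2.subs({g: 0.5, **pt}) < 0
assert d2.subs({g: -0.1, **pt}) > 0 and d2.subs({g: 0.2, **pt}) > 0

# Constant horizon pressure, Eq. (22): positive for gamma > 1/6 when N_c > 0
Pc = -3*(1 - 6*g)*Nc/(8*sp.pi)
assert Pc.subs({g: 0.5, Nc: 1}) > 0 and Pc.subs({g: 0.0, Nc: 1}) < 0
print("E_c and P_c checks passed")
```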
Thus the metric function reads

f(r) = 1 - 2M/r - N_p r^{(3-2γ)/(1+γ)}.   (23)

The Misner-Sharp mass confined in the horizon is

E_p = [(6γ - 1)/8] [ (1 - γ) r_H/(γ - 1/4) + N_p γ(γ - 4) r_H^{(4-γ)/(1+γ)} / ((γ + 1)(γ - 1/4)) ].   (24)

Qualitatively, it behaves similarly to the quintessence case, however with a mass growth governed by an approximately fourth-power law, instead of the approximately squared one of the quintessence field, as depicted in Fig. 4. The pressure at the horizon is

P_p = -(N_p/8π) (4 - γ)(1 - 6γ)/(1 + γ)^2 r_H^{(1-4γ)/(1+γ)}.   (25)

It becomes more negative with the growth of r_H. Also, the distinction between the various values of the Rastall parameter γ becomes more explicit with the increase of the horizon, as can be seen in Fig. 5. The pressure is positive for 1/6 < γ < 4 and negative for γ < 1/6 ∪ γ > 4. Also, the modulus of the pressure decreases with the horizon for γ < -1 ∪ γ > 1/4 and increases for -1 < γ < 1/4 (which is the present case).

Phase transition in black holes

The possibility of the occurrence of phase transitions [65] for black holes has been studied in various theories of gravity to get more information on the thermodynamic features of black holes. Here, we are going to study phase transitions for black holes in Rastall gravity by focusing on the Kiselev counterpart solutions [1] in Rastall gravity [3]. Indeed, these solutions are generally more than a generalization of the Kiselev solutions to the Rastall framework, and can also be valid in some other situations [10,11]. For the spherically symmetric static metric (1), the radius of the event horizon, r_H, can be found by solving the equation f(r_H) = 0. Bearing Eq. (10) in mind, one can easily see that for N > 0 and N < 0, the η = 2 case recovers the de Sitter (dS) and anti-de Sitter (AdS) universes, respectively. Moreover, the Reissner-Nordström (RN) universe can be obtained by taking into account the η = -2 case [10,11].
In fact, if a black hole is surrounded by a radiation source (ω_q = 1/3), then independently of the value of the parameter γ, we have η = -2 [10,11]. Now, since S = [1 + 2γ/(4γ - 1)] π r_H^2, we have r_H = α√S, where α = √((4γ - 1)/(π(6γ - 1))), which can be combined with Eq. (10) to get

f'(r_H) → f'(S) = M'/S - N' S^{(η-1)/2},   (26)

in which N' ≡ η α^{η-1} N and M' ≡ 2M/α^2. Since the Misner-Sharp definition of energy is fully compatible with the unified first law of thermodynamics [52], we take it as the total thermodynamic energy of the system. Therefore, using Eq. (14) and defining α' ≡ (1 - 2γ)/(2πα) = (1 - 2γ)√((6γ - 1)/(4π(4γ - 1))), we easily arrive at

E(S) = α'√S + (γ/2π)(M' - N' S^{(η+1)/2}).   (27)

The Hawking temperature and the heat capacity are found as

T(S) = dE/dS = (α̃ - Ñ S^{η/2})/√S,   (28)

and

C = T dS/dT = 2S(α̃ - Ñ S^{η/2}) / [(α̃ - Ñ S^{η/2})(η - 1) - η α̃],   (29)

where α̃ = α'/2 and Ñ = γ N'(η + 1)/(4π), respectively. Hence, the heat capacity diverges at S_0 = S_m/(1 - η)^{2/η}, where S_m ≡ (α̃/Ñ)^{2/η}, meaning that there can be a second-order phase transition at this point [65]. In fact, bearing Eq. (11) in mind, one can see that the value of η, and therefore the possibility of the occurrence of a phase transition, depends on the values of γ and ω_q. As a check, we can easily see that the results for the Schwarzschild metric are obtained by applying both the γ → 0 and η → 0 limits to the above results. It is interesting to note here that if α̃ = 0 and η > 1/2, then T → 0 for S → 0. This means that the second law of thermodynamics is satisfied in this case. Moreover, Ñ should be negative to meet the S > 0 condition, meaning that this case requires N ≤ 0. Besides, the heat capacity is positive only if η > 1. Additionally, since S_0 = 0, there is only one phase, with E = (γ/2π)[M' - 2Ñ/(η + 1) S^{(η+1)/2}]. This way, if η = 2 and γ = 1/2, we have α' = α̃ = 0 and ω_q = -1. Moreover, Eq. (16) implies S = 2S_o in this situation.
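The steps from Eq. (27) to Eq. (29) can be reproduced symbolically (an external check, not part of the paper): differentiating E(S) gives the temperature (28), C = T dS/dT gives (29), and the denominator of (29) vanishes at S_0; for the parameters used later in Figs. 9 and 10 (η = 1/2, α̃ = -1, Ñ = -5) this reproduces S_0 = 16 S_m:

```python
import sympy as sp

S = sp.symbols('S', positive=True)
at, Nt, eta, Mp, g = sp.symbols('alpha_t N_t eta Mp gamma', real=True)

# E(S) of Eq. (27), with alpha' = 2*alpha_t and N' = 4*pi*N_t/(g*(eta+1)),
# i.e. the definitions alpha_t = alpha'/2 and N_t = gamma*N'*(eta+1)/(4*pi)
E = 2*at*sp.sqrt(S) + g/(2*sp.pi)*(Mp - 4*sp.pi*Nt/(g*(eta + 1))*S**((eta + 1)/2))

T = sp.diff(E, S)                 # Hawking temperature, Eq. (28)
C = sp.simplify(T/sp.diff(T, S))  # heat capacity, Eq. (29)

C29 = 2*S*(at - Nt*S**(eta/2)) / ((at - Nt*S**(eta/2))*(eta - 1) - eta*at)
for ev in (3, sp.Rational(1, 2)):                     # spot checks at two eta values
    assert sp.simplify((T - (at - Nt*S**(eta/2))/sp.sqrt(S)).subs(eta, ev)) == 0
    assert sp.simplify((C - C29).subs(eta, ev)) == 0

# Divergence of C for the parameters of Figs. 9 and 10
den = ((at - Nt*S**(eta/2))*(eta - 1) - eta*at).subs(
    {eta: sp.Rational(1, 2), at: -1, Nt: -5})
S0 = 16*sp.Rational(1, 5)**4                          # S_0 = 16*S_m
assert sp.simplify(den.subs(S, S0)) == 0
print("T, C and S_0 checks passed")
```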
To clarify the behavior of this case, the energy, temperature and heat capacity have been plotted in Fig. 6. Now, for the α̃Ñ < 0 case parallel to α̃ > 0, since Ñ is negative, the temperature is positive everywhere, and T(S → 0) → ∞, meaning that the second law of thermodynamics is not satisfied, while for η < 1 the temperature drops to zero for S ≫ 1. If we have η > 1, then the temperature has a minimum located at S = S_0, for which T(S_0) = η α̃/((η - 1)√S_0), and it increases as a function of S for S > S_0. It is worthwhile mentioning that, for η = 1, there is no singularity in the behavior of the heat capacity, and T(S ≫ 1) ≈ -Ñ. The temperature, energy and heat capacity have been plotted in Figs. 7 and 8, respectively. Here, we only focus on the M' = 2π/γ case, which, combined with the definitions of M', α, γ and Eq. (13), leads to M = λ/(8πG). In Fig. 7, we show that while the temperature is positive for S < S_0, its changes are very expressive. Besides, as is apparent from Fig. 8, the heat capacity is negative for S < S_0, meaning that it is an unstable phase [51]. Therefore, black holes of radius r_H < r_0 ≡ α√S_0 are unstable and, in fact, r_0 is a lower bound for the radius of stable black holes in this approach. Based on Eq. (28), if α̃Ñ > 0 (or equally α̃ < 0), then T = 0 for S = S_m ≡ (α̃/Ñ)^{2/η}, and thus the system can attain negative temperatures [65]. In fact, this situation is very similar to a system in which magnetic dipoles are located in the direction of the external magnetic field B [65]. In Figs. 9 and 10, the temperature, heat capacity and energy have been plotted for η = 1/2, leading to S_0 = 16 S_m. In this manner, both the heat capacity and temperature are negative while 0 < S < S_m. They simultaneously attain positive values for S_m < S < S_0. For S > S_0, although the temperature is positive, the heat capacity is again negative, thus signalling an unstable state [51,65]. Finally, it should again be noted that, unlike Refs.
, we used the Misner-Sharp energy, in full agreement with the unified first law of thermodynamics [52], as the thermodynamic potential in our calculations.

Considerations

We studied some thermodynamic properties of black holes in Rastall gravity. The Misner-Sharp mass of Rastall black holes has been used in our approach so as to be compatible with the unified first law of thermodynamics. Our investigation shows that the difference between the Misner-Sharp mass of Rastall black holes and their counterparts in Einstein's gravity decreases as the size of the black hole is reduced. The behavior of the thermodynamic pressure of black holes has also been studied, showing that a non-minimal coupling between geometry and matter fields in the Rastall way can lead to notable effects on the pressure of the system. We finally investigated the possibility of the occurrence of phase transitions for Rastall black holes. As in general relativity [51], a lower bound for the horizon radius (r_0) was obtained, indicating that the heat capacity of black holes with radius smaller than r_0 is negative. This means that such black holes are unstable.

Figure 1. Misner-Sharp mass for the quintessence field, E_q, as a function of the horizon r_H, in solar mass units. We considered the cases of γ = -0.005 (blue, dashed line), γ = -0.003 (blue, dash-dotted line), GR (black, solid line), γ = 0.003 (red, dash-dotted, thick line) and γ = 0.005 (red, dashed, thick line). We used N_q = 0.01; M_⊙ G/c^2 ≈ 1.48 km is the solar mass in length units.

Figure 2. The pressure for the quintessence field, P_q, as a function of the horizon r_H. We considered the cases of γ = -0.005 (blue, dashed line), γ = -0.003 (blue, dash-dotted line), GR (black, solid line), γ = 0.003 (red, dash-dotted, thick line) and γ = 0.005 (red, dashed, thick line). We used N_q = 0.01.

Figure 3. Misner-Sharp mass for the quintessence field, E_q, as a function of the horizon r_H, in solar mass units.
We considered the cases of γ = -0.005 (blue, dashed line), γ = -0.003 (blue, dash-dotted line), GR (black, solid line), γ = 0.003 (red, dash-dotted, thick line) and γ = 0.005 (red, dashed, thick line). We used N_q = 10^{-6}; M_⊙ G/c^2 ≈ 1.48 km is the solar mass in length units.

Figure 4. Misner-Sharp mass for the phantom field, E_p, as a function of the horizon r_H, in solar mass units. We considered the cases of γ = -0.005 (blue, dashed line), γ = -0.003 (blue, dash-dotted line), GR (black, solid line), γ = 0.003 (red, dash-dotted, thick line) and γ = 0.005 (red, dashed, thick line). We used N_p = 10^{-10}.

Figure 5. The pressure for the phantom field, P_p, as a function of the horizon r_H. We considered the cases of γ = -0.005 (blue, dashed line), γ = -0.003 (blue, dash-dotted line), GR (black, solid line), γ = 0.003 (red, dash-dotted, thick line) and γ = 0.005 (red, dashed, thick line). We used N_p = 10^{-10}.

Figure 6. Temperature and heat capacity for η = 2, Ñ = -5 and α̃ = 0. For the energy curve, M' = 4π and γ = 1/2, compatible with α̃ = 0 and leading to ω_q = -1.

Figure 7. Temperature and energy for M' = 2π/γ, Ñ = -5 and α̃ = 1, while η = 3.

Figure 8. Heat capacity for Ñ = -5, α̃ = 1 and η = 3. There is a second-order phase transition located at S_0.

Figure 9. Temperature and heat capacity for Ñ = -5 and α̃ = -1, while η = 1/2 and thus S_m = (1/5)^4.

Figure 10. Energy for M' = 2π/γ, Ñ = -5 and α̃ = -1, while η = 1/2.

References

[1] V. V. Kiselev, Class. Quant. Grav. 20, 1187 (2003).
[2] P. Rastall, Phys. Rev. D 6, 3357 (1972).
[3] Y. Heydarzade, F. Darabi, Phys. Lett. B 771, 365 (2017).
[4] H. Moradpour, Y. Heydarzade, F. Darabi, I. G. Salako, Eur. Phys. J. C 77, 259 (2017).
[5] T. Josset, A. Perez, Phys. Rev. Lett. 118, 021102 (2017).
[6] M. S. Ma, R. Zhao, Eur. Phys. J. C
77, 629 (2017).
[7] E. R. Bezerra de Mello, J. C. Fabris, B. Hartmann, Class. Quant. Grav. 32, no. 8, 085009 (2015).
[8] A. M. Oliveira, H. E. S. Velten, J. C. Fabris, L. Casarini, Phys. Rev. D 92, no. 4, 044020 (2015).
[9] K. A. Bronnikov, J. C. Fabris, O. F. Piattella, E. C. Santos, Gen. Rel. Grav. 48, no. 12, 162 (2016).
[10] A. M. Oliveira, H. E. S. Velten, J. C. Fabris, L. Casarini, Phys. Rev. D 93, 124020 (2016).
[11] Y. Heydarzade, H. Moradpour, F. Darabi, Can. Jour. Phys.
[12] I. Licata, H. Moradpour, C. Corda, Int. Jour. Geo. Meth. Mod. Phys. 14, 1730003 (2017).
[13] E. Spallucci, A. Smailagic, arXiv:1709.05795.
[14] M. Capone, V. F. Cardone, M. L. Ruggiero, Nuovo Cim. B 125, 1133 (2011).
[15] C. E. M. Batista, M. H. Daouda, J. C. Fabris, O. F. Piattella, D. C. Rodrigues, Phys. Rev. D 85, 084008 (2012).
[16] G. F. Silva, O. F. Piattella, J. C. Fabris, L. Casarini, T. O. Barbosa, Grav. Cosmol. 19, 156 (2013).
[17] A. F. Santos, S. C. Ulhoa, Mod. Phys. Lett. A 30, no. 09, 1550039 (2015).
[18] H. Moradpour, Phys. Lett. B 757, 187 (2016).
[19] F. F. Yuan, P. Huang, Class. Quant. Grav. 34, no. 7, 077001 (2017).
[20] Z. Haghani, T. Harko, S. Shahidi, arXiv:1707.00939.
[21] G. Q. Li, Phys. Lett. B 735, 256 (2014).
[22] B. Majeed, M. Jamil, P. Pradhan, Adv. High Energy Phys. 2015, 124910 (2015).
[23] K. Ghaderi, B. Malakolkalami, Nucl. Phys. B 903, 10 (2016).
[24] K. Ghaderi, B. Malakolkalami, Astrophys. Space Sci. 361, 161 (2016).
[25] K. Ghaderi, B. Malakolkalami, Astrophys. Space Sci. 362, 163 (2017).
[26] Z. Xu, X. Hou, J. Wang, arXiv:1610.05454.
[27] S. W. Hawking, D. N. Page, Commun. Math. Phys. 87, 577 (1983).
[28] J. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998).
[29] E. Witten, Adv. Theor. Math. Phys. 2, 505 (1998).
[30] A. Sahay, T. Sarkar, G. Sengupta, JHEP 2010, 125 (2010).
[31] R. Banerjee, S. K. Modak, S. Samanta, Eur. Phys. J. C 70, 317 (2010).
[32] R. Banerjee, S. K. Modak, S. Samanta, Phys. Rev. D 84, 064024 (2011).
[33] Q. J. Cao, Y. X. Chen, K. N. Shao, Phys. Rev. D 83, 064015 (2011).
[34] R. Banerjee, D. Roychowdhury, Phys. Rev. D 85, 044040 (2012).
[35] R. Banerjee, S. Ghosh, D. Roychowdhury, Phys. Lett. B 696, 156 (2011).
[36] R. Banerjee, D. Roychowdhury, JHEP 2011, 4 (2011).
[37] R. Banerjee, S. K. Modak, D. Roychowdhury, JHEP 2012, 125 (2012).
[38] S. W. Wei, Y.
X. Liu, Eur. Phys. Lett. 99, 20004 (2012).
[39] B. R. Majhi, D. Roychowdhury, Class. Quantum Grav. 29, 245012 (2012).
[40] W. Kim, Y. Kim, Phys. Lett. B 718, 687 (2012).
[41] Y. D. Tsai, X. N. Wu, Y. Yang, Phys. Rev. D 85, 044005 (2012).
[42] F. Capela, G. Nardini, Phys. Rev. D 86, 024030 (2012).
[43] D. Kubiznak, R. B. Mann, JHEP 2012, 33 (2012).
[44] C. Niu, Y. Tian, X.-N. Wu, Phys. Rev. D 85, 024017 (2012).
[45] A. Lala, D. Roychowdhury, Phys. Rev. D 86, 084027 (2012).
[46] A. Lala, Adv. High Energy Phys. 2013, 918490 (2013).
[47] S. W. Wei, Y. X. Liu, Phys. Rev. D 87, 044014 (2013).
[48] M. Eune, W. Kim, S. H. Yi, JHEP 2013, 20 (2013).
[49] M. B. J. Poshteh, B. Mirza, Z. Sherkatghanad, Phys. Rev. D 88, 024005 (2013).
[50] J. X. Mo, W. B. Liu, Phys. Lett. B 727, 3361 (2013).
[51] J. X. Mo, W. B. Liu, Adv. High Energy Phys. 2014, 739454 (2014).
[52] H. Moradpour, I. G. Salako, Adv. High Energy Phys. 2016, 3492796 (2016).
[53] R. L. Arnowitt, S. Deser, C. W. Misner, Phys. Rev. 116, 1322 (1959).
[54] H. Bondi, M. G. J. van der Burg, A. W. K. Metzner, Proc. Roy. Soc. Lond. A 269, 21 (1962).
[55] R. K. Sachs, Proc. Roy. Soc. Lond. A 270, 103 (1962).
[56] C. W. Misner, D. H. Sharp, Phys. Rev. 136, B571 (1964).
[57] A. Vikman, Phys. Rev.
D 71, 023515 (2005).
[58] V. Sahni, Lect. Notes Phys. 653, 141 (2004).
[59] B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett. 116, no. 6, 061102 (2016).
[60] B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett. 116, no. 24, 241103 (2016).
[61] B. P. Abbott et al. [LIGO Scientific and VIRGO Collaborations], Phys. Rev. Lett. 118, no. 22, 221101 (2017).
[62] B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett. 119, no. 14, 141101 (2017).
[63] S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 517, 565 (1999).
[64] R. R. Caldwell, Phys. Lett. B 545, 23 (2002).
[65] R. K. Pathria, P. D. Beale, Statistical Mechanics, Third Edition (Burlington, MA, USA, 2011).
[66] L. B. Szabados, Living Rev. Rel. 7, 4 (2004).
[67] M. Akbar, R. G. Cai, Phys. Lett. B 648, 243 (2007).
[68] M. Akbar, R. G. Cai, Phys. Lett. B 635, 7 (2006).
[69] M. Akbar, R. G. Cai, Phys. Rev. D 75, 084003 (2007).
[70] R. G. Cai, L. M. Cao, Phys. Rev. D 75, 064008 (2007).
[71] R. G. Cai, L. M. Cao, Nucl. Phys. B 785, 135 (2007).
[72] A. Sheykhi, B. Wang, R. G. Cai, Nucl. Phys.
B 779, 1 (2007).
[73] A. Sheykhi, B. Wang, R. G. Cai, Phys. Rev. D 76, 023515 (2007).
[74] A. Sheykhi, J. Cosmol. Astropart. Phys. 05, 019 (2009).
[75] A. Sheykhi, Eur. Phys. J. C 69, 265 (2010).
[76] A. Sheykhi, Class. Quantum Grav. 27, 025007 (2010).
[77] R. G. Cai, N. Ohta, Phys. Rev. D 81, 084061 (2010).
[78] A. Sheykhi, Phys. Rev. D 87, 024022 (2013).
[79] A. Sheykhi, M. H. Dehghani, R. Dehghani, Gen. Relativ. Gravit. 46, 1679 (2014).
[80] R. G. Cai, L. M. Cao, Y. P. Hu, N. Ohta, Phys. Rev. D 80, 104016 (2009).
PRIVACY PRESERVING DISTRIBUTED PROFILE MATCHING IN MOBILE SOCIAL NETWORK

Rachid Chergui ([email protected])
Faculty of Mathematics, P.B. 32, El Alia, Bab Ezzouar 16111, USTHB, Algeria

arXiv:1502.06993 [cs.CR], 24 Feb 2015

Abstract. In this document, a privacy-preserving distributed profile matching protocol is proposed in a particular network context called a mobile social network. Such networks are often deployed in more or less hostile environments, requiring rigorous security mechanisms. At the same time, energy and computational resources are limited, as these heterogeneous networks are frequently constituted by wireless components like tablets or mobile phones. This is why a new encryption algorithm having a high level of security while preserving resources is proposed in this paper. The approach is based on elliptic curve cryptography, more specifically on an almost completely homomorphic cryptosystem over a supersingular elliptic curve, leading to a secure and efficient preservation of privacy in distributed profile matching.

* The research is partially supported by the ATN laboratory of USTHB University.
Introduction

Social networking websites, like Facebook [6] with its 900 million active users or Google+ [7], are of widespread use in our connected and globalized world. A major trend of these social networks is to attempt to provide instant and real-time access for users, whatever their location and the connected device they use. This sensible demand from users has led to the development of mobile social networking (MSN) software like Foursquare [9] and Gowalla [8], in which individuals with similar interests are connected together and converse with one another through either tablets or mobile phones.
In that approach, mobile apps use existing social networks to create native communities and promote discovery, leading to an improvement of web-based social networks using mobile features and accessibility. Making new connections according to personal preferences is a crucial service in MSN, where the initiating user can find matching users within physical proximity of him/her. In existing systems for such services, usually all the users directly publish their complete profiles for others to search. However, in many applications, the users' personal profiles may contain sensitive information that they do not want to make public. Authors of [10] have presented FindU, a first privacy-preserving personal profile matching scheme designed for mobile social networks. In FindU, an initiating user can find, from a group of users, the one whose profile best matches with his/hers; to limit the risk of privacy exposure, only necessary and minimal information about the private attributes of the participating users is exchanged. They introduce a Blind and Permute (BP) protocol. Several increasing levels of user privacy are defined, with decreasing amounts of exchanged profile information. The authors of this document propose to use a different encryption scheme in the BP algorithm. This new scheme can provide a similar level of security while drastically reducing the computation and communication costs, which is critical in the MSN context. In the BP algorithm, encryption over ciphertexts is required. The original method proposed in [10] achieves this requirement using a cryptosystem [12] that needs a lot of resources, which is quite incompatible with the constraints related to MSNs. On the contrary, the scheme proposed here is based on elliptic curve cryptography [15], which leads to smaller keys and cryptograms, low-cost computations and shorter communication messages, thereby largely reducing battery consumption. The remainder of this document is organized as follows.
In Section 2, related works in the field of privacy-preserving profile matching are presented. Then, in Section 3, we recall the FindU protocol with related definitions. We give the BP protocol in Section 4. We construct the homomorphic encryption in Section 5 and we use it in Section 6, with a performance analysis in Section 7. Section 8 concludes this work.

Related Works

The methods used in the field of privacy-preserving distributed profile matching are usually classified into three main categories according to the cryptographic tools they use. In protocols based on oblivious polynomial evaluation, a client and a server compute the intersection of the sets corresponding to their profiles, such that the client gets the result while the server learns nothing. Homomorphic encryption, which allows operations over ciphertexts, is used to evaluate obliviously a polynomial that represents the client's input. This method has been originally proposed in [3], through the FNP scheme. Other examples lying in the same category can be found, for instance, in [4] and [5]. These methods are however impracticable in MSNs because they do not achieve linear computational complexity. Protocols based on oblivious pseudorandom functions consist of two parties that securely compute a pseudorandom function, where one of them holds the key while the other provides the input (set elements). The objective is a secure set intersection. Suppose two parties with private sets wish to learn the intersection set without revealing anything else. Let P_1 and P_2 be two parties that have inputs X and Y respectively, F a pseudorandom function, and k a key for F belonging to P_1. P_2 computes {F_k(y)}_{y∈Y} and P_1 computes {F_k(x)}_{x∈X} and sends the results to P_2. Thus, P_2 compares which elements appear in both sets to learn the intersection [2]. The complexity of this method is smaller than that of the first. The last category consists of protocols based on so-called commutative encryption.
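Returning to the oblivious-PRF matching step described above, here is a minimal (and deliberately non-oblivious) Python sketch, using HMAC-SHA256 as the pseudorandom function F_k. In a real protocol P_2 would obtain the values F_k(y) through an oblivious evaluation, without ever learning k; both evaluations are done in the clear here purely to illustrate the set-intersection logic, and all attribute values are made up for the example:

```python
import hashlib
import hmac

def F(k: bytes, x: str) -> bytes:
    """Pseudorandom function F_k, instantiated here with HMAC-SHA256."""
    return hmac.new(k, x.encode(), hashlib.sha256).digest()

k = b'key held by P1'                    # in the protocol, only P1 knows k
X = {"music", "hiking", "chess"}         # P1's set
Y = {"chess", "cinema", "music"}         # P2's set

tags_X = {F(k, x) for x in X}            # sent by P1 to P2
# P2 would obtain F_k(y) obliviously; computed directly here for illustration
intersection = {y for y in Y if F(k, y) in tags_X}
print(sorted(intersection))              # ['chess', 'music']
```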
An encryption scheme E_k(·) is said to have the commutative property when, for all keys k1 and k2, we have E_k1(E_k2(x)) = E_k2(E_k1(x)). For instance, the well-known RSA encryption scheme has this commutative property. The main idea when considering privacy-preserving profile matching is thus to use the commutative encryption as a keyed one-way hash function, generating for each element x a mapping such that no party knows the key [1]. A commonly cited disadvantage of this method is that it often provides weaker security [10]. The authors of [10] have presented a privacy-preserving profile matching scheme called FindU. FindU is a symmetric protocol, i.e., the output is known at the same time by all parties. The characteristics of this scheme are further detailed in the next section.

The FindU Protocol

Problem Definition

In mobile social networks, devices are connected wirelessly (using interfaces such as Bluetooth or Wi-Fi), so resources are limited and a certain level of security is required. The authors of the FindU algorithm assume that the connection is established under a public-key cryptosystem, where keys are securely distributed to the parties. Then, when a party launches a matching, the BP algorithm ensures that a secret is shared securely. Let us define these stages more precisely. The system consists of N users (parties) denoted P1, ..., PN, each possessing a portable device. We denote the initiating party (initiator) as P1. P1 launches the matching process, and its goal is to find, among the remaining parties P2, ..., PN, called candidates, the one that best matches with it. Each party Pi's profile consists of a set of attributes Si, which can be strings up to a certain length. P1 defines a matching query to be a subset of S1 (in the following we use S1 to denote the query set unless specified otherwise). Also, we denote n = |S1| and m = |Si|, i > 1, assuming for simplicity that each candidate has the same set length.
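Coming back to the commutative property introduced above, it can be demonstrated with an SRA-style (Pohlig–Hellman) scheme, where E_k(x) = x^k mod p for a shared prime p and per-party secret exponents coprime to p − 1: both encryption orders yield x^(k1·k2) mod p. The modulus and exponents below are toy values of our own choosing, for demonstration only.

```python
# SRA-style commutative encryption: E_k(x) = x^k mod p.
# Toy parameters; a real deployment would use a large safe prime
# and randomly generated exponents.
from math import gcd

P = 2**61 - 1  # a Mersenne prime, used as the shared modulus

def enc(key: int, x: int) -> int:
    assert gcd(key, P - 1) == 1, "key must be invertible mod p-1"
    return pow(x, key, P)

k1, k2 = 65537, 123457  # secret exponents of the two parties
x = 424242              # an attribute encoded as an integer

# Encryption commutes: E_k1(E_k2(x)) == E_k2(E_k1(x)).
assert enc(k1, enc(k2, x)) == enc(k2, enc(k1, x))

# Each party can also undo its own layer with the inverse exponent.
k1_inv = pow(k1, -1, P - 1)
assert enc(k1_inv, enc(k1, x)) == x
```

The ability to strip one's own layer regardless of the order in which layers were applied is exactly what lets commutative encryption act as a jointly keyed one-way mapping in matching protocols.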
Let us now introduce the following definitions.

Definition 1. The match of the set Si, i ∈ {2, ..., N}, is by definition the cardinality of S1 ∩ Si.

Definition 2. The best match Pi* is the party having the maximum intersection set size with P1.

P1 will first find out Pi* via the proposed protocol. Then the two parties will decide whether to connect with each other based on their actual intersection set.

Adversary Models

If a party obtains one or more (partial or full) attribute sets without the explicit consent of their owners, we say that it has achieved user profiling. In that context, the two following adversary models can be defined [10].

• Honest-but-Curious (HBC) adversary. In this model, the attacker honestly follows the protocol but tries to learn more information than what is allowed, by inferring from the results.

• Malicious adversary. The attacker tries to learn more information than allowed by deviating from the protocol run.

Design Goals

Here we detail the design goals of the FindU scheme. One of the main goals is to defend against the profiling attack defined in the previous section; we let the user choose his level of security requirement, as discussed next. By definition, the party P1 searches among all parties for the one that best matches with him, and at the end the output of the algorithm contains the intersection sets between his query set and the profile sets of all other parties. By launching FindU, an adversary may thus obtain all this information. The main security goal is to thwart the user profiling attack. Since users may have different privacy requirements, and since achieving them takes different amounts of effort in a protocol run, we hereby define three levels of privacy, where a higher level leaks less information to the adversary.
Note that, by default, all of the following levels include letting P1 and the best match Pi* learn the intersection set between them at the end of a protocol run.

• Privacy level 1 (PL-1). When the protocol ends, P1 and each candidate Pi, 1 < i ≤ N, mutually learn the intersection set between them, that is, I_1,i = S1 ∩ Si. An adversary A should learn nothing beyond what can be derived from the above outputs and private inputs. If we assume the adversary has unbounded computing power, PL-1 actually corresponds to unconditional security for all the parties under the HBC model. Obviously, in PL-1, P1 can obtain all candidates' intersection sets in just one protocol run, so it reveals too much user information to the attacker if he assumes the role of P1. Therefore we define privacy level 2.

• Privacy level 2 (PL-2). When the protocol ends, P1 and each candidate Pi, 1 < i ≤ N, mutually learn the size of their intersection set: m_1,i = |S1 ∩ Si|. In addition, the best match Pi* is allowed to know the m_1,i values of the other Pi's. The adversary A should learn nothing beyond what can be derived from the above outputs and its private inputs.

• Privacy level 3 (PL-3). At the end of the protocol, P1 and each Pi should only learn the ranks of the values m_1,i, 1 < i ≤ N. A should learn nothing more than what can be derived from the outputs and its private inputs. In PL-3, we can require that P1 only contacts the best match Pi*, such that it only obtains the intersection set I_1,i with the best match. In this way, A will need at least N − 1 protocol runs to learn all other users' exact information, so that A's profiling capability is much more limited.

The authors of FindU suggest that the protocol should be lightweight and practical, i.e., efficient enough in computation and communication to be used in MSNs. This is why we suggest introducing homomorphic encryption into the FindU protocol. Readers are referred to [10] for a complete description of FindU. In order to achieve PL-2, the authors introduce homomorphic encryption over ciphertexts.
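The three levels differ only in what each output reveals. As a plain-text reference computation (no cryptography involved), the sketch below shows, for made-up profiles, the outputs corresponding to each level: full intersection sets at PL-1, sizes only at PL-2, and ranks only at PL-3; the best match of Definition 2 is simply the rank-1 party.

```python
S1 = {"jazz", "tennis", "sushi", "travel"}          # P1's query set
profiles = {"P2": {"rock", "tennis", "sushi"},
            "P3": {"jazz", "tennis", "sushi", "hiking"},
            "P4": {"chess", "travel"}}

# PL-1: full intersection sets I_1,i are revealed.
pl1 = {pid: S1 & s for pid, s in profiles.items()}
# PL-2: only the sizes m_1,i are revealed.
pl2 = {pid: len(ix) for pid, ix in pl1.items()}
# PL-3: only the ranks of the m_1,i values are revealed.
order = sorted(pl2, key=pl2.get, reverse=True)
pl3 = {pid: rank + 1 for rank, pid in enumerate(order)}

print(pl2)  # {'P2': 2, 'P3': 3, 'P4': 1}
print(pl3)  # {'P3': 1, 'P2': 2, 'P4': 3}  -> best match is P3
```

The privacy-preserving protocol must reproduce exactly these outputs (and nothing more) at the corresponding level.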
For our part, to largely reduce the energy consumption, we suggest using elliptic-curve-based encryption. The Blind and Permute (BP) protocol, part of the FindU system, is presented in the next section, whereas the proposed improvement is detailed in Section 5.

Blind and Permute Protocol (BP)

The input to the BP protocol is a sequence S = (s_1, ..., s_n) of integer values that is component-wise additively split between A, who has S′ = (s′_1, ..., s′_n), and B, who has S″ = (s″_1, ..., s″_n), such that S = S′ + S″ [12], where + stands for the vectorial addition of integers. The output is a sequence Ŝ obtained from S by:

1. permuting the entries of S according to a random permutation π that is known to neither A nor B,
2. modifying the additive split of the entries of S so that neither A nor B can use their share of it to gain any information about π.

We seek a protocol that does this in linear computation and communication complexity. Observe that it suffices to give a protocol that does half of the job: it blinds and permutes for A according to a random permutation chosen by B. We can then use such a protocol a second time with the roles of A and B reversed, resulting in a permutation that is the composition of two random permutations: one chosen by B and unknown to A, another chosen by A and unknown to B. The protocol where B chooses the permutation is given next (here E is the cryptosystem defined in [12], whose performance is compared to our scheme in Section 7).

1. A computes and sends E_A(s′_1), ..., E_A(s′_n) to B.
2. B selects n random numbers r_1, ..., r_n and, for every i ∈ {1, ..., n}, computes E_A(−r_i) and adds it to the E_A(s′_i) received in the first step, thereby obtaining E_A(s′_i − r_i).
3. B generates a random permutation π_B and applies it to the sequence of E_A(s′_i − r_i)'s computed in the previous step, obtaining a sequence of the form E_A(v′_1), ..., E_A(v′_n) that he sends to A. He also applies π_B to the sequence s″_1 + r_1, ..., s″_n + r_n, obtaining a sequence v″_1, ..., v″_n. Note that the sequence v′_1 + v″_1, ..., v′_n + v″_n is a permuted version of S (permuted according to π_B).
4. A decrypts the n items E_A(v′_1), ..., E_A(v′_n) received from B, obtaining the sequence v′_1, ..., v′_n.

In the FindU algorithm (advanced version), BP permits achieving the PL-2 level of security.
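As a sanity check of the four BP steps, the toy sketch below simulates one blind-and-permute pass on a single machine, using a small Paillier instance as the additively homomorphic cryptosystem of [12]. The hardcoded primes, sequence values, and helper names are illustrative assumptions only; real parameters would be hundreds of digits long.

```python
# Toy single-machine simulation of Blind-and-Permute with a small
# Paillier instance (A's key pair). Not secure -- illustration only.
import math
import random

P_PRIME, Q_PRIME = 47, 59
N = P_PRIME * Q_PRIME            # public modulus n
N2 = N * N
LAM = (P_PRIME - 1) * (Q_PRIME - 1) // math.gcd(P_PRIME - 1, Q_PRIME - 1)
MU = pow((pow(N + 1, LAM, N2) - 1) // N, -1, N)

def enc(m):                      # E_A(m), with generator g = n + 1
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return pow(N + 1, m % N, N2) * pow(r, N, N2) % N2

def dec(c):
    return (pow(c, LAM, N2) - 1) // N * MU % N

def hadd(c1, c2):                # E(m1) * E(m2) decrypts to m1 + m2
    return c1 * c2 % N2

def blind_and_permute(S_a, S_b):
    # Step 1: A sends E_A(s'_i) to B.
    cts = [enc(s) for s in S_a]
    # Step 2: B blinds each ciphertext with E_A(-r_i).
    rs = [random.randrange(N) for _ in S_a]
    blinded = [hadd(c, enc(-r)) for c, r in zip(cts, rs)]
    # Step 3: B applies the same secret permutation to both sequences.
    pi = list(range(len(S_a)))
    random.shuffle(pi)
    v1_ct = [blinded[i] for i in pi]
    v2 = [(S_b[i] + rs[i]) % N for i in pi]
    # Step 4: A decrypts her new, blinded and permuted shares.
    v1 = [dec(c) for c in v1_ct]
    return v1, v2

S = [7, 13, 21, 2]               # the underlying sequence
S_a = [3, 10, 20, 1]             # A's additive share
S_b = [(s - a) % N for s, a in zip(S, S_a)]
v1, v2 = blind_and_permute(S_a, S_b)
# The recombined shares form a permutation of S.
assert sorted((x + y) % N for x, y in zip(v1, v2)) == sorted(S)
```

Note how the blinding values r_i cancel when the shares are recombined, so the split still sums to S while neither share alone reveals the permutation.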
Homomorphic Encryption

We use elliptic-curve-based cryptography to construct the homomorphic encryption function.

Operations over Elliptic Curves: Addition and Multiplication

Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields [13]. Elliptic curves used in cryptography are typically defined over two types of finite fields: prime fields F_p, where p is a large prime number, and binary extension fields F_2^m [14]. In this paper, we focus on elliptic curves over F_p. Let p > 3; an elliptic curve over F_p is then defined by a cubic equation y² = x³ + ax + b as the set

Σ = {(x, y) ∈ F_p × F_p | y² ≡ x³ + ax + b (mod p)},

where a, b ∈ F_p are constants such that 4a³ + 27b² ≢ 0 (mod p). An elliptic curve over F_p consists of the set of all pairs of affine coordinates (x, y) with x, y ∈ F_p that satisfy an equation of the above form, together with a point at infinity O. Point addition, and its special case, point doubling, is defined over Σ as follows (the arithmetic operations are those of F_p [16]). Let P = (x_1, y_1) and Q = (x_2, y_2) be two points of Σ. Then P + Q = O if x_2 = x_1 and y_2 = −y_1, and P + Q = (x_3, y_3) otherwise, where:

• x_3 = λ² − x_1 − x_2,
• y_3 = λ × (x_1 − x_3) − y_1,
• λ = (y_2 − y_1) × (x_2 − x_1)⁻¹ if P ≠ Q, and λ = (3x_1² + a) × (2y_1)⁻¹ if P = Q.

Finally, we define P + O = O + P = P for all P ∈ Σ, which leads to an abelian group (Σ, +). The multiplication n × P means P added to itself n times, and −P is the inverse of P for the group law + defined above, for all P ∈ Σ.

Public/Private Key Generation with ECC

In this section we show how to generate the public and private keys for encryption, following the cryptosystem proposed by Boneh et al. [15]. Let t > 0 be an integer called the security parameter. To generate the public and private keys, two t-bit prime numbers must first be computed.
Therefore, a cryptographic pseudorandom generator can be used to obtain two t-bit candidates q_1 and q_2, and a Miller–Rabin test can then be applied to check their primality. We denote by n the product of q_1 and q_2, n = q_1 × q_2, and by l the smallest positive integer such that p = l × n − 1 is a prime number with p ≡ 2 (mod 3). In order to build the private and public keys, we define a group H, the set of points of the supersingular elliptic curve y² = x³ + 1 defined over F_p. It consists of p + 1 = n × l points, and thus has a subgroup of order n, which we call G. In a further step, we compute two generators g and u of G and set h = q_2 × u. Then, following [16], the public key is given by (n, G, g, h) and the private key by q_1.

Encryption and Decryption

After the private/public key generation, we proceed to the encryption and decryption phases.

• Encryption. Assume that the message space consists of the integers in {0, 1, ..., T}, where T < q_2, and let m be the (integer) message to encrypt. First, a random integer r is picked from the interval [0, n − 1]. The ciphertext is then defined by C = m × g + r × h ∈ G, in which + and × refer to the group addition and scalar multiplication defined previously.

• Decryption. Once the message C arrives at its destination, it is decrypted using the private key q_1 and the discrete logarithm to base q_1 × g as follows: m = log_{q_1×g}(q_1 × C).

Homomorphic Properties

As mentioned before, our approach ensures easy encryption/decryption without any need for extra resources, as will be shown in the performance analysis. Moreover, our approach has homomorphic properties, which give us the ability to execute operations on values even though they have been encrypted: it allows N additions and one multiplication directly on cryptograms.
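With toy parameters the whole scheme fits in a short sketch. Assuming q_1 = 5 and q_2 = 7 (so n = 35), the smallest valid cofactor is l = 12, giving p = 419 and a curve y² = x³ + 1 with 420 = n × l points. The helper names (ec_add, point_of_order_n, ...) are ours, and the brute-force loop in dec stands in for the discrete-logarithm computation log_{q_1×g}(q_1 × C), feasible here only because the message space is tiny.

```python
# Toy Boneh-Goh-Nissim-style additive encryption over y^2 = x^3 + 1 (a = 0).
import random

Q1, Q2 = 5, 7                 # toy primes; q1 is the private key
N_ORD = Q1 * Q2               # n = 35
L_COF = 12                    # smallest l with p = l*n - 1 prime and p = 2 mod 3
P_MOD = L_COF * N_ORD - 1     # p = 419; the curve then has p + 1 = 420 points

O = None                      # point at infinity

def ec_add(p1, p2):
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                                      # P + (-P) = O
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):            # double-and-add scalar multiplication
    res, add = O, pt
    while k:
        if k & 1: res = ec_add(res, add)
        add = ec_add(add, add)
        k >>= 1
    return res

def point_of_order_n():
    while True:
        x = random.randrange(P_MOD)
        rhs = (x ** 3 + 1) % P_MOD
        if pow(rhs, (P_MOD - 1) // 2, P_MOD) != 1:
            continue                                  # not a quadratic residue
        y = pow(rhs, (P_MOD + 1) // 4, P_MOD)         # square root, p = 3 mod 4
        pt = ec_mul(L_COF, (x, y))                    # clear the cofactor l
        if ec_mul(Q1, pt) is not O and ec_mul(Q2, pt) is not O:
            return pt                                 # order exactly n = q1*q2

g, u = point_of_order_n(), point_of_order_n()
h = ec_mul(Q2, u)             # public key: (n, g, h); private key: q1

def enc(m):
    return ec_add(ec_mul(m, g), ec_mul(random.randrange(N_ORD), h))

def dec(c):
    # q1*C = m*(q1*g) since q1*h = n*u = O; recover m by small-range search.
    target, base = ec_mul(Q1, c), ec_mul(Q1, g)
    acc, m = O, 0
    while acc != target:
        acc, m = ec_add(acc, base), m + 1
    return m

assert dec(enc(4)) == 4
assert dec(ec_add(enc(2), enc(3))) == 5               # additive homomorphism
```

Adding two ciphertexts adds the underlying messages because the r × h blinding terms live in the order-q_1 subgroup killed by the private key, exactly as in the decryption formula above.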
As the product operation is not used in the profile matching, we do not detail it here. Addition over ciphertexts is done as follows: let m_1 and m_2 be two messages and C_1, C_2 their respective ciphertexts. Then the sum of C_1 and C_2, which we call C, is given by C = C_1 + C_2 + r × h, where r is an integer randomly chosen in [0, n − 1] and h = q_2 × u as presented in the previous section. This sum operation guarantees that the decryption of C is the sum m_1 + m_2.

The Modified Version of the BP Protocol

We rewrite the BP protocol with our novel cryptosystem, E now denoting the new encryption algorithm.

1. A computes and sends E_A(s′_1), ..., E_A(s′_n) to B.
2. B selects n random numbers r_1, ..., r_n and, for every i ∈ {1, ..., n}, computes E_A(−r_i) and adds it to the E_A(s′_i) received in the first step, thereby obtaining E_A(s′_i − r_i).
3. B generates a random permutation π_B and applies it to the sequence of E_A(s′_i − r_i)'s computed in the previous step, obtaining a sequence of the form E_A(v′_1), ..., E_A(v′_n) that he sends to A. He also applies π_B to the sequence s″_1 + r_1, ..., s″_n + r_n, obtaining a sequence v″_1, ..., v″_n. Note that the sequence v′_1 + v″_1, ..., v′_n + v″_n is a permuted version of S (permuted according to π_B).
4. A decrypts the n items E_A(v′_1), ..., E_A(v′_n) received from B, obtaining the sequence v′_1, ..., v′_n.

Performance Analysis

The experimental results presented in [13] compare the performance of RSA and ECC. For the same level of security, say level one, a device operating with RSA needs a key of 472 bits, while with ECC a key of only 46 bits suffices. In [12], the authors give a performance analysis of a cryptosystem based on Composite Degree Residuosity Classes (CDRC), which is the scheme used in the original BP algorithm. First, RSA is better than CDRC in terms of computational complexity: CDRC offers a security level equivalent to Class[n] while RSA is equivalent to RSA[n, F_4], and we have [12]

RSA[n, F_4] ⇒ Class[n].

On the other hand, for the same key size, CDRC requires 5120 elementary operations for encryption while RSA needs only 17 operations. All these results attest to the efficiency of ECC in terms of performance.

Conclusion and Future Work

A homomorphic encryption scheme that enhances the performance of the FindU algorithm has been proposed in this document. Achieving the PL-3 security level remains the main open problem. In future work, homomorphic encryption will be investigated further in order to solve this issue.

References

[1] R. Agrawal, A. Evfimievski, and R. Srikant, "Information sharing across private databases," Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, pp. 86-97, 2003.
[2] C. Hazay and Y. Lindell, "Efficient protocols for set intersection and pattern matching with security against malicious and covert adversaries," Proceedings of the 5th Conference on Theory of Cryptography, pp. 155-175, 2008.
[3] M. J. Freedman, K. Nissim, and B. Pinkas, "Efficient private matching and set intersection," EUROCRYPT, LNCS 3027, pp. 1-19, 2004.
[4] L. Kissner and D. Song, "Privacy-preserving set operations," Advances in Cryptology - CRYPTO 2005, LNCS 3621, pp. 241-257, 2005.
[5] Q. Ye, H. Wang, and J. Pieprzyk, "Distributed private matching and set operations," Information Security Practice and Experience, LNCS 4991, pp. 347-360, 2008.
[6] M. Zuckerberg, E. Saverin, D. Moskovitz, and C. Hughes, Facebook, 2012.
[7] S. Brin and L. Page, Google+, 2012.
[8] J. Williams and S. Raymond, Gowalla, 2012.
[9] D. Crowley and N. Selvadurai, Foursquare, 2012.
[10] M. Li, "User-centric security and privacy mechanisms in untrusted networking and computing environments," Worcester Polytechnic Institute, 2011.
[11] Y. Qi and M. J. Atallah, "Efficient privacy-preserving k-nearest neighbor search," Proceedings of the 28th International Conference on Distributed Computing Systems, pp. 311-319, 2008.
[12] P. Paillier, "Public-key cryptosystems based on composite degree residuosity classes," Proceedings of the 17th International Conference on Theory and Application of Cryptographic Techniques, pp. 223-233, 1999.
[13] J. Bahi, C. Guyeux, and A. Makhoul, "Secure data aggregation in wireless sensor networks: homomorphism versus watermarking approach," ADHOCNETS 2010, 2nd Int. Conf. on Ad Hoc Networks, pp. 344-358, 2010.
[14] R. C. C. Cheung, N. J. Telle, W. Luk, and P. Y. K. Cheung, "Secure encrypted-data aggregation for wireless sensor networks," pp. 1048-1059, 2005.
[15] D. Boneh, E.-J. Goh, and K. Nissim, "Evaluating 2-DNF formulas on ciphertexts," pp. 325-341, 2005.
[16] D. Hankerson, A. Menezes, and S. Vanstone, Guide to Elliptic Curve Cryptography, Springer, 2004.
Extraction of energy from an extremal rotating electrovacuum black hole: Particle collisions in the equatorial plane

Filip Hejda (Institute of Physics, CEICO, Czech Academy of Sciences, Na Slovance 1999/2, 182 21 Prague 8, Czech Republic; Centro de Astrofísica e Gravitação - CENTRA, Departamento de Física, Instituto Superior Técnico - IST, Universidade de Lisboa - UL, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal)
José P. S. Lemos (Centro de Astrofísica e Gravitação - CENTRA, Departamento de Física, Instituto Superior Técnico - IST, Universidade de Lisboa - UL, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal)
Oleg B. Zaslavskii (Department of Physics and Technology, Kharkov V. N. Karazin National University, 4 Svoboda Square, 61022 Kharkov, Ukraine; Institute of Mathematics and Mechanics, Kazan Federal University, 18 Kremlyovskaya Street, 420008 Kazan, Russia)
The collisional Penrose process received much attention when Bañados, Silk and West (BSW) pointed out the possibility of test-particle collisions with arbitrarily high center-of-mass energy in the vicinity of the horizon of an extremally rotating black hole. However, the energy that can be extracted from the black hole in this promising, if simplified, scenario, called the BSW effect, turned out to be subject to unconditional upper bounds. And although such bounds were not found for the electrostatic variant of the process, this version is also astrophysically unfeasible, since it requires a maximally charged black hole. In order to deal with these deficiencies, we revisit the unified version of the BSW effect concerning collisions of charged particles in the equatorial plane of a rotating electrovacuum black hole spacetime. Performing a general analysis of energy extraction through this process, we explain in detail how the seemingly incompatible limiting cases arise. Furthermore, we demonstrate that the unconditional upper bounds on the extracted energy are absent for arbitrarily small values of the black hole electric charge. Therefore, our setup represents an intriguing simplified model for possible highly energetic processes happening around astrophysical black holes, which may spin fast but can have only a tiny electric charge induced via interaction with an external magnetic field.

I. INTRODUCTION

Penrose [1] proposed a mechanism to extract energy from a rotating vacuum black hole through a test particle disintegration in its vicinity; one fragment can escape with more energy than the energy the original particle had, if the other fragment falls inside the black hole and reduces slightly its angular momentum [2]. However, serious doubts about the practical relevance of this original variant of the Penrose process were raised early on [3, 4].
A major obstacle is the fact that the fragments need to have relative velocity of more than half the speed of light, which is very restrictive. Nevertheless, such issues can be resolved by considering more general variants of the process. A key ingredient for one of the remedies was provided by Wald [5], who realized that a rotating black hole in an external magnetic field can become charged due to selective charge accretion. Using Wald's weak-field solution as background, particle disintegration into oppositely charged fragments was considered, and it was shown that the requirement of high relative velocity can be circumvented [6]. This generalization, or revival, as the authors put it, of the Penrose process can be also understood as a crossover between its original variant and its electrostatic version described for nonrotating black holes [7].

Another way of fixing the shortcomings of the original Penrose process is to consider particle collisions instead of decays. The required high relative velocity of the final particles can then arise naturally as a result of a high-energy collision. Interestingly, it has been noted that the relative Lorentz factor can in fact diverge in some cases, if the collision point is taken toward the horizon. In particular, this happens for a collision between an orbiting and an infalling particle in the case of an extremal black hole [8] and also for a collision between a radially outgoing and a radially incoming particle [9]. Neither of these options seems very realistic, as both involve particles confined to the vicinity of the horizon. However, whereas the latter one generically requires a white hole horizon as explained in [10], the former one has viable variants. Notably, Bañados, Silk, and West discovered its modification with both particles coming from rest at infinity [11].
This BSW effect requires a fine-tuned particle, called the critical particle, which can only asymptotically approach the horizon radius, as if approaching an orbit (see, e.g., the discussion in Section IV B in [12]). For more types of near-horizon high-energy collisional processes involving orbiting particles, see, e.g., [13, 14]. A broad overview of the collisional Penrose process and the BSW effect covering many additional aspects can be found in the work of Schnittman [15].

Since the BSW effect has been derived in the test particle approximation and it relies on fine tuning and extremality of the black hole, there has been an actual concern that it may get suppressed in more realistic circumstances [16] (see also [17] for a review). But, surprisingly, it turned out that the energy extraction is quite unsatisfactory even with all the simplifying assumptions in place. Namely, it has been established almost simultaneously both by numerical [18] and analytical means [19] (see also [20]) that there is an unconditional upper bound on the extracted energy despite the center-of-mass collision energy being unbounded. Remarkably enough, Schnittman [21] discovered a scenario that is more favorable for energy extraction than the BSW effect with its precise fine tuning. A nearly critical particle, i.e., a particle with imperfect fine tuning, can turn from incoming to outgoing motion in the radial direction before colliding with another particle in the vicinity of the black hole, which is advantageous. Nevertheless, as further clarified by additional analytical studies [22, 23], the enhancement only consists in replacing one unconditional upper bound on the extracted energy with another, higher one. Let us note that for vacuum spacetimes, such limitations can be overcome, if one considers more general objects than black holes, e.g., naked singularities, see [24, 25].
For different ways to examine the original BSW effect with improved realism, see [26, 27].

Similarly to the original Penrose process, an electrostatic variant of the BSW effect exists for maximally charged, and so nonrotating, black holes [28]; it requires fine-tuned charged particles. Surprisingly, no unconditional upper bounds on the extracted energy were found in this case [29]. Given this, it is natural to ask whether similar results can be obtained in a more realistic situation with arbitrarily small black hole charge. One such possibility is the simple case of charged particles moving along the axis of symmetry of a rotating electrovacuum black hole, which was considered in [30]. Although it was confirmed that there is no upper bound on the extracted energy regardless of how small the black hole charge might be, several caveats were found to make this setup unfeasible for microscopic particles. This motivates us to turn to the more complicated case of collisions of charged particles in the equatorial plane of a rotating electrovacuum black hole. Such a crossover between the original version and the electrostatic variant of the BSW effect has been considered in [12], yet concerning only what happens before the particle collision, i.e., the approach phase of the process. In the present paper, we shall study energy extraction in this setup; let us emphasize that the key innovation in our discussion here consists in taking into account the simultaneous influence of rotation and electric charge. Our main purpose is to show that in this case there is no unconditional upper bound on the extracted energy whenever both the black hole and the escaping particles are charged. In our analysis, we draw on some additional works [31-35].

In the context of this paper, let us mention that black holes with nonnegligible electric charge have recently seen renewed interest, as they can play a role in the mechanism behind fast radio bursts (FRBs).
For example, it was suggested that a merger of black holes, at least one of which has enough electric charge, can produce FRBs due to the rapidly changing magnetic dipole moment [36]. A further study [37] investigated the possible role of magnetospheric instability in producing FRBs, both for isolated Kerr-Newman black holes and for binaries. Yet another, more conventional model describes how an FRB can result from a prompt discharge of a metastable collapsed state of a Kerr-Newman black hole [38]. Formation of Kerr-Newman black holes through collapse of rotating and magnetized neutron stars has been systematically studied in [39].

The paper is organized as follows. In Sec. II, we describe the properties of motion of charged test particles around an electrovacuum black hole and classify the types of motion near an extremal horizon. We also give formulas for the collision energy in the center-of-mass frame, which diverges in the horizon limit when one of the particles is critical. In Sec. III, we discuss restrictions on the parameters of critical particles that can be involved in near-horizon high-energy collisions. In particular, we determine possible bounds on these parameters. In Sec. IV, we perform a full analysis of energy extraction. We consider different kinematic regimes, in which particles can be produced in near-horizon high-energy collisions, and determine which ones allow the particles to escape. Then, we study bounds on the parameters of the escaping particles; we put emphasis on situations in which the energy of escaping particles is not bounded. The results are derived using a general metric form, which makes them valid also for dirty black holes, i.e., those surrounded by matter. Additionally, we explain how the previously known limiting cases can be derived from the general case. In Sec. V, we apply the general results to the Kerr-Newman solution so that we can highlight the whole method using relevant figures. In Sec. VI, we conclude.
DOI: 10.1103/PhysRevD.105.024014
arXiv: 2109.04477 [gr-qc]
PDF: https://arxiv.org/pdf/2109.04477v2.pdf
arXiv:2109.04477v2 [gr-qc], 27 Feb 2022
Remarkably enough, Schnittman [21] discovered a scenario that is more favorable for energy extraction than the BSW effect with its precise fine tuning. A nearly critical particle, i.e, a particle with imperfect fine tuning, can turn from incoming to outgoing motion in the radial direction before colliding with another particle in the vicinity of the black hole, which is advantageous. Nevertheless, as further clarified by additional analytical studies[22,23], the enhancement only consists in replacing one unconditional upper bound on the extracted energy with another, higher one. Let us note that for vacuum spacetimes, such limitations can be overcome, if one considers more general objects than black holes, e.g., naked singularities, see[24,25]. For different ways to examine the original BSW effect with improved realism, see[26,27].Similarly to the original Penrose process, an electrostatic variant of the BSW effect exists for maximally charged, and so nonrotating, black holes[28]; it requires fine-tuned charged particles. Surprisingly, no unconditional upper bounds on the extracted energy were found in this case[29]. Given this, it is natural to ask whether similar results can be obtained in a more realistic situation with arbitrarily small black hole charge. One such possibility is the simple case of charged particles moving along the axis of symmetry of a rotating electrovacuum black hole, which was considered in[30]. Although it was confirmed that there is no upper bound on the extracted energy regardless of how small the black hole charge might be, several caveats were found to make this setup unfeasible for microscopic particles. This motivates us to turn to the more complicated case of collisions of charged particles in the equatorial plane of a rotating electrovacuum black hole. 
Such a crossover between the original version and the electrostatic variant of the BSW effect has been considered in[12], yet concerning only what happens before the particle collision, i.e., the approach phase of the process. In the present paper, we shall study energy extraction in this setup; let us emphasize that the key innovation in our discussion here consists in taking into account the simultaneous influence of rotation and electric charge. Our main purpose is to show that in this case there is no unconditional upper bound on the extracted energy whenever both the black hole and the escaping particles are charged. In our analysis, we draw on some additional works[31][32][33][34][35].In the context of this paper, let us mention that black holes with nonnegligible electric charge have recently seen renewed interest, as they can play a role in the mechanism behind fast radio bursts (FRBs). For example, it was suggested that a merger of black holes, at least one of which has enough electric charge, can produce FRBs due to the rapidly changing magnetic dipole moment[36]. A further study[37]investigated the possible role of magnetospheric instability in producing FRBs, both for isolated Kerr-Newman black holes and for binaries. Yet another, more conventional model describes how a FRB can result from a prompt discharge of a metastable collapsed state of a Kerr-Newman black hole[38]. Formation of Kerr-Newman black holes through collapse of rotating and magnetized neutron stars has been systematically studied in[39].The paper is organized as follows. In Sec. II, we describe the properties of motion of charged test particles around an electrovacuum black hole and classify the types of motion near an extremal horizon. We also give formulas for the collision energy in the center-of-mass frame, which diverges in the horizon limit when one of the particles is critical. In Sec. 
III, we discuss restrictions on the parameters of critical particles that can be involved in near-horizon high-energy collisions. In particular, we determine possible bounds on these parameters. In Sec. IV, we perform a full analysis for energy extraction. We consider different kinematic regimes, in which particles can be produced in near-horizon high-energy collisions, and determine which ones allow the particles to escape. Then, we study bounds on parameters of the escaping particles; we put emphasis on situations in which the energy of escaping particles is not bounded. The results are derived using a general metric form, which makes them valid also for dirty black holes, i.e., those surrounded by matter. Additionally, we explain how the previously known limiting cases can be derived from the general case. In Sec. V, we apply the general results to the Kerr-Newman solution so that we can highlight the whole method using relevant figures. In Sec. VI, we conclude. II. MOTION AND COLLISIONS OF CHARGED TEST PARTICLES A. Spacetime metric and electromagnetic potential We shall consider a general stationary, axially symmetric spacetime representing an isolated black hole, with metric g in coordinates (t, ϕ, r, ϑ) given by g = −N 2 dt 2 + g ϕϕ (dϕ − ω dt) 2 + g rr dr 2 + g ϑϑ dϑ 2 . (1) Here, N 2 is the lapse function; g ϕϕ , g rr , g ϑϑ are the respective metric potentials; and ω is the dragging potential. We assume g ϕϕ > 0, and also that the product N √ g rr > 0 is finite and nonvanishing even for N → 0. Let us further assume that our spacetime is permeated by a Maxwell field obeying the same symmetry as the metric (1). We fix the gauge for its potential A to manifest this symmetry, namely, A = A t dt + A ϕ dϕ, or rearranging, A = −φ dt + A ϕ (dϕ − ω dt) .(2) The component φ = −A t − ωA ϕ(3) is called the generalized electrostatic potential. B. 
General equations of equatorial motion

Let us now consider the motion of test particles with rest mass $m$ and electric charge $q$ in the spacetime defined in Eq. (1). Because of the two symmetries that we assumed, there exist two quantities that are conserved during the electrogeodesic motion. They are the energy $E$ and the axial angular momentum $L_z$ of the test particle. We also assume that the metric (1) and the electromagnetic field are symmetric with respect to the reflections $\vartheta \to \pi - \vartheta$. Then, we can consider motion confined to the invariant hypersurface $\vartheta = \frac{\pi}{2}$, the equatorial plane. For equatorial particles, $L_z$ is the total angular momentum; hence, we can drop the subscript and write $L \equiv L_z$. The energy and the axial angular momentum are given by
$$E = -p_t - qA_t\,, \qquad L = p_\varphi + qA_\varphi\,, \tag{4}$$
where $p_t$ and $p_\varphi$ are the time and azimuthal components of the particle's 4-momentum $p^\alpha$. Defining two auxiliary functions $X$ and $Z$ by
$$X = E - \omega L - q\phi\,, \qquad Z = \sqrt{X^2 - N^2\left[m^2 + \frac{(L - qA_\varphi)^2}{g_{\varphi\varphi}}\right]}\,, \tag{5}$$
we can write the contravariant components of the particle's 4-momentum $p^\alpha$ in a compact form:
$$p^t = \frac{X}{N^2}\,, \qquad p^\varphi = \frac{\omega X}{N^2} + \frac{L - qA_\varphi}{g_{\varphi\varphi}}\,, \qquad p^r = \frac{\sigma Z}{N\sqrt{g_{rr}}}\,. \tag{6}$$
The parameter $\sigma$ has values $\sigma = \pm 1$, which determine the direction of the radial motion. In order for the motion to be allowed, the quantity $Z$ has to be real. Outside of the black hole, where $N^2 > 0$, the condition $Z^2 > 0$ can be equivalently stated as $|X| \geqslant N\sqrt{m^2 + (L - qA_\varphi)^2/g_{\varphi\varphi}}$. It can be seen that there are two disjoint domains of allowed motion, one with $X > 0$ and the other with $X < 0$. These two domains touch for $N \to 0$, where $X \to 0$ becomes possible. However, to preserve causality, we need to enforce $p^t > 0$, and thus we restrict to the $X > 0$ domain. Then, the requirement for the motion to be allowed becomes
$$X \geqslant N\sqrt{m^2 + \frac{(L - qA_\varphi)^2}{g_{\varphi\varphi}}}\,. \tag{7}$$
The lower bound, i.e., the equality in Eq. (7), $X = N\sqrt{m^2 + (L - qA_\varphi)^2/g_{\varphi\varphi}}$, is the condition for a turning point.
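Since the paper specializes the general metric form (1) to the Kerr-Newman solution only in Sec. V, it may help to see a concrete instance already here. The following is a minimal numeric sketch, assuming the standard Boyer-Lindquist expressions for the equatorial Kerr-Newman metric and potential; the function name and the parameter values are illustrative, not taken from the paper.

```python
import math

# Equatorial (theta = pi/2) Kerr-Newman metric functions in the form of
# Eq. (1), g = -N^2 dt^2 + g_pp (dphi - omega dt)^2 + ..., together with
# the generalized electrostatic potential phi of Eq. (3).
def kn_equatorial(M, a, Q, r):
    Delta = r * r - 2 * M * r + a * a + Q * Q
    Sigma = r * r                              # Sigma = r^2 on the equator
    A = (r * r + a * a) ** 2 - Delta * a * a   # sin^2(theta) = 1
    N2 = Delta * Sigma / A                     # lapse squared
    g_pp = A / Sigma                           # g_{phi phi}
    omega = a * (2 * M * r - Q * Q) / A        # dragging potential
    A_t = -Q * r / Sigma                       # potential components, Eq. (2)
    A_p = Q * a * r / Sigma
    phi = -A_t - omega * A_p                   # Eq. (3)
    return N2, g_pp, omega, phi

# Extremal example, M^2 = a^2 + Q^2; the degenerate horizon sits at r_H = M
M, a, Q = 1.0, 0.6, 0.8
r_H = M
N2, g_pp, omega_H, phi_H = kn_equatorial(M, a, Q, r_H)
print(N2)        # vanishes at the horizon
print(omega_H)   # equals a/(r_H^2 + a^2)
print(phi_H)     # equals Q*r_H/(r_H^2 + a^2)
```

On the horizon the lapse vanishes while the dragging and electrostatic potentials stay finite, which is the structure the near-horizon expansions below rely on.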
The number of relevant parameters can be reduced depending on whether the particle in question is massive or massless. Kinematics of massive particles is determined by three parameters: specific energy ε ≡ E m , specific angular momentum l ≡ L m , and specific chargeq ≡ q m . Kinematics of massless particles, which are electrically neutral, e.g., photons, is characterized solely by the impact parameter defined as b ≡ L E . Based on the distinction between massive and massless, additional features of the motion, like the existence of circular orbits, can be deduced. Effective potentials are frequently employed, both for massive particles and for massless particles (see [12] and references therein for the massive particle case). C. Near-horizon expansions We wish to study collisions of particles near the black hole horizon, where N → 0. Let us denote the values of the various quantities on the outer black hole horizon by a subscript or a superscript H depending on the convenience. As we consider solely equatorial motion, all quantities in the following are understood to be evaluated at ϑ = π 2 , which will not be marked explicitly for brevity. For example, by A H ϕ , we mean the value of A ϕ on the horizon at ϑ = π 2 . In the vicinity of the horizon, we can perform expansions in variable (r − r H ) ≪ r H . We are interested in extremal black holes; the horizon located at r H is understood to be degenerate hereafter. Let us expand the dragging potential ω and the generalized electrostatic potential φ in first order as follows, ω = ω H +ω (r − r H ) + . . . , φ = φ H +φ (r − r H ) + . . . ,(8) respectively, and whereω = ∂ ω ∂r r=rH andφ = ∂ φ ∂r r=rH . 
For extremal black holes, we can also renormalize the lapse function as N 2 = (r − r H ) 2 N 2 , which leads, in particular, to N 2 H = 1 2 ∂ 2 N 2 ∂r 2 r=rH .(9) Finally, let us introduce a new set of constants X H , χ, and λ, which are preserved throughout the motion and useful to describe the kinematics of particles close to r H . They are defined in terms of E, L, q as follows: X H = E − ω H L − qφ H , χ = −ωL − qφ , λ ≡ p H ϕ = L − qA H ϕ .(10) Note that the parameter λ is now defined in a slightly different way than in the previous paper [12] on charged particle collisions. The formulas given here can be recast into the convention used in [12] by putting λ → −mλA H ϕ . Two of the new parameters, namely, X H and χ, are expansion coefficients of the function X given in Eq. (5), i.e., X ≈ X H + χ (r − r H ) + . . .(11) The parameters E, L, q can be expressed in terms of the new ones through inverse relations E = X H + ω Hφ −ωφ H λ + χA H t φ +ωA H ϕ , L =φ λ − χA H φ φ +ωA H ϕ , q = −ω λ + χ φ +ωA H ϕ ,(12) which all contain the same expression in the denominator. Hence, when it vanishes, i.e., φ +ωA H ϕ = 0 ,(13) there is clearly a problem with the definitions given in Eq. (10). Indeed, if Eq. (13) holds, χ and λ become proportional to each other, χ = −ωλ, and thus the variables X H , χ, λ no longer span the whole parameter space. When this degeneracy happens, we can use X H , λ, q as our alternative set of parameters. Then, the inverse relations to express E, L become E = X H + ω H λ − qA H t L = λ + qA H ϕ .(14) The behavior of particles close to the horizon radius r H depends significantly on the value of X H . For particles with X H < 0, the condition (7) is necessarily violated near the horizon, and thus the particles cannot get arbitrarily close to r H . On the other hand, particles with X H 0 can exist arbitrarily close to r H . Let us discuss these types now. D. 
Types of particles close to rH Usual (subcritical) particles Particles with X H > 0 are bound to fall into the black hole if they move inward and get near the horizon. In our discussion, we will refer to those particles as usual particles. Let us emphasize that we will not consider outgoing usual particles in the vicinity of r H , since it can be shown that such particles cannot be produced in (generic) nearhorizon collisions; see [31]. In our analysis, we exclude the white hole region from which outgoing usual particles could naturally emerge. For usual particles approaching r H , the function Z of Eq. (5) can be expanded in terms of N 2 , and consequently of X , as follows: Z ≈ X − N 2 2X m 2 + (L − qA ϕ ) 2 g ϕϕ + . . .(15) Critical particles, especially class I critical particles We can also consider particles with X H = 0, which are called critical. They are fine-tuned to be on the verge between not being able to reach the horizon and falling into the black hole. Here, we use the local notion of critical particles. For asymptotically flat spacetimes, it is also possible to define the critical particles globally, such that they are on the edge of being able to approach the horizon from infinity; see [11]. By the definition given in Eq. (10), condition X H = 0 can be understood also as a constraint for parameters E, L, q: E − ω H L − qφ H = 0 .(16) The expansion around r H of the function Z introduced in Eq. (5) looks rather different for critical particles, Z ≈ χ 2 − N 2 H m 2 + λ 2 g H ϕϕ (r − r H ) + . . .(17) Let us emphasize that with X H = 0 the causality condition p t > 0 necessarily implies χ > 0. It can be shown that critical particles cannot approach the horizon unless the black hole is extremal (see, e.g., [12,32] and references therein). Harada and Kimura [34] distinguished several subtypes of critical particles, out of which we consider chiefly the class I critical particles. 
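The parameter maps above can be sanity-checked numerically: Eq. (10) sends $(E, L, q)$ to $(X_H, \chi, \lambda)$, and Eq. (12) inverts it. The sketch below verifies that the two maps compose to the identity; all horizon data (the potentials, their radial derivatives, and $A_\varphi^H$) are arbitrary illustrative numbers, not a specific black hole.

```python
# Illustrative horizon data; domega and dphi stand for the radial
# derivatives of omega and phi at r_H (the "tilde" quantities of Eq. (8)).
omega_H, phi_H = 0.4, 0.3
domega, dphi = -0.2, 0.1
A_pH = 0.3
A_tH = -phi_H - omega_H * A_pH   # from Eq. (3) evaluated at r_H

E, L, q = 1.2, 0.7, 0.25         # illustrative particle parameters

# forward map, Eq. (10)
X_H = E - omega_H * L - q * phi_H
chi = -domega * L - q * dphi
lam = L - q * A_pH

# inverse map, Eq. (12); common denominator dphi + domega * A_phi^H
den = dphi + domega * A_pH
E2 = X_H + ((omega_H * dphi - domega * phi_H) * lam + chi * A_tH) / den
L2 = (dphi * lam - chi * A_pH) / den
q2 = -(domega * lam + chi) / den
print(E2, L2, q2)   # recovers (E, L, q)
```

Note that the check breaks down exactly when the common denominator vanishes, which is the degenerate case of Eq. (13) discussed in the text.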
The approximate trajectory of an incoming class I critical particle near $r_H$ has the form $r = r_H\left[1 + \exp\left(-\tau/\tau_{\rm relax}\right)\right]$, where $\tau$ is the proper time and $\tau_{\rm relax}$ is a positive constant; see [12] for details. Since critical particles of any type can never reach $r_H$, any collisional process involving them will thus happen at some radius $r_C > r_H$. Therefore, it makes sense to consider particles that behave approximately as critical at a given radius.

Nearly critical particles

A particle will behave approximately as critical at a radius $r_C$, if the zeroth-order term in the expansion of $X$ is of comparable magnitude to the first-order term. To quantify this, let us define a formal expansion,
$$X_H \approx -C\,(r_C - r_H) - D\,(r_C - r_H)^2 + \ldots\,, \tag{18}$$
where the minus sign in front of the terms follows the usual convention. $C$, $D$, and so on, are constants that are needed for consistency of the momentum conservation law at each expansion order. However, here we are interested only in the first order, and so the constant $C$ is enough for our purposes. For nearly critical particles, the expansion (11) evaluated at $r_C$ can be recast using Eq. (18) as
$$X \approx (\chi - C)(r_C - r_H) + \ldots \tag{19}$$
Similarly, the expansion of the function $Z$ defined in Eq. (5) reads for them
$$Z \approx \sqrt{(\chi - C)^2 - N_H^2\left(m^2 + \frac{\lambda^2}{g^H_{\varphi\varphi}}\right)}\,(r_C - r_H) + \ldots \tag{20}$$
Nearly critical particles with $C > 0$ cannot fall into the black hole, and they must have a turning point at some radius smaller than $r_C$. Therefore, it makes sense to study collisional processes near the horizon involving also outgoing nearly critical particles. Furthermore, for particles with $\chi \gg C > 0$, we can neglect $C$ and treat them as precisely critical around $r_C$. Thus, we can consider outgoing critical particles, too.

Class II critical particles and class II nearly critical particles

There exist values of parameters of critical particles or nearly critical particles, for which the leading-order coefficient in the expansion (17) (or (20)) vanishes.
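The leading-order coefficient in the expansion (17) can be verified numerically on a concrete background. The sketch below assumes the extremal equatorial Kerr-Newman functions in standard Boyer-Lindquist form (the paper specializes to Kerr-Newman only in Sec. V) and computes the radial derivatives at the horizon by finite differences; the particle parameters are illustrative.

```python
import math

# Check of expansion (17) for a critical particle:
# Z ~ sqrt(chi^2 - N_H^2 (m^2 + lambda^2/g_pp^H)) * (r - r_H).
M, a, Q = 1.0, 0.6, 0.8        # extremal: M^2 = a^2 + Q^2
r_H = M

def funcs(r):
    Delta = (r - M) ** 2       # extremal Delta
    A = (r * r + a * a) ** 2 - Delta * a * a
    omega = a * (2 * M * r - Q * Q) / A
    return (Delta * r * r / A,           # N^2
            A / (r * r),                 # g_{phi phi}
            omega,                       # dragging potential
            Q / r - omega * Q * a / r,   # generalized potential phi, Eq. (3)
            Q * a / r)                   # A_phi

m, L, q = 1.0, 2.0, 0.1        # illustrative particle data
_, g_ppH, omega_H, phi_H, A_pH = funcs(r_H)
E = omega_H * L + q * phi_H    # criticality condition (16): X_H = 0
lam = L - q * A_pH

h = 1e-6                       # radial derivatives at r_H, central differences
domega = (funcs(r_H + h)[2] - funcs(r_H - h)[2]) / (2 * h)
dphi = (funcs(r_H + h)[3] - funcs(r_H - h)[3]) / (2 * h)
chi = -domega * L - q * dphi                # > 0 for this choice: admissible
NH2 = r_H**2 / (r_H**2 + a**2) ** 2         # renormalized lapse, Eq. (9)
coef = math.sqrt(chi**2 - NH2 * (m**2 + lam**2 / g_ppH))

r = r_H + 1e-5                 # evaluate the exact Z of Eq. (5) near r_H
N2, g_pp, omega, phi, A_p = funcs(r)
X = E - omega * L - q * phi
Z = math.sqrt(X * X - N2 * (m * m + (L - q * A_p) ** 2 / g_pp))
print(Z / (r - r_H), coef)     # agree to leading order in r - r_H
```

The ratio $Z/(r - r_H)$ approaches the coefficient of Eq. (17) as the evaluation point is moved toward $r_H$, illustrating the asymptotic approach of a critical particle to the horizon.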
The new leading order then becomes Z ∼ (r C − r H ) 3 2(21) or higher. These are the so-called class II critical particles or nearly critical particles. Kinematics of class II critical particles represents an interesting theoretical issue, which, however, involves technical complications; cf. [12]. Moreover, since class II critical particles require fine tuning of two parameters, instead of just one, they are much less important for practical considerations. Thus, we will mostly omit details regarding class II critical particles in the following. E. BSW effect and its Schnittman variant We have seen that in the near-horizon region of an extremal black hole two distinct types of motion do coexist. Whereas usual particles with X H > 0 cross r H and fall into the black hole, critical particles with X H = 0 can only approach r H asymptotically. This leads to a divergent relative Lorentz factor due to the relative velocity between the two types of motion approaching the speed of light. Hence, the expression for the collision energy in the center-of-mass frame, E 2 CM = m 2 1 + m 2 2 − 2g µν p µ 1 p ν 2 ,(22) will be dominated by the scalar-product term, if we consider near-horizon collisions between critical and usual particles. In particular, inserting (6) for a critical particle labeled 1 and for a usual particle labeled 2 and using Eqs. (11), (15), and (17), we find that the leading-order contribution of Eq. (22) is E 2 CM ≈ X H 2 r C − r H 2 N 2 H χ 1 + σ 1 χ 2 1 − N 2 H m 2 1 + λ 2 1 g H ϕϕ ,(23) see [12] for details. A process with an incoming critical particle (σ 1 = −1) is called BSW type after Bañados, Silk, and West [11], whereas the one with reflected (nearly) critical particle (σ 1 = +1) is called Schnittman type [21]. Note that the usual particle is always incoming, i.e., σ 2 = −1. We used the aforementioned approximation χ 1 ≫ C 1 > 0 for the Schnittman process. III. KINEMATICS OF PARTICLES BEFORE COLLISION A. 
Admissible region in the parameter space

General considerations

Critical particles are the key ingredient of certain high-energy collisional processes in extremal black hole spacetimes. Nevertheless, the parameters of critical particles that can act in such processes are restricted, since the requirement of Eq. (7) must be fulfilled all the way from the point of inception to the point of collision. Let us disregard the concern about where the critical particle originated and focus instead on the point of collision at radius $r_C$. Since we want $r_C$ very close to $r_H$, the minimum requirement is that there must be some neighborhood of $r_H$ where condition (7) is satisfied. Using a linear approximation in $r - r_H$, we get the following inequality,
$$\chi > N_H\sqrt{m^2 + \frac{\lambda^2}{g^H_{\varphi\varphi}}}\,, \tag{24}$$
which defines the admissible region of parameters. Conversely, for parameters satisfying the inequality opposite to Eq. (24), condition (7) will be violated in some neighborhood of $r_H$. Now, the equality
$$\chi = N_H\sqrt{m^2 + \frac{\lambda^2}{g^H_{\varphi\varphi}}} \tag{25}$$
corresponds to the breakdown of the linear approximation of Eq. (7). Comparing with Eq. (17), we see that Eq. (25) also implies the critical particles to be class II. Higher-order expansion terms are needed to decide whether motion of class II critical particles is allowed close to $r_H$. (We note that such higher-order kinematic restrictions were worked out in Sections IV E and V B in [12] and that additional information on this subject can be found in Sec. II D and footnote 2 in [30] and in Sec. VII in [33].)

Now, let us consider the physical interpretation of the admissible region of parameters. In particular, we would like to distinguish different variants of the collisional processes corresponding to the previously known limiting cases. For extremal vacuum black holes, only critical particles corotating with the black hole can participate in the high-energy collisions, whereas for the nonrotating extremal black holes, the critical particles need to have the same sign of charge as the black hole. In order to identify counterparts of these limiting variants, which we will call the centrifugal mechanism and the electrostatic mechanism, we need to assess how to define the direction in which a charged particle orbits.

The momentum component $p_\varphi$ determines the direction of motion in $\varphi$ with respect to a locally nonrotating observer (cf. [3]). For uncharged particles, $p_\varphi = L$ is constant, and thus the distinction is universal and unambiguous. Nevertheless, for charged particles, $p_\varphi$ depends on $r$ through the $qA_\varphi$ term. Therefore, we essentially need to compare values of $p_\varphi$ at some reference radius. A first straightforward choice would be to use $\lambda \equiv p^H_\varphi$; see Eq. (10). However, it is clear from Eq. (25) that one can find points with any value of $\lambda$ in the admissible region (whereas values of $\chi$ in the admissible region are bounded from below by $\chi > N_H m$). Apart from the degenerate case (13), no kinematic restriction on $\lambda$ is thus possible. Hence, basing the definition of the centrifugal mechanism on $\lambda$ would lead to a trivial result. A second possible choice is $L$. What is the justification to use $L$? We shall consider a region of our spacetime where the influence of the dragging and of the magnetic field is insignificant, for example, a far zone of an asymptotically flat spacetime. More precisely, let us consider a region where $\omega$ and $A_\varphi$ are negligible, and thus $p_\varphi \approx p^\varphi g_{\varphi\varphi} \approx L$. Then, it readily follows that in such a region particles with $L = 0$ move along trajectories of (approximately) constant $\varphi$ and, conversely, particles with different signs of $L$ orbit in different directions therein.
Hence, we can say that L uniquely distinguishes the direction of motion in ϕ of a particle before it came under the influence of the dragging and of the magnetic field near the black hole. We conclude that we need to view the admissible region through the parameters L and q for physical interpretation. Similarly to [12], let us focus on Eq. (25) of the border of the admissible region. Substituting relations (10) for χ and λ into Eq. (25) does not generally lead to a single-valued functional dependence between q, L. We circumvent this issue by plugging the condition (25) into relations (12), which yields parametric expressions for the border: q = −ω λ + N H m 2 + λ 2 g H ϕφ φ +ωA H ϕ (26) L =φ λ − N H A H ϕ m 2 + λ 2 g H ϕφ φ +ωA H ϕ (27) E = ω Hφ −ωφ H λ + N H A H t m 2 + λ 2 g H ϕφ φ +ωA H ϕ .(28) Since we are dealing with critical particles, the three expressions are not independent. In [12], different possibilities were distinguished by studying restrictions on signs of q and L in the admissible region; in particular, the centrifugal mechanism was identified as the case when only the sign of L is restricted, and the electrostatic mechanism was identified as the case when only the sign of q is restricted. Here, we employ a complementary, deeper approach and determine the precise bounds on q, L, and E. Bounds on parameters Bounds on values of q, L, and E in the admissible region will appear as extrema of expressions (26)-(28) with respect to λ. Let us start with the electric charge q. From Eq. (26), we find that a stationary point can occur at the following value of λ: λ = −m g H ϕϕω N 2 H − g H ϕϕω 2 .(29) Due to the square root in the denominator, we need to distinguish three possibilities. Case 1a: If Eq. (29) is imaginary, Eq. (26) will take all real values, and hence there is no bound on q. Case 1b: If Eq. (29) is real, it will correspond to an extremum of Eq. (26) with value q b = − m φ +ωA H ϕ N 2 H − g H ϕϕω 2 ,(30) which serves as a bound for q. For Eqs. 
(27) and (28), the following values of $L$ and $E$ will be implied by Eq. (29):
$$L = -\frac{m}{\tilde\phi + \tilde\omega A^H_\varphi}\,\frac{N_H^2 A^H_\varphi + g^H_{\varphi\varphi}\tilde\omega\tilde\phi}{\sqrt{N_H^2 - g^H_{\varphi\varphi}\tilde\omega^2}}\,, \tag{31}$$
$$E = \frac{m}{\tilde\phi + \tilde\omega A^H_\varphi}\,\frac{N_H^2 A^H_t - g^H_{\varphi\varphi}\tilde\omega\left(\omega_H\tilde\phi - \tilde\omega\phi_H\right)}{\sqrt{N_H^2 - g^H_{\varphi\varphi}\tilde\omega^2}}\,. \tag{32}$$
Looking at the $|\lambda| \to \infty$ behavior of Eq. (26), one can deduce that Eq. (30) is a lower bound, if
$$\tilde\phi + \tilde\omega A^H_\varphi < 0\,. \tag{33}$$
When the opposite inequality is satisfied, Eq. (30) is an upper bound. Case 1c: If Eq. (29) is undefined due to the expression in the denominator being zero, the values of $q$ in the admissible region will be bounded by $q_b = 0$, and this value cannot be reached for a finite value of other parameters on the border.

Let us continue with the angular momentum $L$. From Eq. (27), we find that a value of $\lambda$ for a candidate stationary point is
$$\lambda = \frac{m\,g^H_{\varphi\varphi}\tilde\phi\,\operatorname{sgn}A^H_\varphi}{\sqrt{N_H^2\left(A^H_\varphi\right)^2 - g^H_{\varphi\varphi}\tilde\phi^2}}\,. \tag{34}$$
Case 2a: If Eq. (34) is imaginary, Eq. (27) will take all real values, and hence there is no bound on $L$. Case 2b: If Eq. (34) is real, it will correspond to an extremum of Eq. (27) with value
$$L_b = -\frac{m\,\operatorname{sgn}A^H_\varphi}{\tilde\phi + \tilde\omega A^H_\varphi}\sqrt{N_H^2\left(A^H_\varphi\right)^2 - g^H_{\varphi\varphi}\tilde\phi^2}\,, \tag{35}$$
which serves as a bound for $L$. For Eqs. (26) and (28), the following values of $q$ and $E$ will be implied by Eq. (34):
$$q = -\frac{m\,\operatorname{sgn}A^H_\varphi}{\tilde\phi + \tilde\omega A^H_\varphi}\,\frac{N_H^2 A^H_\varphi + g^H_{\varphi\varphi}\tilde\omega\tilde\phi}{\sqrt{N_H^2\left(A^H_\varphi\right)^2 - g^H_{\varphi\varphi}\tilde\phi^2}}\,, \tag{36}$$
$$E = \frac{m\,\operatorname{sgn}A^H_\varphi}{\tilde\phi + \tilde\omega A^H_\varphi}\,\frac{N_H^2 A^H_t A^H_\varphi + g^H_{\varphi\varphi}\tilde\phi\left(\omega_H\tilde\phi - \tilde\omega\phi_H\right)}{\sqrt{N_H^2\left(A^H_\varphi\right)^2 - g^H_{\varphi\varphi}\tilde\phi^2}}\,. \tag{37}$$
From the $|\lambda| \to \infty$ behavior of Eq. (27), we can infer that Eq. (35) is a lower bound, if
$$A^H_\varphi\left(\tilde\phi + \tilde\omega A^H_\varphi\right) < 0\,. \tag{38}$$
When the opposite inequality is satisfied, Eq. (35) is an upper bound. Case 2c: If Eq. (34) is undefined due to the expression in the denominator being zero, the values of $L$ in the admissible region will be bounded by $L_b = 0$, and this value cannot be reached for a finite value of other parameters on the border. Combining the possibilities together, we can conclude that cases 1a2b and 1a2c correspond to the centrifugal mechanism, whereas variants 1b2a and 1c2a correspond to the electrostatic mechanism. Case 1a2a signifies the coexistence of both. Note that the combination of signs of $q$ and $L$ leading to $\chi < 0$ is excluded in any case. The other possible combinations, i.e., 1b2b and 1c2c, do not correspond to any simpler limiting cases.

Let us turn to the energy $E$ to finish the discussion of bounds on parameters. From Eq. (28), we find that a value of $\lambda$ for a candidate stationary point is
$$\lambda = -\frac{m\,g^H_{\varphi\varphi}\left(\omega_H\tilde\phi - \tilde\omega\phi_H\right)\operatorname{sgn}A^H_t}{\sqrt{N_H^2\left(A^H_t\right)^2 - g^H_{\varphi\varphi}\left(\omega_H\tilde\phi - \tilde\omega\phi_H\right)^2}}\,. \tag{39}$$
Unlike in the previous two cases, this value can be adjusted using the available gauge freedom. Consequently, we can choose Eq.
(39) to be real, as explained below. Furthermore, it turns out that we can also choose the corresponding stationary point of Eq. (28) to be a minimum. Its value is E min = m sgn A H t φ +ωA H ϕ N 2 H A H t 2 − g H ϕϕ ω Hφ −ωφ H 2 ,(40) and for Eqs. (26) and (27), the following values of q and L are implied by Eq. (39): q = − m sgn A H t φ +ωA H ϕ N 2 H A H t − g H ϕϕω ω Hφ −ωφ H N 2 H A H t 2 − g H ϕϕ ω Hφ −ωφ H 2 ,(41)L = − m sgn A H t φ +ωA H ϕ N 2 H A H t A H ϕ + g H ϕϕφ ω Hφ −ωφ H N 2 H A H t 2 − g H ϕϕ ω Hφ −ωφ H 2 .(42) What are the requirements in order to have a lower bound on E in the admissible region, and is it always possible to make these requirements satisfied simultaneously? First, we have to impose the condition N 2 H A H t 2 − g H ϕϕ ω Hφ −ωφ H 2 > 0(43) to make Eq. (39) real. By checking the |λ| → ∞ behavior of Eq. (28), we can see that we must also require A H t φ +ωA H ϕ > 0 ,(44) in order for Eq. (40) to be a lower bound. Next, recalling (3), one can observe that the combinations A H t ≡ −φ H −ω H A H ϕ and ω Hφ −ωφ H are linearly independent, except for the degenerate case when Eq. (13) holds, which has to be treated separately anyway. Therefore, there is always a way to choose values of φ H and ω H that make any of the conditions given in Eqs. (43) and (44) satisfied or violated. Additional remarks Above, we have identified points on the border of the admissible region where a minimal or maximal value of one of the parameters q, L, and E is reached. Values of all the parameters at such points are proportional to the particle's mass. This illustrates the fact that only a reduced set of parameters is needed to describe particles' kinematics. In particular, for massive critical particles, two parameters are sufficient. These can be eitherχ = χ m andλ = λ m or any two ofq, l, and ε. Therefore, we can understand the admissible region given in Eq. (24) as an area in a two-dimensional parameter space. Its border given in Eq. 
(25) can be viewed as a curve therein, namely, a branch of a hyperbola with axesχ = 0 andλ = 0 and with its vertex onλ = 0. Let us note that in variablesq and l the asymptotes of this hyperbola can be expressed as l = −q g H ϕϕφ ± N H A H ϕ g H ϕϕω ∓ N H ; (45) see Eq. (43) in [12]. Considering the tilde parameters, i.e., the parameters normalized to unit rest mass, one excludes a priori critical photons. However, this is not a big issue, since they have trivial kinematics. Indeed, all critical photons share the same single value of impact parameter b cr = 1 ωH . Therefore, the parameter space of critical photons is effectively zero dimensional, and their kinematics depends only on the properties of the spacetime itself. When are the critical photons able to approach r H ? The expansion given in Eq. (17) reads for those critical photons Z ≈ |L| ω 2 − N 2 H g H ϕϕ (r − r H ) + . . .(46) Here, the expression under the square root is proportional to the one in Eq. (29) with a negative factor. Therefore, critical photons can be involved in the high-energy collisional processes close to r H only in the case 1a. Finally, let us clarify the link between bounds on q, L, E and restrictions on signs of those parameters. Starting with q, we can observe that condition (33) also determines the sign of Eq. (30). Thus, if Eq. (30) is a lower bound, its value is positive, whereas if it is an upper bound, its value is negative. Therefore, whenever values of q in the admissible region are bounded by Eq. (30), they must also all have the same sign. An identical relation holds between Eq. (35) and (38). Last, the gauge condition (44), which we use to enforce a lower bound on energy, also implies that the bound given in Eq. (40) has a positive value. 
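As a numerical cross-check of the bound on $q$, one can scan the border parametrization (26) directly and compare its extremum with the closed forms of Eqs. (29) and (30). The horizon data below are illustrative numbers chosen to realize case 1b, i.e. $N_H^2 > g^H_{\varphi\varphi}\tilde\omega^2$.

```python
import math

# Border of the admissible region, Eq. (26):
# q(lambda) = -(domega*lam + N_H*sqrt(m^2 + lam^2/g_ppH)) / (dphi + domega*A_pH)
m, N_H, g_ppH, domega, dphi, A_pH = 1.0, 1.0, 2.0, 0.5, 0.4, 0.3
den = dphi + domega * A_pH      # > 0 here, so Eq. (30) is an upper bound

def q_border(lam):
    return -(domega * lam + N_H * math.sqrt(m**2 + lam**2 / g_ppH)) / den

disc = math.sqrt(N_H**2 - g_ppH * domega**2)
lam_star = -m * g_ppH * domega / disc   # stationary point, Eq. (29)
q_b = -m * disc / den                   # bound on q, Eq. (30)

# brute-force scan of the border around lam_star
qs = [q_border(lam_star + (i - 5000) * 1e-3) for i in range(10001)]
q_ext = max(qs) if den > 0 else min(qs)
print(abs(q_ext - q_b) < 1e-6)          # -> True
```

The same kind of scan applied to Eqs. (27) and (28) reproduces the bounds $L_b$ and $E_{\min}$ in cases 2b and the gauge-fixed energy case, respectively.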
Hence, dividing by m and rearranging the sign factors, we can express the (possible) bounds on values ofq, l, and ε in the admissible region as follows: −q sgn φ +ωA H ϕ > 1 φ +ωA H ϕ N 2 H − g H ϕϕω 2 ,(47)−l sgn φ +ωA H ϕ A H ϕ > 1 φ +ωA H ϕ N 2 H A H ϕ 2 − g H ϕϕφ 2 ,(48)ε > 1 φ +ωA H ϕ N 2 H A H t 2 − g H ϕϕ ω Hφ −ωφ H 2 . (49) B. Degenerate case Let us now explore the previously excluded case when the degeneracy condition (13) is satisfied. As we noted in Sec. II C, this means that the variables χ and λ become proportional, namely, χ = −ωλ. Because of this, Eq. (25) with Eq. (13) degenerates into an algebraic equation for one variable, which has a single solution, λ = − m sgnω ω 2 N 2 H − 1 g H ϕϕ .(50) One can see that the expressions under the square roots in the denominators of Eq. (29) and Eq. (50) are related by a negative factor. Therefore, Eq. (50) is defined in real numbers in case 1a. On the other hand, in cases 1b and 1c, there is no real solution of Eq. (25) with Eq. (13), and thus the collisional processes studied here are impossible for critical particles with any value of λ. We are unaware of a black hole spacetime where this would occur in the equatorial plane. However, a similar thing happens around the poles of the Kerr solution, as demonstrated by [34]. Note also that λ sgnω > 0 certainly violates Eq. (24) with Eq. (13). Let us now consider the physical interpretation of the degenerate case given by Eq. (50). Using Eq. (14) (with X H = 0), we find that Eq. (50) can be expressed as L = − m sgnω ω 2 N 2 H − 1 g H ϕϕ + qA H ϕ ,(51)E = − mω H sgnω ω 2 N 2 H − 1 g H ϕϕ − qA H t .(52) The charge of the particle plays a role of a free variable; there can never be a bound on the values of q in the admissible region in the degenerate case. However, if A H ϕ = 0, Eq. (51) will correspond to a single value of L, which will constitute a bound on L. If Eq. 
(13) holds together with $A^H_\varphi = 0$, we have $-\operatorname{sgn}(\bar\omega L) = \operatorname{sgn}\chi > 0$, and therefore we can infer
$$-\tilde l \operatorname{sgn}\bar\omega > \frac{1}{\sqrt{\dfrac{\bar\omega^2}{N_H^2} - \dfrac{1}{g^H_{\varphi\varphi}}}}\ . \tag{53}$$
Note that Eq. (13) with $A^H_\varphi = 0$ implies $\bar\varphi = 0$. This would signify case 2c as defined in general (cf. [12]), yet the bound on $\tilde l$ has a nonzero value in this case. We want a lower bound on $E$ in the admissible region, and therefore we impose the gauge conditions $A^H_t = 0$ and $\omega_H \operatorname{sgn}\bar\omega < 0$ in Eq. (52). Then, it holds that
$$\varepsilon > \frac{|\omega_H|}{\sqrt{\dfrac{\bar\omega^2}{N_H^2} - \dfrac{1}{g^H_{\varphi\varphi}}}}\ . \tag{54}$$

IV. ENERGY EXTRACTION

A. Conservation laws and kinematic regimes

1. Conservation laws

Now, we discuss the properties of particles that can be produced in the high-energy collisional processes described in Sec. II E and, in particular, how much energy such particles can extract from a black hole. Let us consider a simple setup in which a critical particle or a nearly critical particle, call it particle 1, and an incoming usual particle, call it particle 2, collide close to the horizon radius $r_H$, and their interaction leads to the production of just two new particles, particle 3 and particle 4. We assume conservation of charge,
$$q_1 + q_2 = q_3 + q_4\ , \tag{55}$$
and also the conservation of 4-momentum at the point of collision. From the azimuthal component of the 4-momentum, see Eq. (4), we infer the conservation of angular momentum,
$$L_1 + L_2 = L_3 + L_4\ . \tag{56}$$
The conservation law for the time component of the 4-momentum can be used to derive the conservation of energy,
$$E_1 + E_2 = E_3 + E_4\ . \tag{57}$$
There is another conserved 4-momentum component, the radial one, $p^r$. Instead of writing down the conservation of $p^r$, we combine it together with the conservation of the time component $p^t$ of the 4-momentum. For that, we use the combinations $N^2 p^t \mp N\sqrt{g_{rr}}\, p^r$, which is advantageous because they lead to combinations of the functions $X$ and $Z$, both defined in Eq. (5).
Indeed, N 2 p t ∓ N √ g rr p r = X ∓ σZ .(58) Since we assumed that particle 2 is incoming, σ 2 = −1, the summation of the conservation laws leads to the following equation: X 1 ∓ σ 1 Z 1 + X 2 ± Z 2 = X 3 ∓ σ 3 Z 3 + X 4 ∓ σ 4 Z 4 .(59) Let us find the leading-order terms in Eq. (59). For usual particles, X and Z differ by a term proportional to N 2 (see Eq. (15)), so their combinations with different signs have different leading orders in expansion around r H , X − Z ∼ (r − r H ) 2 , X + Z ≈ 2X H .(60) On the other hand, in the case of critical particles, or nearly critical particles, the leading order of expansion in r C −r H for both combinations is X ∓ Z ∼ (r C − r H ) .(61) Let us start to analyze Eq. (59) with its upper sign. We assumed particle 2 to be a usual particle, and thus the leading order is the zeroth one, so 2X H 2 = X H 3 − σ 3 Z H 3 + X H 4 − σ 4 Z H 4 .(62) This equation can be satisfied only when one of the final particles, say 4, is usual and incoming, i.e., X H 4 > 0, σ 4 = −1. Let us now turn to Eq. (59) with its lower sign. We see that usual incoming particles 2 and 4 will make no contribution to zeroth order and first order. On the other hand, critical particle 1 will contribute to the first order, and this contribution will dominate the left-hand side. Therefore, the expansion of the right-hand side must also be dominated by a first-order contribution, which means that particle 3 has to be critical or nearly critical. The leading order of Eq. (59) with the lower sign thus becomes χ 1 + σ 1 χ 2 1 − N 2 H m 2 1 + λ 2 1 g H ϕϕ = χ 3 − C 3 + σ 3 (χ 3 − C 3 ) 2 − N 2 H m 2 3 + λ 2 3 g H ϕϕ .(63) Here, C 3 parametrizes deviation of particle 3 from criticality according to Eq. (18). We put C 1 = 0 for simplicity. One can denote the whole left-hand side of Eq. (63) as a new parameter A 1 such that N H A 1 ≡ χ 1 + σ 1 χ 2 1 − N 2 H m 2 1 + λ 2 1 g H ϕϕ ,(64) which will carry all the information about particle 1. 
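As a quick numerical sanity check of the definition in Eq. (64), one can verify that $A_1$ is nonnegative for both signs of $\sigma_1$ and that the Schnittman-type branch ($\sigma_1 = +1$) always yields the larger value. All numbers below are illustrative placeholders, not values taken from the paper:

```python
import math

# Illustrative placeholder values for the horizon quantities and the
# parameters of critical particle 1 -- not taken from the paper.
N_H, g_phph = 0.7, 1.9          # N_H and g^H_{phi phi}
m1, lam1, chi1 = 1.0, 0.4, 2.0  # m_1, lambda_1, chi_1 (chi_1 > 0)

K1 = m1**2 + lam1**2 / g_phph   # m_1^2 + lambda_1^2 / g^H_{phi phi}
assert chi1**2 > N_H**2 * K1    # particle 1 can approach the horizon

# Eq. (64): N_H * A_1 = chi_1 + sigma_1 * sqrt(chi_1^2 - N_H^2 * K_1)
A1 = {s: (chi1 + s * math.sqrt(chi1**2 - N_H**2 * K1)) / N_H for s in (-1, +1)}

print(A1[-1] >= 0)          # BSW-type branch: A_1 >= 0
print(A1[+1] > A1[-1])      # Schnittman-type branch gives the larger A_1
```

Both printed values are True for any placeholder choice satisfying the assertion, since the square root is smaller than $\chi_1$ in magnitude.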
Since $\chi_1 > 0$, we can make sure that $A_1 \geqslant 0$. The difference between BSW-type processes, $\sigma_1 = -1$, and Schnittman-type processes, $\sigma_1 = +1$, is absorbed into the definition of $A_1$, and thus the results expressed using $A_1$ hereafter will be the same for both.

2. Kinematic regimes

Having derived Eqs. (62) and (63) from the conservation law, Eq. (59), we now turn to their physical implications in collisional Penrose processes. For a Penrose process, one of the particles must fall inside the black hole, and we can make sure that particle 4 is bound to do so according to Eq. (62). On the other hand, particle 3 can be produced in four distinct kinematic regimes, based on the combination of the sign of $C_3$ and the sign variable $\sigma_3$ in Eq. (63). In accordance with [29], let us denote the regimes with $C_3 > 0$ as $+$, $C_3 < 0$ as $-$, $\sigma_3 = +1$ as out, and $\sigma_3 = -1$ as in. The four kinematic regimes are then out+, out−, in+, and in−. There are important differences among the four kinematic regimes in several regards. First, we should determine which ones allow particle 3 to escape from the vicinity of the black hole. For simplicity, let us assume a situation when condition (7) is well approximated by linear expansion terms. In such a case, there can be at most one turning point near $r_H$. The radius $r_T$ of this turning point is defined by the condition
$$\chi_3 \left(r_T - r_H\right) - C_3 \left(r_C - r_H\right) = N_H \sqrt{m_3^2 + \frac{\lambda_3^2}{g^H_{\varphi\varphi}}}\,\left(r_T - r_H\right)\ , \tag{65}$$
which can be rearranged as follows:
$$r_C - r_T = \left(r_C - r_H\right)\,\frac{\chi_3 - C_3 - N_H \sqrt{m_3^2 + \dfrac{\lambda_3^2}{g^H_{\varphi\varphi}}}}{\chi_3 - N_H \sqrt{m_3^2 + \dfrac{\lambda_3^2}{g^H_{\varphi\varphi}}}}\ . \tag{66}$$
Note that Eq. (66) may imply $r_T < r_H$, and hence no turning point in the region of our interest. The motion of particle 3 must be allowed at $r_C$, where it is produced; hence,
$$\chi_3 - C_3 - N_H \sqrt{m_3^2 + \frac{\lambda_3^2}{g^H_{\varphi\varphi}}} > 0\ . \tag{67}$$
Therefore, the numerator of the fraction in Eq.
(66) is positive, and since r C > r H by definition, we can conclude that r T < r C for particles produced with parameters in the admissible region, whereas r T > r C for the ones outside of it. (Note the definition of the admissible region of parameters given in Eq. (24).) However, if r T > r C , particle 3 produced at r C can never escape. Therefore, in order for particle 3 to escape, it must be produced with parameters in the admissible region. Particles with C 3 > 0 cannot fall into the black hole by definition, and thus they must have a turning point at a radius r H < r T < r C . Therefore, in regimes out+ and in+, particle 3 can be produced only with parameters in the admissible region, and it is automatically guaranteed to escape. Particles with C 3 < 0, in turn, can cross the horizon; their motion is allowed both at r H and at r C . Hence, there must be an even number of turning points between r H and r C . However, we assumed the existence of at most one turning point, and thus there can be none. Incoming particle 3 produced with C 3 < 0 therefore has to fall into the black hole; i.e., escape in the in− regime is impossible. Last, in the out− regime, particle 3 can either escape or be reflected and fall into the black hole, based on whether its parameters lie in the admissible region or not. The way in which parameters C 3 and σ 3 determine escape possibilities of particle 3 is actually independent of the particular system in question. This can be seen, e.g., through comparison with Sec. IV B in [30], in which particles moving along the symmetry axis are considered. However, despite being so universal and so important for escape of particle 3, parameters C 3 and σ 3 are quite irrelevant for all other purposes. Indeed, if particle 3 escapes, σ 3 must eventually flip to +1, whereas C 3 encodes only a small deviation from fine tuning of parameters of particle 3. We shall now solve Eq. 
(63) for $C_3$ and $\sigma_3$, in order to view the four different kinematic regimes in terms of the other parameters, i.e., $\chi_3$, $\lambda_3$, $m_3$, and $A_1$. First, we can observe from Eq. (63) that
$$\sigma_3 = \operatorname{sgn}\left(N_H A_1 - \chi_3 + C_3\right)\ . \tag{68}$$
Expressing $C_3$ from Eq. (63) and then substituting it back into Eq. (68), we obtain the solutions as follows:
$$C_3 = \chi_3 - \frac{N_H}{2}\left[A_1 + \frac{1}{A_1}\left(m_3^2 + \frac{\lambda_3^2}{g^H_{\varphi\varphi}}\right)\right]\ , \tag{69}$$
$$\sigma_3 = \operatorname{sgn}\left[A_1^2 - \left(m_3^2 + \frac{\lambda_3^2}{g^H_{\varphi\varphi}}\right)\right]\ . \tag{70}$$
Since we are interested only in the sign of $C_3$, and $\sigma_3$ is a sign variable per se, only ratios among the four parameters on the right-hand sides matter to us. Therefore, we have considerable freedom in choosing the relevant three variables. Nevertheless, we have seen above that we also need to consider the admissible region, for which the relevant parameters are $\tilde\chi, \tilde\lambda$. Thus, it is natural to understand Eqs. (69) and (70) as depending on $\tilde\chi_3, \tilde\lambda_3$ and on the ratio between $A_1$ and $m_3$. The third parameter, i.e., the ratio between $A_1$ and $m_3$, clearly stands out; it tracks a comparison between properties of two particles, and it is irrelevant for the admissible region of particle 3. Therefore, we find it natural to visualize the different kinematic regimes as regions in the same two-dimensional parameter space as the admissible region, with the ratio between $A_1$ and $m_3$ serving as an external parameter. However, since we are interested in a physical interpretation, namely, in energy extraction, we will keep $A_1$ and $m_3$ separate in the equations, and we will not explicitly pass to the parameters normalized to unit rest mass. If we treat the ratio between $A_1$ and $m_3$ as an external parameter, there are just two main possibilities, namely, a heavy regime and a light regime. In the heavy regime, defined by $m_3 > A_1$, the argument of the sign function in Eq. (70) is negative for any $\tilde\lambda_3$, and hence the in region covers the whole parameter space. In the light regime, defined by $m_3 < A_1$, the parameter space is divided into in and out regions. B.
Structure of the parameter space Overall picture Now, we should understand how the regions of parameters corresponding to different kinematic regimes are distributed across our parameter space. Let us start with the distinction between + and − regimes, which is always present regardless of the ratio between A 1 and m 3 . From the solution given in Eq. (69), we see that C 3 > 0 implies the inequality χ 3 > N H 2 A 1 + 1 A 1 m 2 3 + λ 2 3 g H ϕϕ ,(71) which defines the + region of parameters. Conversely, the inequality opposite to Eq. (71) entails C 3 < 0 and defines the − region. For C 3 = 0, one has χ 3 = N H 2 A 1 + 1 A 1 m 2 3 + λ 2 3 g H ϕϕ ,(72) which defines the border between the regions, and it corresponds to particle 3 being produced as precisely critical. In theχ 3 ,λ 3 parameter space, Eq. (72) represents a parabola with axisλ 3 = 0. For a physical interpretation, let us substitute Eq. (72) into Eq. (12) to obtain parametric expressions for the border as follows: q 3 = − 1 φ +ωA H ϕ ωλ 3 + N H 2 A 1 + 1 A 1 m 2 3 + λ 2 3 g H ϕϕ ,(73)L 3 = 1 φ +ωA H ϕ φ λ 3 − N H A H ϕ 2 A 1 + 1 A 1 m 2 3 + λ 2 3 g H ϕϕ ,(74)E 3 = 1 φ +ωA H ϕ ω Hφ −ωφ H λ 3 + N H A H t 2 A 1 + 1 A 1 m 2 3 + λ 2 3 g H ϕϕ .(75) Recalling the gauge condition Eq. (44), we can make sure that Eq. (75) leads to E 3 → ∞ for |λ 3 | → ∞. Therefore, we see that values of E 3 in neither the + nor − region are bounded from above. This was not possible in the previously known special cases; see [19]. Since the escape of particle 3 is guaranteed in the + regime, we can also conclude that there is no upper bound on the energy extracted from the black hole. Such a possibility is often called the super-Penrose process. Furthermore, as the + region exists for any value of m 3 , we see that there is no bound on the mass of escaping particles as well. Now, let us turn to the distinction between in and out regimes. From Eq. 
(70), we can see that parameters in the in region must satisfy the condition |λ 3 | > g H ϕϕ (A 2 1 − m 2 3 ) ,(76) whereas the opposite inequality holds for parameters in the out region. The two values of λ 3 that separate the regions, i.e., λ 3 = ± g H ϕϕ (A 2 1 − m 2 3 ) ,(77) correspond to a situation when our leading-order approximation breaks down, since the square root on the right-hand side of Eq. (63) becomes zero and we cannot consistently assign a value to σ 3 . This indicates that particle 3 with those values of λ 3 will be produced as class II nearly critical, and a different expansion would be needed to determine its initial direction of motion. Let us note that particle 3 can be produced in the in+ regime, i.e., with χ 3 and λ 3 satisfying both Eq. (71) and Eq. (76), for any value of the ratio between m 3 and A 1 . This is another thing that was not possible in the previously studied special cases (i.e., vacuum black holes and nonrotating black holes). Osculation points Having derived borders that divide theχ 3 ,λ 3 parameter space according to various criteria, we shall now consider the corners where the borders meet. We can get insight into this issue from the physical interpretation of the borders; Eq. (25) gives a set of parameters for which precisely critical particles are of class II, Eq. (72) corresponds to particle 3 being produced as precisely critical, and Eq. (77) corresponds to particle 3 being produced as class II critical or nearly critical. If any two of those eventualities happen together, the third one follows automatically. Therefore, all three borders must meet in the same points of the parameter space. Indeed, substituting Eq. (77) into both Eq. (25) and Eq. (72) leads to χ 3 = N H A 1 . Conversely, in the heavy regime m 3 > A 1 , in which case Eq. (77) is absent, the remaining borders given in Eqs. (25) and (72) cannot meet at any point. Note that in the m 3 = A 1 case, Eqs. (25) and (72) touch at λ 3 = 0. 
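The solutions in Eqs. (69) and (70) and the osculation property can be confirmed numerically: substituting $C_3$ and $\sigma_3$ back into the right-hand side of Eq. (63) must reproduce $N_H A_1$, and inserting the values of $\lambda_3$ from Eq. (77) into Eq. (72) must give $\chi_3 = N_H A_1$. A minimal check with made-up sample values (none of them from the paper):

```python
import math

# Made-up sample values for the horizon quantities and particles 1 and 3.
N_H, g_phph = 0.7, 1.9
chi1, lam1, m1, sigma1 = 2.0, 0.4, 1.0, +1   # Schnittman-type branch
chi3, lam3, m3 = 1.5, -0.3, 0.8

K1 = m1**2 + lam1**2 / g_phph
A1 = (chi1 + sigma1 * math.sqrt(chi1**2 - N_H**2 * K1)) / N_H   # Eq. (64)
assert A1 > m3   # light regime, so the osculation points of Eq. (77) exist

# Eqs. (69) and (70):
K3 = m3**2 + lam3**2 / g_phph
C3 = chi3 - 0.5 * N_H * (A1 + K3 / A1)
sigma3 = math.copysign(1, A1**2 - K3)

# Back-substitution into the right-hand side of Eq. (63):
rhs = (chi3 - C3) + sigma3 * math.sqrt((chi3 - C3)**2 - N_H**2 * K3)
print(abs(rhs - N_H * A1) < 1e-12)   # True

# Osculation: lambda_3 from Eq. (77) inserted into Eq. (72) gives chi_3 = N_H*A_1.
lam_osc = math.sqrt(g_phph * (A1**2 - m3**2))
chi_osc = 0.5 * N_H * (A1 + (m3**2 + lam_osc**2 / g_phph) / A1)
print(abs(chi_osc - N_H * A1) < 1e-12)   # True
```

Both identities hold exactly up to floating-point rounding, independently of the sample values chosen.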
We have also seen that particle 3 can be produced with $C_3 > 0$ only when its other parameters satisfy the condition (24). Therefore, the + region must lie inside the admissible region in the parameter space, and their borders can only osculate. One can make sure that this is indeed the case by comparing the limiting behavior of Eqs. (25) and (72). Substituting the values of $\lambda_3$ given in Eq. (77) into Eqs. (26)-(28), we obtain the values of $q$, $L$, and $E$ for the osculating points, namely,
$$q = \frac{1}{\bar\varphi + \bar\omega A^H_\varphi}\left[\mp\bar\omega\sqrt{g^H_{\varphi\varphi}\left(A_1^2 - m_3^2\right)} - N_H A_1\right]\ , \tag{78}$$
$$L = \frac{1}{\bar\varphi + \bar\omega A^H_\varphi}\left[\pm\bar\varphi\sqrt{g^H_{\varphi\varphi}\left(A_1^2 - m_3^2\right)} - N_H A^H_\varphi A_1\right]\ , \tag{79}$$
$$E = \frac{1}{\bar\varphi + \bar\omega A^H_\varphi}\left[\pm\left(\omega_H\bar\varphi - \bar\omega\varphi_H\right)\sqrt{g^H_{\varphi\varphi}\left(A_1^2 - m_3^2\right)} + N_H A^H_t A_1\right]\ . \tag{80}$$

3. Bounds on parameters: General considerations

We have seen that there is no upper bound on the energy $E_3$ in the regions of the parameter space which correspond to particle 3 being able to escape. Let us now search for other bounds on the parameters of particle 3 in these regions. There are multiple possibilities, depending on the ratio between $A_1$ and $m_3$. First, let us consider a hypothetical interaction, for which this ratio can take any value. More precisely, we shall consider an idealized scenario, in which it is possible to produce particle 3 with any value of $m_3$ in the processes with the same fixed value of $A_1$. (Note that keeping $A_1$ fixed is motivated by the existence of upper bounds on $A_1$ in terms of $E_1$; see Eqs. (93) and (95). Moreover, one can also find a lower bound on $A_1$ in a similar manner for $\sigma_1 = +1$.) Now, let us look at the union of all the + regions corresponding to all the possible values of $m_3$. Since the osculation points given by Eq. (77) can occur at any value of $\tilde\lambda_3$, we see that this union will fill the whole admissible region in $\tilde\chi_3, \tilde\lambda_3$ space. Therefore, the possible bounds given in Eqs. (47)-(49) on $\tilde q$, $\tilde l$, and $\varepsilon$ in the admissible region will also serve as bounds on $\tilde q_3$, $\tilde l_3$, and $\varepsilon_3$ of particles produced by our hypothetical interaction.
Second, let us consider a more realistic scenario, in which only some values of the ratio between $m_3$ and $A_1$ are possible. In such a case, we can search for bounds on parameters in the + region for given values of $m_3$ and $A_1$. Since the parametric expressions given in Eqs. (73)-(75) are mere quadratic functions of $\lambda_3$, they will always reach an extremum, and therefore there will always be bounds on values of $q_3$, $L_3$, and $E_3$ in the + region. Starting with $q_3$, we find that for
$$\lambda_3 = -\frac{g^H_{\varphi\varphi}\,\bar\omega}{N_H}\, A_1\ , \tag{81}$$
Eq. (73) reaches an extremum with value
$$q^b_3 = -\frac{1}{2\left(\bar\varphi + \bar\omega A^H_\varphi\right)}\left[N_H\left(A_1 + \frac{m_3^2}{A_1}\right) - \frac{g^H_{\varphi\varphi}\,\bar\omega^2 A_1}{N_H}\right]\ . \tag{82}$$
Turning to $L_3$, we can infer that for
$$\lambda_3 = \frac{g^H_{\varphi\varphi}\,\bar\varphi}{N_H A^H_\varphi}\, A_1\ , \tag{83}$$
expression (74) reaches an extremum with value
$$L^b_3 = -\frac{1}{2\left(\bar\varphi + \bar\omega A^H_\varphi\right)}\left[N_H A^H_\varphi\left(A_1 + \frac{m_3^2}{A_1}\right) - \frac{g^H_{\varphi\varphi}\,\bar\varphi^2 A_1}{N_H A^H_\varphi}\right]\ . \tag{84}$$
Last, we can make sure that the condition (44) implies that Eq. (75) will reach a minimum. It occurs for
$$\lambda_3 = -\frac{g^H_{\varphi\varphi}\, A_1}{N_H A^H_t}\left(\omega_H\bar\varphi - \bar\omega\varphi_H\right)\ , \tag{85}$$
and its value is
$$E^{\min}_3 = \frac{1}{2\left(\bar\varphi + \bar\omega A^H_\varphi\right)}\left[N_H A^H_t\left(A_1 + \frac{m_3^2}{A_1}\right) - \frac{g^H_{\varphi\varphi}\, A_1}{N_H A^H_t}\left(\omega_H\bar\varphi - \bar\omega\varphi_H\right)^2\right]\ . \tag{86}$$

4. Additional remarks on the out− region

The discussion above can be extended by analyzing bounds on parameters in further, special regions in the parameter space. The out− region is particularly interesting in this regard, since there is an upper bound on the values of energy $E_3$ in this region. As noted in [30], this can be used to illustrate the difference between the BSW-type and Schnittman-type collisional processes. Let us extend this argument to our more complicated case of equatorial charged particles. Equation (75) cannot have a maximum on account of Eq. (44), and thus the upper bound on $E_3$ in the out− region must be its value for one of the osculation points. Picking the higher of the values in Eq.
(80), we can write the bound as follows: E 3 < ω Hφ −ωφ Ĥ φ +ωA H ϕ g H ϕϕ (A 2 1 − m 2 3 ) + N H A H t A 1 φ +ωA H ϕ .(87) We shall maximize the bound with respect to all possible parameters in order to derive an unconditional bound in terms of E 1 . First, we consider m 3 ≪ A 1 , which also allows us to factor out A 1 , E 3 < ω Hφ −ωφ Ĥ φ +ωA H ϕ g H ϕϕ + N H A H t φ +ωA H ϕ A 1 .(88) Second, we shall express A 1 using E 1 and maximize it with respect to other parameters of particle 1. We can use (12) with X H = 0 (as (80) lies on (25)) to rewrite χ 1 in terms of E 1 and λ 1 , χ 1 = E 1φ +ωA H ϕ A H t − λ 1 ω Hφ −ωφ H A H t .(89) Note that gauge condition (44) implies that the coefficient at E 1 is positive. In the |λ 1 | → ∞ limit, for fixed E 1 , using Eq. (89) in Eq. (64), we find that the leading order of A 1 is A 1 ≈ −λ 1 ω Hφ −ωφ H N H A H t + σ 1 |λ 1 | ω Hφ −ωφ H N H A H t 2 − 1 g H ϕϕ . (90) One can see that A 1 is not real due to Eq. (43). Therefore, for a given E 1 , the parameter A 1 will lie in the real numbers only for a finite interval of values of λ 1 , and ∂ A1 ∂λ1 will blow up with opposite signs at the opposite ends of that interval. Thus, there will always be an extremum with respect to λ 1 . (See the Appendix for details.) For the BSW-type process, σ 1 = −1, there will be a minimum. Hence, we shall start with the following inequality (see Eq. (64)): N H A 1 (E 1 , λ 1 , m 1 ) χ 1 (E 1 , λ 1 ) .(91) In order to maximize χ 1 (E 1 , λ 1 ) of Eq. (89), we need to look at values of λ 1 that satisfy N H A 1 (E 1 , λ 1 , m 1 ) = χ 1 (E 1 , λ 1 ) ,(92) i.e., the ends of the interval mentioned above, and on their dependence on m 1 (cf. the Appendix). The resulting unconditional bound on A 1 with σ 1 = −1 is A 1 E 1 φ +ωA H ϕ N H A H t − g H ϕϕ ω Hφ −ωφ H .(93) In combination with Eq. (88), Eq. 
(93) gives us the unconditional upper bound on energy E 3 of a particle produced in the out− regime in the BSW-type process as follows: E 3 < E 1 N H A H t + g H ϕϕ ω Hφ −ωφ H N H A H t − g H ϕϕ ω Hφ −ωφ H .(94) For the Schnittman-type process, σ 1 = +1, we can see that we need to put m 1 = 0 to maximize A 1 . Then we can find the maximum of A 1 with respect to λ 1 (using (64) with (89); see also the Appendix) and derive the unconditional bound on A 1 , A 1 2E 1 N H φ +ωA H ϕ A H t N 2 H A H t 2 − g H ϕϕ ω Hφ −ωφ H 2 .(95) Combining with Eq. (88), we conclude that the unconditional upper bound on the energy E 3 of a particle produced in the out− regime in the Schnittman-type process is E 3 < 2E 1 N H A H t N H A H t − g H ϕϕ ω Hφ −ωφ H .(96) Let us note that for ω = 0, the above results reduce to the ones of [30], i.e., E 3 < E 1 for the BSW-type process and E 3 < 2E 1 for the Schnittman-type process. The bound for the Schnittman-type process is higher than for the BSW-type process even in the general case, due to Eq. (43). On the other hand, also due to Eq. (43), we can see that E 3 > E 1 is generally not prevented for the BSW-type process, unlike in the ω = 0 case. However, the biggest difference is that in the general case, the gauge-dependent factors do not cancel, and thus the bounds need to be interpreted more carefully. We have seen above that the collisional processes analyzed here have multiple features that were absent in the previously studied special cases. Thus, now we shall discuss how the special cases follow from the general results. C. Special cases and the degenerate case Quasiradial limit First, let us investigate how to recover the results for radially moving particles [29]. Similarly as in Sec. 
III A, we can choose to consider either particles that move radially with respect to a locally nonrotating observer very close to the horizon, i.e., $\lambda_3 = 0$, or particles that would move radially in a region devoid of the influence of dragging and of the magnetic field, i.e., $L_3 = 0$. However, both choices lead to a trivial transition, unlike in Sec. III A. Considering particles with a fixed value of $\lambda_3$, the condition $C_3 > 0$, see Eq. (71), can be restated (using Eq. (89) for particle 3) as follows:
$$E_3 > \frac{1}{\bar\varphi + \bar\omega A^H_\varphi}\left\{\left(\omega_H\bar\varphi - \bar\omega\varphi_H\right)\lambda_3 + \frac{N_H A^H_t}{2}\left[A_1 + \frac{1}{A_1}\left(m_3^2 + \frac{\lambda_3^2}{g^H_{\varphi\varphi}}\right)\right]\right\}\ . \tag{97}$$
The key feature we want to reproduce is the existence of a threshold value $\mu$, such that $E_3 > \mu$ corresponds to the + regime and $E_3 < \mu$ corresponds to the − regime. And indeed, by setting $\lambda_3 = 0$, we get a threshold value $\mu$, given by
$$\mu \equiv \frac{N_H A^H_t}{2\left(\bar\varphi + \bar\omega A^H_\varphi\right)}\left(A_1 + \frac{m_3^2}{A_1}\right)\ . \tag{98}$$
Moreover, $\lambda_3 = 0$ lies in the out region whenever it exists. Thus, we can also see that for $\lambda_3 = 0$, the heavy regime $m_3 > A_1$ coincides with the in regime and the light regime $m_3 < A_1$ with the out regime. This also replicates the results of [29].

2. Geodesic limit

Second, we discuss the transition to geodesic particles, i.e., $q_3 = 0$. We shall rewrite $C_3$ of Eq. (69) in terms of $E_3$ and $q_3$ (using Eq. (10) and dropping the contribution proportional to $X_H$),
$$C_3 = -\frac{1}{\omega_H}\left[\bar\omega E_3 + q_3\left(\omega_H\bar\varphi - \bar\omega\varphi_H\right)\right] - \frac{N_H}{2}\left[A_1 + \frac{1}{A_1}\left(m_3^2 + \frac{\left(E_3 + q_3 A^H_t\right)^2}{g^H_{\varphi\varphi}\,\omega_H^2}\right)\right]\ . \tag{99}$$
(More precisely, by using Eq. (10) with (18) in the derivation of (63), one can make sure that the $X_H$ term influences only higher orders of expansion.) The resulting expression (99) admits a factorization,
$$C_3 = -\frac{N_H}{2 g^H_{\varphi\varphi}\,\omega_H^2\, A_1}\left(E_3 - R_+\right)\left(E_3 - R_-\right)\ , \tag{100}$$
where $R_\pm$ stand for
$$R_\pm = -q_3 A^H_t + \frac{g^H_{\varphi\varphi}\, A_1}{N_H}\left[-\omega_H\bar\omega \pm |\omega_H|\sqrt{\bar\omega^2 - \frac{2 q_3 N_H}{g^H_{\varphi\varphi}\, A_1}\left(\bar\varphi + \bar\omega A^H_\varphi\right) - \frac{N_H^2}{g^H_{\varphi\varphi}}\left(1 + \frac{m_3^2}{A_1^2}\right)}\right]\ . \tag{101}$$
Since $R_+ > R_-$, the + regime corresponds to $R_- < E_3 < R_+$ for a fixed value of $q_3$. Let us now rewrite $\sigma_3$ of Eq.
(70) in terms of E 3 and q 3 , σ 3 = sgn A 2 1 − m 2 3 + E 3 + q 3 A H t 2 g H ϕϕ ω 2 H .(102) The result again admits a factorization, σ 3 = − sgn[(E 3 − S + ) (E 3 − S − )](103) where S ± are S ± = −q 3 A H t ± |ω H | g H ϕϕ (A 2 1 − m 2 3 ) .(104) As S + > S − , the out regime corresponds to S − < E 3 < S + for a fixed value of q 3 . Now, let us put q 3 = 0 in the equations above to find the geodesic limit. For geodesic particles, i.e., for q 3 = 0, it should be possible to produce particles with high values of E 3 or m 3 only in the in− regime, which prevents their escape. In putting q 3 = 0, we denote the resulting values of R ± for the geodesic limit as R g ± , so that R g ± = g H ϕϕ A 1 N H −ω Hω ± |ω H | ω 2 − N 2 H g H ϕϕ 1 + m 2 3 A 2 1 .(105) Since S − becomes negative for q 3 = 0 and E 3 > 0, we need to consider only S + in the geodesic limit, i.e., S g + , which reads S g + = |ω H | g H ϕϕ (A 2 1 − m 2 3 ) .(106) If a geodesic particle 3 has sufficiently high energy, such that it satisfies both E 3 > R g + and E 3 > S g + , it will be produced in the in− regime and fall into the black hole. Conversely, we can see that R g ± and S g + all become imaginary for m 3 ≫ A 1 , and thus the in− regime is the only possible regime in that case. Hence, we reproduced the results of [19,20] that the mass and energy of escaping geodesic particles is bounded. Note that in [19,20], the symbols λ ± were used for R g ± and λ 0 for S g + . Degenerate case Third, let us return to the degenerate case given in Eq. (13), which was so far excluded from our discussion of energy extraction. We shall use the same parametrization as in the geodesic case. If we apply Eq. (13) to R ± given in Eq. (101), they go over to degenerate R ± , i.e., R d ± , which read R d ± = −q 3 A H t − g H ϕϕ A 1 N H ω Hω ∓ |ω H | ω 2 − N 2 H g H ϕϕ 1 + m 2 3 A 2 1 .(107) We have determined in Sec. III B that we need to impose the gauge condition A H t = 0 in the degenerate case. 
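The factorization of $\sigma_3$ in Eqs. (103) and (104) is easy to test numerically: the argument of the sign function in Eq. (102) is a downward parabola in $E_3$ with roots at $S_\pm$, so the two expressions must agree for any energy. A small check with assumed placeholder values (not from the paper):

```python
import math

# Assumed placeholder values; m_3 < A_1 so that the square root is real.
A1, m3, q3 = 1.4, 0.6, 0.3
A_t, om_H, g_phph = -0.8, 0.25, 7.4   # A^H_t, omega_H, g^H_{phi phi}

S = math.sqrt(g_phph * (A1**2 - m3**2))
S_p = -q3 * A_t + abs(om_H) * S       # S_+ of Eq. (104)
S_m = -q3 * A_t - abs(om_H) * S       # S_- of Eq. (104)

ok = True
for E3 in (S_m - 0.5, 0.5 * (S_m + S_p), S_p + 0.5):
    # Argument of the sign function in Eq. (102):
    arg = A1**2 - (m3**2 + (E3 + q3 * A_t)**2 / (g_phph * om_H**2))
    ok = ok and math.copysign(1, arg) == -math.copysign(1, (E3 - S_p) * (E3 - S_m))
print(ok)  # True: sigma_3 = -sgn[(E_3 - S_+)(E_3 - S_-)]
```

The check samples one energy below $S_-$, one between the roots, and one above $S_+$, covering all three sign regions.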
However, with this condition, it holds that $R^d_\pm = R^g_\pm$. Furthermore, putting $A^H_t = 0$ has the same effect on $S_\pm$ given in Eq. (104) as putting $q_3 = 0$. Thus, we see that upon the gauge condition $A^H_t = 0$, the degenerate case completely coincides with the geodesic case. Therefore, the degenerate case corresponds to a situation when the spacetime behaves locally as vacuum close to $r_H$. However, it can be shown that the spacetime does not need to be globally vacuum in order for Eq. (13) to be satisfied (see, e.g., [35]).

V. RESULTS FOR KERR-NEWMAN SOLUTION

A. List of relevant quantities

Let us now apply the framework developed above to the case of the extremal Kerr-Newman solution with mass $M$, angular momentum $aM$, and charge $Q$. The extremal case is defined by the condition
$$M^2 = Q^2 + a^2\ , \tag{108}$$
which implies
$$r_H = M\ . \tag{109}$$
We also list the quantities relevant for our discussion as follows:
$$N_H = \frac{\sqrt{Q^2 + a^2}}{Q^2 + 2a^2}\ , \qquad g^H_{\varphi\varphi} = \frac{\left(Q^2 + 2a^2\right)^2}{Q^2 + a^2}\ , \tag{110}$$
$$\omega_H = \frac{a}{Q^2 + 2a^2}\ , \qquad \varphi_H = \frac{Q\sqrt{Q^2 + a^2}}{Q^2 + 2a^2}\ , \tag{111}$$
$$A^H_t = -\frac{Q}{\sqrt{Q^2 + a^2}}\ , \qquad A^H_\varphi = \frac{Qa}{\sqrt{Q^2 + a^2}}\ , \tag{112}$$
$$\bar\omega = -\frac{2a\sqrt{Q^2 + a^2}}{\left(Q^2 + 2a^2\right)^2}\ , \qquad \bar\varphi = -\frac{Q^3}{\left(Q^2 + 2a^2\right)^2}\ . \tag{113}$$

B. Admissible region in the parameter space

Critical particles can approach $r = M$ whenever their parameters lie inside the admissible region in the parameter space. Equations (26)-(28) for the border of the admissible region go over to
$$q = \frac{\sqrt{Q^2 + a^2}}{Q\left(Q^2 + 2a^2\right)}\left[-2a\lambda + \sqrt{\left(Q^2 + 2a^2\right)^2 m^2 + \left(Q^2 + a^2\right)\lambda^2}\right]\ , \tag{114}$$
$$L = \frac{Q^2\lambda + a\sqrt{\left(Q^2 + 2a^2\right)^2 m^2 + \left(Q^2 + a^2\right)\lambda^2}}{Q^2 + 2a^2}\ , \tag{115}$$
$$E = \frac{-a\lambda + \sqrt{\left(Q^2 + 2a^2\right)^2 m^2 + \left(Q^2 + a^2\right)\lambda^2}}{Q^2 + 2a^2}\ . \tag{116}$$
As we discussed in Sec. III A 2, bounds on values of $q$, $L$, and $E$ in the admissible region are given by extrema of Eqs. (114)-(116) as functions of $\lambda$. If $\frac{|a|}{M} < \frac{1}{2}$, then Eq. (114) reaches an extremum, which has a value given in Eq. (30), i.e.,
$$q_b = \frac{m\sqrt{Q^2 - 3a^2}}{Q}\ , \tag{117}$$
and the corresponding values given in Eqs.
(31) and (32) of $L$ and $E$ become
$$L = \frac{m a\left(3Q^2 + a^2\right)}{\sqrt{\left(Q^2 + a^2\right)\left(Q^2 - 3a^2\right)}}\ , \qquad E = \frac{m\left(Q^2 - a^2\right)}{\sqrt{\left(Q^2 + a^2\right)\left(Q^2 - 3a^2\right)}}\ . \tag{118}$$
For $\frac{|a|}{M} > \frac{\sqrt 5 - 1}{2}$, there exists an extremum of Eq. (115), which has a value given in Eq. (35), i.e.,
$$L_b = m \operatorname{sgn} a\,\frac{\sqrt{a^4 + Q^2 a^2 - Q^4}}{\sqrt{Q^2 + a^2}}\ , \tag{119}$$
and the corresponding values given in Eqs. (36) and (37) of $q$ and $E$ go over to
$$q = \frac{m|a|}{Q}\,\frac{3Q^2 + a^2}{\sqrt{a^4 + Q^2 a^2 - Q^4}}\ , \qquad E = \frac{m|a|\left(2Q^2 + a^2\right)}{\sqrt{\left(Q^2 + a^2\right)\left(a^4 + Q^2 a^2 - Q^4\right)}}\ . \tag{120}$$
In the standard gauge vanishing at spatial infinity, the dragging potential and the electromagnetic potential of an extremal Kerr-Newman solution satisfy the conditions given in Eqs. (43) and (44). Therefore, Eq. (116) always has a minimum, see Eq. (40),
$$E_{\min} = \frac{m|Q|}{\sqrt{Q^2 + a^2}}\ , \tag{121}$$
and the corresponding values given in Eqs. (41) and (42) of $q$ and $L$ turn into
$$q = \frac{m\left(Q^2 - a^2\right)}{|Q|\, Q}\ , \qquad L = \frac{m a\left(2Q^2 + a^2\right)}{|Q|\sqrt{Q^2 + a^2}}\ . \tag{122}$$
The degenerate case of Eq. (13) corresponds to the extremal Kerr solution, i.e., $Q = 0$. Let us note that for the extremal Reissner-Nordström solution, the border of the admissible region has a symmetry with respect to the change $l \to -l$. Such a possibility was labeled as case 3 in [12]. A summary of the general results on bounds on parameters in the admissible region is given in Table I.

[Table I: bounds on $\tilde q \operatorname{sgn} Q$ and $\tilde l \operatorname{sgn} a$ in the admissible region, listed separately for the ranges $\frac{1}{\sqrt 3} < \frac{|a|}{M} < 1$, $\frac{|a|}{M} = \frac{1}{\sqrt 3}$, and $0 < \frac{|a|}{M} < \frac{1}{\sqrt 3}$ (equivalently $0 < \frac{|Q|}{M} < \sqrt{\frac{2}{3}}$, $\frac{|Q|}{M} = \sqrt{\frac{2}{3}}$, and $\sqrt{\frac{2}{3}} < \frac{|Q|}{M} < 1$).]

One can observe that $E \geqslant m$ for the values of energy given in Eqs. (118) and (120), whereas $E_{\min} \leqslant m$. This implies that the expressions, given in Eqs. (114) and (115), for the values of $q$ and $L$ on the border, defined by Eq. (25), of the admissible region are monotonic along the part of the border corresponding to $E_{\mathrm{cr}} \geqslant m$. Therefore, the values of $q$ and $L$ in the part of the admissible region with $E_{\mathrm{cr}} \geqslant m$ are bounded by the values of $q$ and $L$ for points on Eq. (25) with $E_{\mathrm{cr}} = m$.
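The Kerr-Newman bounds above can be cross-checked numerically: minimizing the border expressions (114) and (116) over $\lambda$ by brute force should reproduce Eqs. (117) and (121), and the listed horizon quantities should satisfy the identity $A^H_t = -\varphi_H - \omega_H A^H_\varphi$ recalled in Sec. III A. A sketch with one sample parameter choice ($Q$, $a$, $m$ are arbitrary, chosen so that $|a|/M < 1/2$):

```python
import math

m, Q, a = 1.0, 1.6, 0.6                 # sample extremal KN: M^2 = Q^2 + a^2
M = math.sqrt(Q**2 + a**2)
assert a / M < 0.5                      # needed for the extremum in Eq. (117)

# Identity A^H_t = -phi_H - omega_H * A^H_phi with the quantities (110)-(112):
om_H, ph_H = a / (Q**2 + 2*a**2), Q * M / (Q**2 + 2*a**2)
A_t, A_ph = -Q / M, Q * a / M
print(abs(A_t + ph_H + om_H * A_ph) < 1e-14)   # True

# Border of the admissible region, Eqs. (114) and (116):
S = lambda lam: math.sqrt((Q**2 + 2*a**2)**2 * m**2 + (Q**2 + a**2) * lam**2)
q = lambda lam: M * (-2*a*lam + S(lam)) / (Q * (Q**2 + 2*a**2))
E = lambda lam: (-a*lam + S(lam)) / (Q**2 + 2*a**2)

grid = [-20 + 0.001 * k for k in range(40001)]
print(abs(min(map(q, grid)) - m * math.sqrt(Q**2 - 3*a**2) / Q) < 1e-4)  # Eq. (117)
print(abs(min(map(E, grid)) - m * abs(Q) / M) < 1e-4)                    # Eq. (121)
```

The brute-force grid minima agree with the closed-form extrema to the grid resolution, confirming the reconstructed expressions against each other.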
Using the expressions for these points obtained in [12], we present the resulting bounds in Table II.

C. Structure of the parameter space with regard to energy extraction

As we examined in Sec. IV B, the parameter space of nearly critical particles can be divided into various regions corresponding to different kinematic regimes, in which particle 3 can be produced in our collisional process. In the extremal Kerr-Newman spacetime, Eqs. (73)-(75) for the border separating the + and − regions become
$$q_3 = \frac{\sqrt{Q^2 + a^2}}{Q}\left\{-\frac{2a\lambda_3}{Q^2 + 2a^2} + \frac{1}{2}\left[A_1 + \frac{1}{A_1}\left(m_3^2 + \frac{\left(Q^2 + a^2\right)\lambda_3^2}{\left(Q^2 + 2a^2\right)^2}\right)\right]\right\}\ , \tag{123}$$
$$L_3 = \frac{Q^2\lambda_3}{Q^2 + 2a^2} + \frac{a}{2}\left[A_1 + \frac{1}{A_1}\left(m_3^2 + \frac{\left(Q^2 + a^2\right)\lambda_3^2}{\left(Q^2 + 2a^2\right)^2}\right)\right]\ , \tag{124}$$
$$E_3 = -\frac{a\lambda_3}{Q^2 + 2a^2} + \frac{1}{2}\left[A_1 + \frac{1}{A_1}\left(m_3^2 + \frac{\left(Q^2 + a^2\right)\lambda_3^2}{\left(Q^2 + 2a^2\right)^2}\right)\right]\ . \tag{125}$$
The two values of $\lambda_3$ that separate the in and out regions, given in Eq. (77), are
$$\lambda_3 = \pm\frac{Q^2 + 2a^2}{\sqrt{Q^2 + a^2}}\sqrt{A_1^2 - m_3^2}\ . \tag{126}$$
Equations (78)-(80) for the osculation points become
$$q_3 = \frac{1}{Q}\left(\sqrt{Q^2 + a^2}\, A_1 \mp 2a\sqrt{A_1^2 - m_3^2}\right)\ , \tag{127}$$
$$L_3 = a A_1 \pm \frac{Q^2\sqrt{A_1^2 - m_3^2}}{\sqrt{Q^2 + a^2}}\ , \tag{128}$$
$$E_3 = A_1 \mp \frac{a\sqrt{A_1^2 - m_3^2}}{\sqrt{Q^2 + a^2}}\ . \tag{129}$$
As we noted in Sec. IV B 3, the values of $q_3$, $L_3$, and $E_3$ in the + region are always bounded. Equations (81), (84), and (86) for bounds on these parameters go over to
$$q^b_3 = \frac{1}{2Q}\left(\frac{Q^2 - 3a^2}{\sqrt{Q^2 + a^2}}\, A_1 + \sqrt{Q^2 + a^2}\,\frac{m_3^2}{A_1}\right)\ , \tag{130}$$
$$L^b_3 = \frac{1}{2}\left(\frac{a^4 + Q^2 a^2 - Q^4}{a\left(Q^2 + a^2\right)}\, A_1 + a\,\frac{m_3^2}{A_1}\right)\ , \tag{131}$$
$$E^{\min}_3 = \frac{1}{2}\left(\frac{Q^2}{Q^2 + a^2}\, A_1 + \frac{m_3^2}{A_1}\right)\ . \tag{132}$$
The structure of the parameter space is visualized for a black hole with $\frac{a}{M} = \frac{1}{2}$ and a process with $m_3 < A_1$ in the left panel of Figure 1 and for a black hole with $\frac{a}{M} = \frac{\sqrt{35}}{6}$ and a process with $m_3 > A_1$ in the right panel. There, it is also shown how the special limiting cases discussed in Sec. IV C correspond to different sections of the parameter space.

[FIG. 1. … i.e., critical particles with those parameters cannot approach $r = M$, and nearly critical particles produced with those parameters in the vicinity of $r = M$ cannot escape. Among the regions corresponding to different kinematic regimes of production of particle 3 for a given $A_1$, the in− region, which corresponds to particle 3 falling into the black hole, is shaded in orange. Left: process with $A_1 = 2.5\, m_3$, i.e., in the light regime, for a black hole with $\frac{a}{M} = \frac{1}{2}$. Right: process with $3A_1 = 2m_3$, i.e., in the heavy regime, for a black hole with $\frac{a}{M} = \frac{\sqrt{35}}{6}$. For uncharged particles in the in+ region (marked by the red bar), there exists a lower and an upper bound on their energy, whereas for particles with $\lambda_3 = 0$ in the in+ region (marked by the blue bar), there is only a lower bound. The ultimate lower bound on energy for the whole in+ region is marked by the green cross.]

VI. CONCLUSIONS

We have studied high-energy collisions of equatorial charged test particles near the horizon of an extremal rotating electrovacuum black hole, i.e., the generalized BSW effect. Such collisions are only possible when critical particles can reach the vicinity of the horizon. Consequently, we distinguished different variants of the process based on whether there exist bounds on the charge and the angular momentum of the critical particles that can approach the horizon. (Since the values of angular momentum measured at the horizon are bounded solely in the vacuum case, we used the angular momentum measured at infinity as our reference.) Geometrically, these bounds can be seen as extrema of a hyperbola curve in the two-dimensional parameter space of the critical particles. We proceeded to examine the possibilities of energy extraction from the black hole by particles produced in the BSW collisions, using a 2 → 2 model process. We first discussed which kinematic regimes imply escape of one of the produced particles and noted that this picture is model independent. Then, we investigated to which regions in the parameter space these kinematic regimes correspond, and we found several situations that were not possible in the previously studied cases with fewer parameters.
We also explained how these limiting cases can be obtained as different sections of the full parameter space. This is visualized in Fig. 1 for the case of the extremal Kerr-Newman solution. Leaving the technical intricacies aside, one main result stands out: there are no unconditional bounds on the extracted energy as long as both the black hole and the escaping particles are charged. (And the same actually holds for the absence of unconditional upper bounds on the mass of the escaping particle.) Thus, the influence of the electromagnetic field on the energy extraction is more important in the present setup, as it prevails for arbitrarily small black hole charge. Although these results are very promising, we have to acknowledge that a lot of effects have not been taken into account in our considerations, most notably the electromagnetic backreaction. Additionally, as shown in [30], despite the lack of unconditional bounds, there can be caveats which make the energy extraction unfeasible even within the test particle approximation. Although we leave the details for future work, we can nevertheless show by inspection that several of the concerns mentioned in [30] for the axial case do not apply to the equatorial case. In particular, some of the difficulties in the axial case arose from the fact that critical particles needed to be highly relativistic, E ≫ m, in order to approach the horizon for Q ≪ M. However, in the present case, we have shown that nonrelativistic critical particles can always approach the horizon, due to E_min ≤ m, regardless of the value of Q. Other problems in the axial case were caused by the fact that the escaping particle 3 needed to have a charge of larger magnitude than the initial charged critical particle 1, i.e., |q_3| > |q_1| > 0, in order to have E_3 > E_1.
In the present setup, this is not an issue whenever the black hole is rotating; in that case, the energy of critical or nearly critical particles does not need to be proportional, or approximately proportional, to their charge. This suggests that the presence of frame dragging is important for the energy extraction as well, as we shall examine in follow-up work.

…corresponds to Eq. (30) being a lower bound, whereas if the opposite inequality is satisfied, Eq. (30) will be an upper bound.

Case 1c: If the expression under the square root in Eq. (29) is zero (and Eq. (29) is thus an invalid expression), the values of charge in the admissible region will be bounded by q_b = 0. However, q = 0 cannot be attained for any finite value of the other parameters on the border.

Let us turn to the angular momentum L. From Eq. (… Again, there are three possible cases. Case 2a: If Eq. (34) is imaginary, there is no bound on L in the admissible region. Case 2b: If Eq. (34) is real, it will correspond to an extremum of Eq. (27) with value …

… for |λ_3| → ∞ and their values at λ_3 = 0, i.e., in between the values given in Eq. (77). By putting Eq. (77) into Eqs. (73)-(75) (or into Eqs. (26)-(…))

FIG. 1. Parameter space of critical and nearly critical particles for extremal Kerr-Newman black holes. On each panel, the part shaded in gray is outside the admissible region.

TABLE II. Bounds on the parameters q and l of critical particles with ε ≪ 1 that can approach r = M in an extremal Kerr-Newman spacetime. (The placement of nonstrict inequalities is based on results about class II critical particles; see [12].) In particular, for |a|/M = 1, |Q|/M = 0 there is no bound on q, while 2|a|/√3 < l sgn a.

Appendix A: Auxiliary formulas

Let us introduce an expression P as follows: …

We can define P_± by putting δ = ±1 and W = W_end, where W_end is given by …

Then, it holds that A_1(E_1, λ_1, m_1) is real whenever λ_1 ∈ [P_−, P_+]. Note that A_1(E_1, λ_1, m_1) is defined by Eq. (64) with Eq. (89), and also that λ_1 = P_± satisfies Eq. (92), i.e., the square root in Eq. (64) being equal to zero. It is possible to further define P_ex by putting δ = −σ_1 sgn(A_t^H ω_φ^H − ω_φ^H) and W = W_ex, where W_ex has the following relation to W_end: …

One can make sure that A_1(E_1, λ_1, m_1) reaches an extremum with respect to λ_1 for λ_1 = P_ex.

References

[1] R. Penrose, Gravitational collapse: The role of general relativity, Rivista Nuovo Cimento, Numero Speziale 1, 252 (1969); General Relativity Gravitation 34, 1141 (2002).
[2] D. Christodoulou, Reversible and irreversible transformations in black-hole physics, Phys. Rev. Lett. 25, 1596 (1970).
[3] J. M. Bardeen, W. H. Press, and S. A. Teukolsky, Rotating black holes: Locally nonrotating frames, energy extraction, and scalar synchrotron radiation, Astrophys. J. 178, 347 (1972).
[4] R. M. Wald, Energy limits on the Penrose process, Astrophys. J. 191, 231 (1974).
[5] R. M. Wald, Black hole in a uniform magnetic field, Phys. Rev. D 10, 1680 (1974).
[6] S. M. Wagh, S. V. Dhurandhar, and N. Dadhich, Revival of the Penrose process for astrophysical applications, Astrophys. J. 290, 12 (1985).
[7] G. Denardo and R. Ruffini, On the energetics of Reissner-Nordström geometries, Phys. Lett. B 45, 259 (1973).
[8] T. Piran, J. Shaham, and J. Katz, High efficiency of the Penrose mechanism for particle collisions, Astrophys. J. Lett. 196, L107 (1975).
[9] T. Piran and J. Shaham, Upper bounds on collisional Penrose processes near rotating black-hole horizons, Phys. Rev. D 16, 1615 (1977).
[10] O. B. Zaslavskii, Is the super-Penrose process possible near black holes?, Phys. Rev. D 93, 024056 (2016); arXiv:1511.07501 [gr-qc].
[11] M. Bañados, J. Silk, and S. M. West, Kerr black holes as particle accelerators to arbitrarily high energy, Phys. Rev. Lett. 103, 111102 (2009); arXiv:0909.0169 [hep-ph].
[12] F. Hejda and J. Bičák, Kinematic restrictions on particle collisions near extremal black holes: A unified picture, Phys. Rev. D 95, 084055 (2017); arXiv:1612.04959 [gr-qc].
[13] T. Harada and M. Kimura, Collision of an innermost stable circular orbit particle around a Kerr black hole, Phys. Rev. D 83, 024002 (2011); arXiv:1010.0962 [gr-qc].
[14] O. B. Zaslavskii, Circular orbits and acceleration of particles by near-extremal dirty rotating black holes: general approach, Classical Quant. Grav. 29, 205004 (2012); arXiv:1201.5351 [gr-qc].
[15] J. D. Schnittman, The collisional Penrose process, General Relativity Gravitation 50, 77 (2018); arXiv:1910.02800 [astro-ph.HE].
[16] E. Berti, V. Cardoso, L. Gualtieri, F. Pretorius, and U. Sperhake, Comment on "Kerr Black Holes as Particle Accelerators to Arbitrarily High Energy", Phys. Rev. Lett. 103, 239001 (2009); arXiv:0911.2243 [gr-qc].
[17] T. Harada and M. Kimura, Black holes as particle accelerators: a brief review, Classical Quant. Grav. 31, 243001 (2014); arXiv:1409.7502 [gr-qc].
[18] M. Bejger, T. Piran, M. Abramowicz, and F. Håkanson, Collisional Penrose process near the horizon of extreme Kerr black holes, Phys. Rev. Lett. 109, 121101 (2012); arXiv:1205.4350 [astro-ph.HE].
[19] T. Harada, H. Nemoto, and U. Miyamoto, Upper limits of particle emission from high-energy collision and reaction near a maximally rotating Kerr black hole, Phys. Rev. D 86, 024027 (2012); arXiv:1205.7088 [gr-qc].
[20] O. B. Zaslavskii, Energetics of particle collisions near dirty rotating extremal black holes: Banados-Silk-West effect versus Penrose process, Phys. Rev. D 86, 084030 (2012); arXiv:1205.4410 [gr-qc].
[21] J. D. Schnittman, Revised upper limit to energy extraction from a Kerr black hole, Phys. Rev. Lett. 113, 261102 (2014); arXiv:1410.6446 [astro-ph.HE].
[22] K. Ogasawara, T. Harada, and U. Miyamoto, High efficiency of collisional Penrose process requires heavy particle production, Phys. Rev. D 93, 044054 (2016); arXiv:1511.00110 [gr-qc].
[23] O. B. Zaslavskii, Maximum efficiency of the collisional Penrose process, Phys. Rev. D 94, 064048 (2016); arXiv:1607.00651 [gr-qc].
[24] M. Patil, T. Harada, K.-i. Nakao, P. S. Joshi, and M. Kimura, Infinite efficiency of the collisional Penrose process: Can a overspinning Kerr geometry be the source of ultrahigh-energy cosmic rays and neutrinos?, Phys. Rev. D 93, 104015 (2016); arXiv:1510.08205 [gr-qc].
[25] I. V. Tanatarov and O. B. Zaslavskii, Collisional super-Penrose process and Wald inequalities, General Relativity Gravitation 49, 119 (2017); arXiv:1611.05912 [gr-qc].
[26] I. V. Tanatarov and O. B. Zaslavskii, Bañados-Silk-West effect with nongeodesic particles: Extremal horizons, Phys. Rev. D 88, 064036 (2013); arXiv:1307.0034 [gr-qc].
[27] S. Liberati, C. Pfeifer, and J. J. Relancio, Exploring black holes as particle accelerators in realistic scenarios, arXiv:2106.01385 [gr-qc].
[28] O. B. Zaslavskii, Acceleration of particles by nonrotating charged black holes, JETP Letters 92, 571 (2010); arXiv:1007.4598 [gr-qc].
[29] O. B. Zaslavskii, Energy extraction from extremal charged black holes due to the Banados-Silk-West effect, Phys. Rev. D 86, 124039 (2012); arXiv:1207.5209 [gr-qc].
[30] F. Hejda, J. Bičák, and O. B. Zaslavskii, Extraction of energy from an extremal rotating electrovacuum black hole: Particle collisions along the axis of symmetry, Phys. Rev. D 100, 064041 (2019); arXiv:1904.02035 [gr-qc].
[31] O. B. Zaslavskii, General limitations on trajectories suitable for super-Penrose process, Europhys. Lett. 111, 50004 (2015); arXiv:1506.06527 [gr-qc].
[32] O. B. Zaslavskii, Acceleration of particles as a universal property of rotating black holes, Phys. Rev. D 82, 083004 (2010); arXiv:1007.3678 [gr-qc].
[33] O. B. Zaslavskii, Near-horizon circular orbits and extremal limit for dirty rotating black holes, Phys. Rev. D 92, 044017 (2015); arXiv:1506.00148 [gr-qc].
[34] T. Harada and M. Kimura, Collision of two general geodesic particles around a Kerr black hole, Phys. Rev. D 83, 084041 (2011); arXiv:1102.3316 [gr-qc].
[35] J. Bičák and F. Hejda, Near-horizon description of extremal magnetized stationary black holes and Meissner effect, Phys. Rev. D 92, 104006 (2015); arXiv:1510.01911 [gr-qc].
[36] B. Zhang, Mergers of charged black holes: Gravitational-wave events, short gamma-ray bursts, and fast radio bursts, Astrophys. J. Lett. 827, L31 (2016); arXiv:1602.04542 [astro-ph.HE].
[37] T. Liu, G. E. Romero, M.-L. Liu, and A. Li, Fast radio bursts and their gamma-ray or radio afterglows as Kerr-Newman black hole binaries, Astrophys. J. 826, 82 (2016); arXiv:1602.06907 [astro-ph.HE].
[38] B. Punsly and D. Bini, General relativistic considerations of the field shedding model of fast radio bursts, Mon. Not. R. Astron. Soc. 459, L41 (2016); arXiv:1603.05509 [astro-ph.HE].
[39] A. Nathanail, E. R. Most, and L. Rezzolla, Gravitational collapse to a Kerr-Newman black hole, Mon. Not. R. Astron. Soc. 469, L31 (2017); arXiv:1703.03223 [astro-ph.HE].
[]
[]
[ "Maria Chudnovsky \nColumbia University\n10027New YorkNYUSA\n", "Peter Maceli \nColumbia University\n10027New YorkNYUSA\n" ]
[ "Columbia University\n10027New YorkNYUSA", "Columbia University\n10027New YorkNYUSA" ]
[]
Let G be the class of all graphs with no induced four-edge path or four-edge antipath. Hayward and Nastos [6] conjectured that every prime graph in G not isomorphic to the cycle of length five is either a split graph or contains a certain useful arrangement of simplicial and antisimplicial vertices. In this paper we give a counterexample to their conjecture, and prove a slightly weaker version. Additionally, applying a result of the first author and Seymour [1] we give a short proof of Fouquet's result [3] on the structure of the subclass of bull-free graphs contained in G.
10.1002/jgt.21763
[ "https://arxiv.org/pdf/1302.0404v2.pdf" ]
14,798,621
1302.0404
dcf62c9d171f987d68632a5b46d5aa7025b4a8b9
Simplicial vertices in graphs with no induced four-edge path or four-edge antipath, and the H 6 -conjecture

Maria Chudnovsky (Columbia University, New York, NY 10027, USA)
Peter Maceli (Columbia University, New York, NY 10027, USA)

28 May 2013; October 16, 2018

Abstract. Let G be the class of all graphs with no induced four-edge path or four-edge antipath. Hayward and Nastos [6] conjectured that every prime graph in G not isomorphic to the cycle of length five is either a split graph or contains a certain useful arrangement of simplicial and antisimplicial vertices. In this paper we give a counterexample to their conjecture, and prove a slightly weaker version. Additionally, applying a result of the first author and Seymour [1] we give a short proof of Fouquet's result [3] on the structure of the subclass of bull-free graphs contained in G.

Introduction

All graphs in this paper are finite and simple. Let G be a graph. The complement G of G is the graph with vertex set V (G), such that two vertices are adjacent in G if and only if they are non-adjacent in G. For a subset X of V (G), we denote by G[X] the subgraph of G induced by X, that is, the subgraph of G with vertex set X such that two vertices are adjacent in G[X] if and only if they are adjacent in G. Let H be a graph. If G has no induced subgraph isomorphic to H, then we say that G is H-free. If G is not H-free, G contains H, and a copy of H in G is an induced subgraph of G isomorphic to H. For a family F of graphs, we say that G is F-free if G is F-free for every F ∈ F. We denote by P n+1 the path with n + 1 vertices and n edges, that is, the graph with distinct vertices {p 0 , ..., p n } such that p i is adjacent to p j if and only if |i − j| = 1. For a graph H, and a subset X of V (G), if G[X] is a copy of H in G, then we say that X is an H. By convention, when explicitly describing a path we will list the vertices in order.
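The basic definitions above translate directly into code. A small Python sketch (the graph representation and helper names are ours, not from the paper):

```python
def complement(adj):
    """Complement of a simple graph given as {vertex: set_of_neighbors}."""
    verts = set(adj)
    return {v: verts - adj[v] - {v} for v in verts}

def induced(adj, subset):
    """Subgraph induced on a subset of the vertices."""
    s = set(subset)
    return {v: adj[v] & s for v in s}

# Example: the path P4 = p0-p1-p2-p3 is self-complementary; in the
# complement its vertices again form a path, taken in the order 2, 0, 3, 1.
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
co_p4 = complement(p4)
```

Since P 4 is self-complementary, a graph is P 4 -free if and only if its complement is, which is implicitly used when results here are "passed to the complement."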
In this paper we are interested in understanding the class of {P 5 , P 5 }-free graphs. Let A and B be disjoint subsets of V (G). For a vertex b ∈ V (G) \ A, we say that b is complete to A if b is adjacent to every vertex of A, and that b is anticomplete to A if b is non-adjacent to every vertex of A. If every vertex of A is complete to B, we say A is complete to B, and that A is anticomplete to B if every vertex of A is anticomplete to B. If b ∈ V (G) \ A is neither complete nor anticomplete to A, we say that b is mixed on A. A homogeneous set in a graph G is a subset X of V (G) with 1 < |X| < |V (G)| such that no vertex of V (G)\X is mixed on X. We say that a graph is prime if it has at least four vertices, and no homogeneous set. Let us now define the substitution operation. Given graphs H 1 and H 2 , on disjoint vertex sets, each with at least two vertices, and v ∈ V (H 1 ), we say that H is obtained from H 1 by substituting H 2 for v, or obtained from H 1 and H 2 by substitution (when the details are not important) if: • V (H) = (V (H 1 ) ∪ V (H 2 )) \ {v}, • H[V (H 2 )] = H 2 , • H[V (H 1 ) \ {v}] = H 1 [V (H 1 ) \ {v}], and • u ∈ V (H 1 ) is adjacent in H to w ∈ V (H 2 ) if and only if w is adjacent to v in H 1 . Thus, a graph G is obtained from smaller graphs by substitution if and only if G is not prime. Since P 5 and P 5 are both prime, it follows that if H 1 and H 2 are {P 5 , P 5 }-free graphs, then any graph obtained from H 1 and H 2 by substitution is {P 5 , P 5 }-free. Hence, in this paper we are interested in understanding the class of prime {P 5 , P 5 }-free graphs. Let C n denote the cycle of length n, that is, the graph with distinct vertices {c 1 , ..., c n } such that c i is adjacent to c j if and only if |i − j| = 1 or n − 1. A theorem of Fouquet [3] tells us that: 1.1. Any {P 5 , P 5 }-free graph that contains C 5 is either isomorphic to C 5 or has a homogeneous set. 
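Induced-subgraph containment, which underlies all the classes discussed here, can be tested by brute force on small graphs. An illustrative Python sketch (names are ours; exponential time, fine for hand-sized examples) checking for an induced path on k vertices, which confirms that C 5 contains an induced P 4 but no induced P 5 :

```python
from itertools import combinations, permutations

def is_induced_path(adj, verts):
    """Do the vertices, in the given order, induce a path (edges iff consecutive)?"""
    return all(
        (verts[j] in adj[verts[i]]) == (j - i == 1)
        for i in range(len(verts)) for j in range(i + 1, len(verts))
    )

def contains_induced_Pk(adj, k):
    """Brute force: does the graph contain an induced path on k vertices?"""
    return any(
        is_induced_path(adj, order)
        for subset in combinations(adj, k)
        for order in permutations(subset)
    )

# C5: every 5-vertex ordering uses all five edges, so no induced P5 exists.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

Since C 5 is self-complementary, the same check on the complement shows it has no induced four-edge antipath either, consistent with 1.1.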
That is, C 5 is the unique prime {P 5 , P 5 }-free graph that contains C 5 , and so we concern ourselves with prime {P 5 , P 5 , C 5 }-free graphs, the main subject of this paper. Let G be a graph. A clique in G is a set of vertices all pairwise adjacent. A stable set in G is a set of vertices all pairwise non-adjacent. The neighborhood of a vertex v ∈ V (G) is the set of all vertices adjacent to v, and is denoted N(v). A vertex v is simplicial if N(v) is a clique. A vertex v is antisimplicial if V (G) \ N(v) is a stable set, that is, if and only if v is a simplicial vertex in the complement. In [6] Hayward and Nastos proved:

1.2. If G is a prime {P 5 , P 5 , C 5 }-free graph, then there exists a copy of P 4 in G whose vertices of degree one are simplicial, and whose vertices of degree two are antisimplicial.

A graph G is a split graph if there is a partition V (G) = A ∪ B such that A is a stable set and B is a clique. Földes and Hammer [2] showed:

1.3. A graph G is a split graph if and only if G is a {C 4 , C 4 , C 5 }-free graph.

Drawn in Figure 1 with its complement, H 6 is the graph with vertex set {v 1 , v 2 , v 3 , v 4 , v 5 , v 6 } and edge set {v 1 v 2 , v 2 v 3 , v 3 v 4 , v 2 v 5 , v 3 v 6 , v 5 v 6 }. Hayward and Nastos conjectured the following:

1.4 (The H 6 -Conjecture). If G is a prime {P 5 , P 5 , C 5 }-free graph which is not split, then there exists a copy of H 6 in G or G whose two vertices of degree one are simplicial, and whose two vertices of degree three are antisimplicial.

First, in Figure 2 we provide a counterexample to 1.4. On the other hand, we prove the following slightly weaker version:

1.5. If G is a prime {P 5 , P 5 , C 5 }-free graph which is not split, then there exists a copy of H 6 in G or G whose two vertices of degree one are simplicial, and at least one of whose vertices of degree three is antisimplicial.
We say that a graph G admits a 1-join if V (G) can be partitioned into four non-empty pairwise disjoint sets (A, B, C, D), where A is anticomplete to C ∪ D, and B is complete to C and anticomplete to D. In trying to use 1.5 to improve upon 1.1 we conjectured the following:

1.6. If G is a {P 5 , P 5 }-free graph, then either
• G is isomorphic to C 5 , or
• G is a split graph, or
• G has a homogeneous set, or
• G or G admits a 1-join.

However, 1.6 does not hold, and we give a counterexample in Figure 3. The bull is the graph with vertex set {x 1 , x 2 , x 3 , y, z} and edge set {x 1 x 2 , x 2 x 3 , x 1 x 3 , x 1 y, x 2 z}. Lastly, applying a result of the first author and Seymour [1] we give a short proof of 1.1, and of Fouquet's result [3] on the structure of {P 5 , P 5 , bull}-free graphs.

This paper is organized as follows. Section 2 contains results about the existence of simplicial and antisimplicial vertices in {P 5 , P 5 }-free graphs. In Section 3 we give a counterexample to the H 6 -conjecture 1.4, and prove 1.5, a slightly weaker version of the conjecture. We also give a simpler proof of 1.2, and provide a counterexample to 1.6. Finally, in Section 4 we give a new proof of 1.1, and a structure theorem for {P 5 , P 5 , bull}-free graphs.

Simplicial and Antisimplicial vertices

In this section we prove the following result:

2.1. All prime {P 5 , P 5 , C 5 }-free graphs have both a simplicial vertex, and an antisimplicial vertex.

Along the way we establish 2.9, a result which is helpful in finding simplicial and antisimplicial vertices in prime {P 5 , P 5 }-free graphs. Let G be a graph. We say G is connected if V (G) cannot be partitioned into two disjoint sets anticomplete to each other. If the complement G of G is connected, we say that G is anticonnected. Let X ⊆ Y ⊆ V (G). We say X is a connected subset of Y if G[X] is connected, and that X is an anticonnected subset of Y if G[X] is anticonnected.
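Simplicial and antisimplicial vertices, the subject of 2.1, can be verified directly from the definitions. A brute-force Python sketch (helper names are ours), illustrated on a four-vertex path, where the endpoints are simplicial and the midpoints antisimplicial:

```python
from itertools import combinations

def is_clique(adj, verts):
    return all(v in adj[u] for u, v in combinations(verts, 2))

def is_stable(adj, verts):
    return all(v not in adj[u] for u, v in combinations(verts, 2))

def is_simplicial(adj, v):
    """v is simplicial if its neighborhood is a clique."""
    return is_clique(adj, adj[v])

def is_antisimplicial(adj, v):
    """v is antisimplicial if its non-neighbors form a stable set."""
    return is_stable(adj, set(adj) - adj[v] - {v})

# Path a-b-c-d: a and d are simplicial, b and c are antisimplicial.
p4 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
```

On larger {P 5 , P 5 , C 5 }-free examples, these checks make it easy to test candidates for the vertices promised by 2.1.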
A component of X is a maximal connected subset of X, and an anticomponent of X is a maximal anticonnected subset of X. First, we make the following three easy observations: 2.2. If G is a prime graph, then G is connected and anticonnected. Proof. Passing to the complement if necessary, we may suppose G is not connected. Since G has at least four vertices, there exists a component C of V (G) such that |V (G) \ C| ≥ 2. However, then V (G) \ C is a homogeneous set, a contradiction. This proves 2.2. We say a vertex v ∈ V (G) \ X is mixed on an edge of X, if there exist adjacent x, y ∈ X such that v is mixed on {x, y}. Similarly, a vertex v ∈ V (G) \ X is mixed on a non-edge of X, if there exist non-adjacent x, y ∈ X such that v is mixed on {x, y}. 2.3. Let G be a graph, X ⊆ V (G), and suppose v ∈ V (G) \ X is mixed on X. 1. If X is a connected subset of V (G), then v is mixed on an edge of X. 2. If X is an anticonnected subset of V (G), then v is mixed on a non-edge of X. Proof. Suppose X is a connected subset of V (G). Since v is mixed on X, both X ∩ N(v) and X \ N(v) are non-empty. As G[X] is connected, there exists an edge given by x ∈ X ∩ N(v) and y ∈ X \ N(v), and v is mixed on {x, y}. This proves 2.3.1. Passing to the complement, we get 2.3.2. 2.4. Let G be a graph, X 1 , X 2 ⊆ V (G) with X 1 ∩ X 2 = ∅, and v ∈ V (G) \ (X 1 ∪ X 2 ). 1. If G is P 5 -free, and X 1 , X 2 are connected subsets of V (G) anticomplete to each other, then v is not mixed on both X 1 and X 2 . 2. If G is P 5 -free, and X 1 , X 2 are anticonnected subsets of V (G) complete to each other, then v is not mixed on both X 1 and X 2 . Proof. Suppose G is P 5 -free, X 1 , X 2 are disjoint connected subsets of V (G) anticomplete to each other, and v is mixed on both X 1 and X 2 . By 2.3.1, v is mixed on an edge of X 1 , given by say x 1 , y 1 ∈ X 1 with v adjacent to x 1 and non-adjacent to y 1 , and an edge of X 2 , given by say x 2 , y 2 ∈ X 2 with v adjacent to x 2 and non-adjacent to y 2 . 
However, then {y 1 , x 1 , v, x 2 , y 2 } is a P 5 , a contradiction. This proves 2.4.1. Passing to the complement, we get 2.4.2.

2.6. Let u, v and w be three pairwise non-adjacent vertices in a {P 5 , P 5 }-free graph G such that w is mixed on an anticonnected subset A of N(u) ∩ N(v). Then no vertex z ∈ N(w) \ (A ∪ {u, v}) can be mixed on {u, v}.

Proof. Suppose there exists a vertex z ∈ N(w) \ (A ∪ {u, v}) which is mixed on {u, v}, with say z adjacent to v and non-adjacent to u. Since w is mixed on A, by 2.3.2, it follows that w is mixed on a non-edge of A, given by say x, y ∈ A with w adjacent to x and non-adjacent to y. By 2.5, z is not mixed on A. However, if z is anticomplete to A, then {y, u, x, w, z} is a P 5 , and if z is complete to A, then {x, y, w, u, z} is a P 5 , in both cases a contradiction. This proves 2.6.

Now, we can start to prove 2.1.

2.7. Let G be a prime {P 5 , P 5 , C 5 }-free graph. Then G has an antisimplicial vertex, or admits a 1-join.

Proof. Suppose G does not admit a 1-join. Let W be a maximal subset of vertices that has a partition A 1 ∪ ... ∪ A k with k ≥ 2 such that:
• A 1 , ..., A k are all anticonnected subsets of V (G), and
• A 1 , ..., A k are pairwise complete to each other.

(1) V (G) \ W is non-empty.

By 2.2, G is anticonnected, which implies that V (G) \ W is non-empty. This proves (1).

(2) Every v ∈ V (G) \ W is either anticomplete to or mixed on A i for each i ∈ {1, ..., k}.

Suppose v ∈ V (G) \ W is complete to some A i . Take B to be the union of all the A j to which v is complete. However, since {v} ∪ (W \ B) is anticonnected and complete to B, it follows that W ′ = B ∪ ({v} ∪ (W \ B)) contradicts the maximality of W . This proves (2).

(3) If for some i ∈ {1, ..., k}, v ∈ V (G) \ W is mixed on A i , then v is anticomplete to W \ A i .

By 2.4.2, any v ∈ V (G) \ W is mixed on at most one A i , and so together with (2) this proves (3).

(4) Every vertex in V (G) \ W is mixed on exactly one A i , for some i ∈ {1, ..., k}.

Suppose not.
Let X ⊆ V (G) \ W be the set of vertices anticomplete to W , which is non-empty by (2) and (3). By 2.2, G is connected, and so there exists an edge given by v ∈ X and u ∈ V (G) \ (X ∪ W ). By (2), u is mixed on some A i , and so, by 2.3.2, u is mixed on a non-edge of A i , given by say x i , y i ∈ A i with u adjacent to x i and non-adjacent to y i . However, by (3), u is anticomplete to W \ A i , and so for j ≠ i and a vertex z ∈ A j we get that {v, u, x i , z, y i } is a P 5 , a contradiction. This proves (4).

And so, by (3) and (4), we can partition V (G) = A 1 ∪ ... ∪ A k ∪ B 1 ∪ ... ∪ B k , where each B i is the set of vertices of V (G) \ W mixed on A i .

(5) For i ≠ j, B i is anticomplete to B j .

Suppose for i ≠ j, b i ∈ B i is adjacent to b j ∈ B j . By 2.3.2, b i is mixed on a non-edge of A i , given by say x i , y i ∈ A i with b i adjacent to x i and non-adjacent to y i . As b j is mixed on A j , there exists x j ∈ A j non-adjacent to b j ; however, then {b j , b i , x i , x j , y i } is a P 5 , a contradiction. This proves (5).

(6) Exactly one B i is non-empty.

By (1) and (4), at least one B i is non-empty. Suppose for i ≠ j, B i and B j are both non-empty. Then, by (5), A = B i , B = A i , C = (A 1 ∪ ... ∪ A k ) \ A i and D = (B 1 ∪ ... ∪ B k ) \ B i is a 1-join, a contradiction. This proves (6).

Hence, by (6), we may assume B 1 is non-empty while B 2 , ..., B k are all empty.

(7) k = 2 and |A 2 | = 1.

Since A 2 ∪ ... ∪ A k is not a homogeneous set, (6) implies that k = 2 and |A 2 | = 1. This proves (7).

Let a be the vertex in A 2 .

(8) B 1 is a stable set.

Suppose not. Then there exists a component B of B 1 with |B| > 1. Since a is anticomplete to B 1 , and B is a component of B 1 , as G is prime, it follows that there exists a 1 ∈ A 1 which is mixed on B. Thus, by 2.3.1, a 1 is mixed on an edge of B, given by say b, b ′ ∈ B with a 1 adjacent to b and non-adjacent to b ′ . Next, partition A 1 = C ∪ D with C = A 1 ∩ (N(b) \ N(b ′ )) and D = A 1 \ C, where both C and D are non-empty, as a 1 ∈ C and b ′ is mixed on A 1 .
Since A 1 is anticonnected there exists a non-edge given by c ∈ C and d ∈ D. However, since d ∈ D, it follows that {d, a, c, b, b ′ } is either a P 5 , P 5 or C 5 , a contradiction. This proves (8). Thus, by (8), a is an antisimplicial vertex. This proves 2.7. Next, we observe: 2.8. Let u and v be non-adjacent vertices in a prime P 5 -free graph G. Then either • N(u) ∩ N(v) is a clique, or • there exists a vertex w ∈ V (G) \ (N(u) ∪ N(v) ∪ {u, v}) which is mixed on an anticonnected subset of N(u) ∩ N(v). A useful consequence of 2.8 is the following: 2.9. Let v be a vertex in a prime {P 5 , P 5 }-free graph G. 1. If v is antisimplicial, and we choose u non-adjacent to v such that |N(u) ∩ N(v)| is minimum, then u is a simplicial vertex. 2. If v is simplicial, and we choose u adjacent to v such that |N(u) ∪ N(v)| is maximum, then u is an antisimplicial vertex. Proof. Suppose v is antisimplicial, we choose u non-adjacent to v such that |N(u) ∩ N(v)| is minimum, and u is not simplicial. Since v is antisimplicial, it follows that N(u) \ N(v) is empty, and thus, as u is not simplicial, N(u) ∩ N(v) is not a clique. Hence, by 2.8, there exists some w, non-adjacent to both u and v, which is mixed on an anticonnected subset of N(u) ∩ N(v). However, then, by our choice of u, there exists a vertex z ∈ N(v) \ N(u) adjacent to w, contradicting 2.6. This proves 2.9.1. Passing to the complement, we get 2.9.2. 2.10. Let G be a prime {P 5 , P 5 , C 5 }-free graph. Then G has a simplicial vertex, or an antisimplicial vertex. Proof. Suppose G does not have an antisimplicial vertex. Then, by 2.7, it admits a 1-join (A, B, C, D). (1) A and D are stable sets. By symmetry, it suffices to argue that A is a stable set. Suppose not. Then there exists a component A ′ of A with |A ′ | > 1. Since C ∪ D is anticomplete to A, and A ′ is a component of A, as G is prime, it follows that there exists b ∈ B which is mixed on A ′ .
Thus, by 2.3.1, b is mixed on an edge of A ′ , given by say a, a ′ ∈ A ′ with b adjacent to a ′ and non-adjacent to a. By 2.2, G is connected, and so there exists an edge given by c ∈ C and d ∈ D. However, then {a, a ′ , b, c, d} is a P 5 , a contradiction. This proves (1). Next, fix some c ∈ C, and choose a vertex a ∈ A such that |N(a) ∩ N(c)| is minimum. (2) a is a simplicial vertex. Suppose not. Then, by (1), N(a) ∩ N(c) = N(a) ⊆ B is not a clique, and so, by 2.8, there exists w, non-adjacent to both a and c, which is mixed on an anticonnected subset of N(a) ∩ N(c). Since B is complete to C and anticomplete to D, it follows that w belongs to A. However, then, by our choice of a, there exists a vertex z ∈ N(c) \ N(a) adjacent to w, contradicting 2.6. This proves (2). This completes the proof of 2.10. Putting things together we can now prove 2.1. Proof of 2.1. By 2.10, passing to the complement if necessary, there exists an antisimplicial vertex a. And so, by 2.9.1, if we choose s non-adjacent to a such that |N(a) ∩ N(s)| is minimum, then s is simplicial. This proves 2.1. 3. The H 6 -Conjecture. In this section we give a counterexample to the H 6 -conjecture 1.4, and prove 1.5, a slightly weaker version of the conjecture. We also give a proof of 1.2, and provide a counterexample to 1.6. We begin by establishing some properties of prime graphs. Recall the following theorem of Seinsche [7]: 3.1. If G is a P 4 -free graph with at least two vertices, then G is either not connected or not anticonnected. Together, 2.2 and 3.1 imply the following: 3.2. Every prime graph contains P 4 . Next, as first shown by Hoàng and Khouzam [4], we observe that: 3.3. Let G be a prime graph. 1. A vertex v ∈ V (G) is simplicial if and only if v is a degree one vertex in every copy of P 4 in G containing it. 2. A vertex v ∈ V (G) is antisimplicial if and only if v is a degree two vertex in every copy of P 4 in G containing it. Proof. Both forward implications are clear.
To prove the converse of 3.3.1, suppose there exists a vertex v which is not simplicial and yet is a degree one vertex in every copy of P 4 in G containing it. Then there exists an anticomponent A of N(v) with |A| > 1. Since v is complete to A, and A is an anticomponent of N(v), as G is prime, it follows that there exists u ∈ V (G) \ (N(v) ∪ {v}) which is mixed on A. Thus, by 2.3.2, u is mixed on a non-edge of A, given by say x, y ∈ A with u adjacent to x and non-adjacent to y. However, then {y, v, x, u} is a P 4 with v having degree two, a contradiction. This proves 3.3.1. Passing to the complement, we get 3.3.2. Finally, we observe that: 3.4. Let G be a prime graph. 1. The set of antisimplicial vertices in G is a clique. 2. The set of simplicial vertices in G is a stable set. Proof. Suppose there exist non-adjacent antisimplicial vertices a, a ′ ∈ V (G). Since a is antisimplicial, it follows that N(a ′ ) \ N(a) is empty. Similarly, N(a) \ N(a ′ ) is also empty. However, this implies that {a, a ′ } is a homogeneous set in G, a contradiction. This proves 3.4.1. Passing to the complement, we get 3.4.2. 3.5. Let G be a prime {P 5 , P 5 , C 5 }-free graph. Let A be the set of antisimplicial vertices in G, and let S be the set of simplicial vertices in G. Then G[A ∪ S] is a split graph which is both connected and anticonnected. Proof. 3.4 implies that G[A ∪ S] is a split graph, where A is a clique and S is a stable set. By 2.9.1, every vertex in A has a non-neighbor in S, and, by 2.9.2, every vertex in S has a neighbor in A. Thus, G[A ∪ S] is both connected and anticonnected. This proves 3.5. We are finally ready to give a proof of 1.2, first shown in [6] by Hayward and Nastos. 3.6. If G is a prime {P 5 , P 5 , C 5 }-free graph, then there exists a copy of P 4 in G whose vertices of degree one are simplicial, and whose vertices of degree two are antisimplicial. Proof. Let A be the set of antisimplicial vertices in G, and let S be the set of simplicial vertices in G.
By 2.1, both A and S are non-empty. Hence, G[A ∪ S] is a graph with at least two vertices, which, by 3.5, is both connected and anticonnected, and so, by 3.1, it follows that G[A ∪ S] contains P 4 . Since 3.4 implies that A is a clique and S is a stable set, it follows that every copy of P 4 in G[A ∪ S] is of the desired form. This proves 3.6. Next, we turn our attention to the H 6 -conjecture. A result of Hoàng and Reed [5] implies the following: 3.7. If G is a prime {P 5 , P 5 , C 5 }-free graph which is not split, then G or G contains H 6 . In hopes of saying more along these lines, motivated by 3.6 and 3.7, Hayward and Nastos posed 1.4, which we restate: 3.8 (The H 6 -Conjecture). If G is a prime {P 5 , P 5 , C 5 }-free graph which is not split, then there exists a copy of H 6 in G or G whose two vertices of degree one are simplicial, and whose two vertices of degree three are antisimplicial. In Figure 2 we give a counterexample to 3.8. The graph G in Figure 2 contains C 4 , and so, by 1.3, is not split. The mapping φ : V (G) → V (G) given by φ := (1 2 3 4 5 6 7 8 9 10 11 12) ↦ (3 1 4 2 7 5 8 6 11 9 12 10), where each vertex in the first row is mapped to the vertex below it, is an isomorphism between G and its complement G. Thus, as G is self-complementary, it suffices to check that G is P 5 -free, which is straightforward, as is verifying that G is prime, and we leave the details to the reader. The set of simplicial vertices in G is {1, 4}, and the set of antisimplicial vertices in G is {2, 3}. However, no copy of C 4 in G contains {2, 3}, and so there does not exist a copy of H 6 of the desired form. However, all is not lost as we can prove 1.5, a slightly weaker version of the H 6 -conjecture, which we restate: 3.9. If G is a prime {P 5 , P 5 , C 5 }-free graph which is not split, then there exists a copy of H 6 in G or G whose two vertices of degree one are simplicial, and at least one of whose vertices of degree three is antisimplicial. Proof.
By 3.6, there exist simplicial vertices s, s ′ , and antisimplicial vertices a, a ′ such that {s, a, a ′ , s ′ } is a P 4 in G. Now, choose maximal subsets A of antisimplicial vertices in G, and S of simplicial vertices in G such that a, a ′ ∈ A, s, s ′ ∈ S, every vertex in A has a neighbor in S, and every vertex in S has a non-neighbor in A. (1) Any graph containing a vertex which is both simplicial and antisimplicial is split. By definition, if a vertex v ∈ V (G) is both simplicial and antisimplicial, then N(v) is a clique and V (G) \ N(v) is a stable set. This proves (1). (2) There exists no vertex v ∈ V (G) \ (A ∪ S) adjacent to a vertex u ∈ S and non-adjacent to a vertex w ∈ A. Suppose not. If u is adjacent to w, then N(u) is not a clique, and if u is non-adjacent to w, then V (G) \ N(w) is not a stable set, in both cases a contradiction. This proves (2). By (1) and (2), we can partition V (G) = A ∪ S ∪ B ∪ C ∪ D, where B is the set of vertices complete to A and anticomplete to S, C is the set of vertices complete to A with a neighbor in S, and D is the set of vertices anticomplete to S with a non-neighbor in A. Recall 3.4 implies that A is a clique and S is a stable set. (3) No vertex of C ∪ D is simplicial or antisimplicial. Consider a vertex c ∈ C. Then there exists s c ∈ S adjacent to c. Hence, c is not antisimplicial, as otherwise we could add c to A contrary to maximality. By construction, s c has a non-neighbor a c ∈ A. Since c is complete to A, it follows that N(c) is not a clique, and thus c is not simplicial. Hence, C contains no simplicial or antisimplicial vertices. Passing to the complement, we get that no vertex in D is simplicial or antisimplicial. This proves (3). (4) We may assume that C is a clique, and D is a stable set. By symmetry, it is enough to argue that if D is not a stable set, then the theorem holds. Suppose we have an edge given by x, y ∈ D. By definition, any antisimplicial vertex is adjacent to at least one of x and y.
And so, as x and y both have non-neighbors in A, there exist a x , a y ∈ A such that a x is adjacent to x and non-adjacent to y, and a y is adjacent to y and non-adjacent to x. Since S is anticomplete to D, it follows that a x and a y do not have a common neighbor s ′′ ∈ S, as otherwise {a x , y, s ′′ , x, a y } is a P 5 . By construction, every vertex in A has a neighbor in S, and so there exists s x ∈ S adjacent to a x and non-adjacent to a y , and s y ∈ S adjacent to a y and non-adjacent to a x . However, then {s x , a x , a y , s y , x, y} is a copy of H 6 in G of the desired form. Passing to the complement, we may also assume that C is a clique. This proves (4). By (4), A ∪ C is a clique and D ∪ S is a stable set. (5) For all d ∈ D and u ∈ A, N(d) ⊆ N(u) ∪ {u}. Thus, for any d ∈ D, it follows that N(d) ⊆ A ∪ B ∪ C. Since A is complete to B, it follows that any a ∈ A is complete to (A \ {a}) ∪ B ∪ C. This proves (5). (6) We may assume both C and D are empty. By symmetry, it is enough to argue that if D is non-empty, then the theorem holds. Suppose D is non-empty, and choose d ∈ D with |N(d)| minimum. Then there exists a d ∈ A non-adjacent to d. By (3) and (5), N(a d ) ∩ N(d) = N(d) is not a clique, and so, by 2.8, there exists a vertex w, non-adjacent to both a d and d, which is mixed on an anticonnected subset of N(d). Since a d is complete to (A \ {a d }) ∪ B ∪ C, it follows that w ∈ D ∪ S. If w ∈ D, then, by our choice of d, there exists z ∈ N(w) \ N(d) which, by (5), is adjacent to a d , contradicting 2.6. Hence, w ∈ S. Since w is mixed on an anticonnected subset of N(d), by 2.3.2, w is mixed on a non-edge of N(d), given by say x, y ∈ N(d) with w adjacent to x and non-adjacent to y. Since A ∪ C is a clique, and B is complete to A and anticomplete to S, it follows that x ∈ C and y ∈ B. By construction, every vertex in A has a neighbor in S, and so there exists s d ∈ S adjacent to a d . Since s d is mixed on {a d , d} and non-adjacent to y, 2.5 implies that s d is anticomplete to {x, y}.
However, then {s d , a d , x, w, y, d} is a copy of H 6 in G of the desired form. Passing to the complement, we may also assume that C is empty. This proves (6). By (6), since G is prime, it follows that |B| ≤ 1, implying that G is a split graph, a contradiction. This proves 3.9. Another conjecture which seemed plausible for a while is 1.6, which we restate: 3.10. If G is a {P 5 , P 5 }-free graph, then either • G is isomorphic to C 5 , or • G is a split graph, or • G has a homogeneous set, or • G or G admits a 1-join. Putting things together we obtain Fouquet's original structural result [3]: 4.4. If G is a {P 5 , P 5 , bull}-free graph, then either • |V (G)| ≤ 2, or • G is isomorphic to C 5 , or • G has a homogeneous set, or • G or G is a half graph. Proof. As all graphs on three vertices have a homogeneous set, 4.4 immediately follows from 4.2 and 4.3. Figure 1: H 6 and H 6 . 2.5. Let u and v be non-adjacent vertices in a P 5 -free graph G, and let A be an anticonnected subset of N(u) ∩ N(v). Then no vertex w ∈ V (G) \ (A ∪ {u, v}) can be mixed on both A and {u, v}. Proof. Since A and {u, v} are disjoint anticonnected subsets of V (G) complete to each other, 2.5 follows from 2.4.2. Proof of 2.8. Suppose N(u) ∩ N(v) is not a clique. Then there exists an anticomponent A of N(u) ∩ N(v) with |A| > 1. Since {u, v} is complete to N(u) ∩ N(v), and A is an anticomponent of N(u) ∩ N(v), as G is prime, it follows that there exists w ∈ V (G) \ ((N(u) ∩ N(v)) ∪ {u, v}) which is mixed on A. Thus, by 2.5, w is not mixed on {u, v}, and so w is anticomplete to {u, v}. This proves 2.8. Figure 2: Counterexample to the H 6 -conjecture, where additionally {2, 3} is complete to {9, 10, 11, 12}.
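The vertex types that drive these arguments are mechanical to verify on small graphs. The following is a minimal sketch (the adjacency encoding and helper names are ours, not from the paper), checking on a P 4 that the degree-one vertices are simplicial and the degree-two vertices are antisimplicial, exactly the configuration produced by 3.6:

```python
from itertools import combinations

def is_simplicial(adj, v):
    # v is simplicial iff its neighbourhood N(v) induces a clique
    return all(b in adj[a] for a, b in combinations(adj[v], 2))

def is_antisimplicial(adj, v):
    # v is antisimplicial iff its non-neighbours form a stable set
    non = [u for u in adj if u != v and u not in adj[v]]
    return all(b not in adj[a] for a, b in combinations(non, 2))

# P4 with vertices 1-2-3-4 (adjacency as sets, symmetric)
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
simp = {v for v in adj if is_simplicial(adj, v)}
anti = {v for v in adj if is_antisimplicial(adj, v)}
```

Here simp comes out as {1, 4} (the degree-one vertices) and anti as {2, 3} (the degree-two vertices), matching 3.3 and 3.6.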
a contradiction. This proves 2.4.1. Passing to the complement, we get 2.4.2. As a consequence of 2.3 and 2.4 we obtain the following two useful results: Figure 3: Counterexample to Conjecture 3.10. Acknowledgement. We would like to thank Ryan Hayward, James Nastos, Irena Penev, Matthieu Plumettaz, Paul Seymour, and Yori Zwols for useful discussions, and Max Ehramn for telling us about the H 6 -conjecture. However, with Paul Seymour we found the counterexample in Figure 3. The graph in Figure 3 contains C 4 and C 4 , and so, by 1.3, is not split. 4. The structure of {P 5 , P 5 , bull}-free graphs. The following is joint work with Max Ehramn. Let O k be the bipartite graph on 2k vertices with bipartition ({a 1 , . . . , a k }, {b 1 , . . . , b k }). 4.1. Let G be a graph, and let H be a proper induced subgraph of G. Assume that both G and H are prime, and that both G and G are not half graphs. Then there exists an induced subgraph H ′ of G, isomorphic to H, and a vertex v ∈ V (G) \ V (H ′ ), such that G[V (H ′ ) ∪ {v}] is prime.
Next, we give a proof of Fouquet's result 1.1, which we restate: 4.2. If G is a prime {P 5 , P 5 }-free graph which contains C 5 , then G is isomorphic to C 5 . Proof. Suppose not, and so C 5 is a proper induced subgraph of G. Since C 5 is self-complementary, both G and G contain an odd cycle, hence are non-bipartite, and thus not half graphs. As C 5 is prime, by 4.1, there exists a subgraph H induced by {v 1 , v 2 , v 3 , v 4 , v 5 } isomorphic to C 5 , and a vertex v ∈ V (G) \ V (H) such that the subgraph of G induced by V (H) ∪ {v} is prime. Considering the complement, we may assume v is adjacent to at most two vertices in V (H). To avoid a homogeneous set in G[V (H) ∪ {v}], by symmetry, the only possibilities are for N(v) = {v 1 }, in which case {v, v 1 , v 2 , v 3 , v 4 } is a P 5 , or for N(v) = {v 1 , v 2 }, in which case {v, v 2 , v 3 , v 4 , v 5 } is a P 5 , in both cases a contradiction. This proves 4.2. Thus, to understand prime {P 5 , P 5 , bull}-free graphs it is enough to study prime {P 5 , P 5 , C 5 , bull}-free graphs. 4.3. If G is a prime {P 5 , P 5 , C 5 , bull}-free graph, then either G or G is a half graph.
Proof. Suppose not. By 3.2, G contains P 4 , which is isomorphic to O 2 . Since G and G are not half graphs, it follows that P 4 is a proper induced subgraph of G. As P 4 is prime, by 4.1, there exists a subgraph H induced by {v 1 , v 2 , v 3 , v 4 } isomorphic to P 4 , and a vertex v ∈ V (G) \ V (H) such that the subgraph of G induced by V (H) ∪ {v} is prime. Considering the complement, we may assume v is adjacent to at most two vertices in H. To avoid a homogeneous set in G[V (H) ∪ {v}], by symmetry, the only possibilities are for N(v) = {v 1 }, in which case {v, v 1 , v 2 , v 3 , v 4 } is a P 5 , for N(v) = {v 1 , v 4 }, in which case {v, v 1 , v 2 , v 3 , v 4 } is a C 5 , or for N(v) = {v 2 , v 3 }, in which case {v, v 1 , v 2 , v 3 , v 4 } is a bull, in all cases a contradiction. This proves 4.3.
References
[1] M. Chudnovsky and P. Seymour, Growing without cloning, SIDMA, 26 (2012), 860-880.
[2] S. Földes and P.L. Hammer, Split graphs, Congres. Numer., 19 (1977), 311-315.
[3] J.L. Fouquet, A decomposition for a class of (P 5 , P 5 )-free graphs, Discrete Math., 121 (1993), 75-83.
[4] C.T. Hoàng and N. Khouzam, On brittle graphs, J. Graph Theory, 12 (1988), 391-404.
[5] C.T. Hoàng and B.A. Reed, Some classes of perfectly orderable graphs, J. Graph Theory, 13 (1989), 445-463.
[6] J. Nastos, (P 5 , P 5 )-free graphs, masters thesis, Dept.
of Computer Science, University of Alberta, Edmonton, 2006.
[7] D. Seinsche, On a property of the class of n-colorable graphs, J. Comb. Theory B, 16 (1974), 191-193.
SHARP ESTIMATE OF ELECTRIC FIELD FROM A CONDUCTIVE ROD AND APPLICATION IN EIT
Xiaoping Fang, Youjun Deng, and Hongyu Liu
We are concerned with the quantitative study of the electric field perturbation due to the presence of an inhomogeneous conductive rod embedded in a homogenous conductivity. We sharply quantify the dependence of the perturbed electric field on the geometry of the conductive rod. In particular, we accurately characterise the localisation of the gradient field (i.e. the electric current) near the boundary of the rod where the curvature is sufficiently large. We develop layer-potential techniques in deriving the quantitative estimates and the major difficulty comes from the anisotropic geometry of the rod. The result complements and sharpens several existing studies in the literature. It also generates an interesting application in EIT (electrical impedance tomography) in determining the conductive rod by a single measurement, which is also known as the Calderón's inverse inclusion problem in the literature.
doi: 10.1111/sapm.12348
https://arxiv.org/pdf/2009.02525v2.pdf
arXiv: 2009.02525
Keywords: conductivity equation, rod inclusion, Neumann-Poincaré operator, asymptotic analysis, electrical impedance tomography, single measurement. 2010 Mathematics Subject Classification: 35Q60, 35J05, 31B10, 78A40.
1.1. Mathematical setup. We are concerned with the following conductivity problem:
∇ · (σ(x)∇u(x)) = 0 in R 2 , u(x) − H(x) = O(|x| −1 ) as |x| → ∞, (1.1)
where u(x) ∈ H 1 loc (R 2 ) is a potential field, and σ(x) is of the following form: σ := (σ 0 − 1)χ(D) + 1, σ 0 ∈ R + and σ 0 ≠ 1, (1.2) with D being a bounded domain with a connected complement R 2 \ D. H(x) in (1.1) is a (nontrivial) harmonic function in R 2 , which stands for a background potential. We are mainly concerned with the quantitative properties of the solution u(x) in (1.1) and, in particular, its dependence on the geometry of D. To that end, we next introduce the rod-geometry of D for our subsequent study. Let Γ 0 be a straight line segment of length L ∈ R + with Γ 0 = {(x 1 , 0); x 1 ∈ (−L/2, L/2)}. Let n := (0, 1) and define the two points P := (−L/2, 0) and Q := (L/2, 0).
Then the rod D is introduced as D = D a ∪ D f ∪ D b , where D f is defined by D f := {x; x = Γ 0 ± tn, t ∈ (−δ, δ)}, δ ∈ R + .(1. 3) The two end-caps D a and D b are two half disks with radius δ and centering at P and Q, respectively. More precisely, D a = {x; |x − P | < δ, x 1 < −L/2}, D b = {x; |x − Q| < δ, x 1 > L/2}. It can be verified that D is of class C 1,α for a certain α ∈ R + . In what follows, we define S c := ∂D c = ∂(D a ∪ D b ), and S f := ∂D f . Specially, S f = Γ 1 ∪ Γ 2 , where Γ j , j = 1, 2 are defined by Γ 1 = {x; x = Γ 0 − δn}, Γ 2 = {x; x = Γ 0 + δn}. (1.4) Moreover, we shall always use z x and z y to signify the projections of x ∈ S f and y ∈ S f on Γ 0 , respectively. The elliptic PDE system (1.1) describes the perturbation of an electric field H(x) due to the presence of a conductive body D. u(x) signifies the electric potential and σ(x) signifies the conductivity of the space. The homogeneous background space possesses a conductivity being normalized to be 1, whereas the conductive rod D possesses an inhomogeneous conductivity being σ 0 . The perturbed electric potential is u − H and the gradient field ∇(u − H) is the corresponding electric current. 1.2. Background discussion and literature review. The conductivity equation (1.1) is a fundamental problem in many existing studies. It is the master equation for the electrical impedance tomography (EIT) which is an important medical imaging modality and it can also find important applications in materials science; see e.g. [5,6,15,43] and the references cited therein. There are rich results in the literature devoted to the quantitative properties of the solution to (1.1) and its geometric relationship to the conductive inclusion D. In this work, we shall derive an accurate characterisation of the perturbed electric field u − H and its dependence on the geometry of D. There are mainly two motivations for our study as described in what follows. 
First, in [4,9,14,34], the authors studied the electric field perturbation from thin/slender structures, which originated from the study of imaging crack defects in EIT [4,9,41]. This is closely related to the current study. Indeed, the geometric setup in the aforementioned works is more general than the one considered in the present article. However, with the specific rod-geometry, we can derive an accurate characterisation of the perturbed electric field and its geometric dependence on D. In fact, in our asymptotic formula of the electric field u − H (with respect to δ ≪ 1), the leading-order term is exact and can be used to fully decode D. This is in sharp contrast to the existing studies mentioned above, which inevitably involve some qualitative estimates due to the more general geometries. The rod-geometry, though special, also possesses several local features that are worth our investigation, which is the second motivation of our study as described below. Clearly, the studies mentioned above are mainly concerned with extracting the global geometry of ∂D from the perturbed electric field u − H. On the other hand, there are studies on the relationships between the local geometry of ∂D and the perturbed field u − H. In fact, there are classical results concerning the singularities of the solution near a boundary corner point [28]. Roughly speaking, if ∂D possesses a corner, then the solution u locally around the corner point can be decomposed into the sum of a regular part and a singular part. Such a qualitative property has been used to establish novel uniqueness and stability results for the Calderón inverse inclusion problem in EIT by a single partial boundary measurement [17,37,38]. The Calderón inverse inclusion problem is a longstanding problem in the literature and we shall present more background discussion in Section 4.
In [18], the corner singularities of a conductive inclusion have been characterized in terms of the generalised polarisation tensors associated with the electric potential u, and the results are directly applied to EIT. Recently, in [39], the authors consider the case that ∂D is smooth, but possesses high-curvature points. In two dimensions, a high-curvature point means that the extrinsic curvature of the boundary curve ∂D at that point is sufficiently large. It is shown in [39] that the quantitative property of ∇u around the high-curvature point enables one to recover the local part of ∂D around that high-curvature point. However, the sharp curvature-dependence of ∇u in [39] is established through numerically refining the upper-bound estimate in [24]. As mentioned before, the rod-geometry possesses a few interesting local features that consolidate the numerical study in [39]. First, it is geometrically anisotropic, in that the two dimensions are of different scales. In fact, the curvature at the two end-caps (i.e. S c = ∂D a ∪ ∂D b ) is δ −1 ≫ 1, whereas the curvature at the facade part (i.e. S f ) is 0. Hence, the rod-geometry, though special, provides rich insights on the curvature dependence of the electric field with respect to the shape of the conductive inclusion. In fact, we shall see that the perturbed electric energy is localized at the two end-caps of the rod. Similar to [39], the result enables us to rigorously justify that one can uniquely determine the conductive rod by a single measurement in EIT. It is emphasized that in three dimensions or in the case that the rod is curved, the situation would become much more complicated. Hence, we mainly consider the case with a straight rod in two dimensions. Nevertheless, even in such a case, the mathematical analysis is technically involved and highly nontrivial. We shall develop layer potential techniques to tackle the problem.
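The anisotropy just described is easy to make concrete: the stadium-shaped rod D is exactly the open δ-neighbourhood of the segment Γ 0 , so membership in D and the two curvature scales of ∂D (δ −1 on the end-caps, 0 on the facade) can be computed directly. A small sketch (the function names and sample parameters are ours):

```python
import math

def dist_to_segment(x, L):
    # distance from a point x = (x1, x2) to the segment Γ0 = [-L/2, L/2] × {0}
    t = max(-L / 2, min(L / 2, x[0]))   # closest point (t, 0) on Γ0
    return math.hypot(x[0] - t, x[1])

def in_rod(x, L, delta):
    # D = D_a ∪ D_f ∪ D_b is the open δ-neighbourhood of Γ0:
    # two flat sides (the facade S_f) plus two half-disk caps of radius δ
    return dist_to_segment(x, L) < delta

def boundary_curvature(x, L, delta):
    # curvature of ∂D: 1/δ on the caps S_c, 0 on the facade S_f
    return 0.0 if abs(x[0]) <= L / 2 else 1.0 / delta

# example parameters: a slender rod, L = 2 and δ = 0.05, so δ⁻¹ = 20
L, delta = 2.0, 0.05
```

For δ ≪ L this makes the two scales explicit: points within δ of the segment (including inside the caps) belong to D, and the boundary curvature jumps from 0 to δ⁻¹ exactly where the facade meets the caps.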
It turns out that the so-called Neumann-Poincaré (NP) operator shall play a critical role in our analysis. We would like to mention that the NP operator and its spectral properties have received considerable attention recently in the literature due to its important applications in several intriguing fields of mathematical physics, including plasmon resonances and invisibility cloaking [7,8,10,11,19,20,23,26,29,32,31,35]. Finally, we would also like to mention in passing that more general rod-geometries were also considered in the literature in different contexts of physical importance [21,22,36]. The rest of the paper is organized as follows. In Section 2, we derive several auxiliary results and in Section 3, we present the main results on the quantitative analysis of the solution u to (1.1) with respect to the geometry of the inclusion D. Finally, we consider in Section 4 the application of the quantitative result derived in Section 3 to Calderón's inverse inclusion problem. 2. Some auxiliary results. In this section, we shall establish several auxiliary results for our subsequent use. We first present some preliminary knowledge on the layer potential operators for solving the conductivity problem (1.1), and we also refer to [6,40] for more related results and discussions. 2.1. Layer potentials. Let G be the radiating fundamental solution to the Laplacian ∆ in R 2 , which is given by G(x) = (1/(2π)) ln |x|. (2.1)
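For intuition about the NP operator defined in (2.3) below, it is instructive to discretize it on the simplest closed curve. For a circle of radius R the kernel ⟨x − y, ν x ⟩/|x − y| 2 is identically 1/(2R), so K* maps constants to (1/2) times themselves and annihilates mean-zero densities. A Nyström sketch on the unit circle (the discretization choices are ours, not from the paper) confirms this:

```python
import numpy as np

# Nyström discretization of the NP operator on the unit circle (R = 1).
N = 256
theta = 2 * np.pi * np.arange(N) / N
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # nodes x_i
nu = pts.copy()                                          # outward normals = x_i
w = 2 * np.pi / N                                        # trapezoidal arc weights

K = np.zeros((N, N))
for i in range(N):
    d = pts[i] - pts                   # rows: x_i - y_j
    r2 = np.sum(d * d, axis=1)
    r2[i] = 1.0                        # placeholder, diagonal fixed below
    K[i] = (1 / (2 * np.pi)) * (d @ nu[i]) / r2 * w
    K[i, i] = (1 / (4 * np.pi)) * w    # diagonal limit: curvature/(4π) = 1/(4π)

const = np.ones(N)
wave = np.sin(theta)                   # a mean-zero density
```

Here K @ const returns (1/2)·const and K @ wave vanishes (to machine precision), i.e. the discretized K* has the known disk spectrum {1/2, 0}.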
It is known that the single-layer potential operator S B is continuous across ∂B and satisfies the following trace formula ∂ ∂ν S B [φ] ± = ± 1 2 I + K * B [φ] on ∂B, (2.4) where ∂ ∂ν stands for the normal derivative and the subscripts ± indicate the limits from outside and inside of a given inclusion B, respectively. By using the layer-potential techniques, one can readily find the integral solution to (1.1) by u = H + S D [ϕ], (2.5) where the density function ϕ ∈ H −1/2 (∂D) is determined by ϕ = λI − K * D −1 ∂H ∂ν ∂D . (2.6) Here, the constant λ is defined by λ := σ 0 + 1 2(σ 0 − 1) . Asymptotic expansion of the Neumann-Poincaré operator. In what follows, we always suppose that δ 1. We shall present some asymptotic expansions of the Neumann-Poincaré operator with respect to δ. Recalling that ∂D = S a ∪ S f ∪ S b , we decompose the Neumann-Poincaré operator into several parts accordingly. To that end, we introduce the following boundary integral operator: K S,S [φ](x) := χ(S ) 1 2π S x − y, ν x |x − y| 2 φ(y)ds y , f or S ∩ S = ∅. (2.7) It is obvious that K S,S is a bounded operator from L 2 (S) to L 2 (S ). For the case S = S c , we mean S c = S a ∪ S b . In what follows, we define S a 1 and S b 1 by S a 1 = {x; |x − P | = 1, x 1 < −L/2}, S b 1 = {x; |x − Q| = 1, x 1 > L/2}. (2.8) For the subsequent use, we also introduce the following regions: ι δ (P ) := {x; |P − z x | = O(δ), x ∈ S f }, (2.9) ι δ (Q) := {x; |Q − z x | = O(δ), x ∈ S f }. (2.10) Defineφ(x) := φ(x), where x ∈ S a , S b andx ∈ S a 1 , S b 1 . We can prove the following result on the asymptotic expansion of the Neumann-Poincaré operator. Lemma 2.1. 
The Neumann-Poincaré operator K*_D admits the following asymptotic expansion:

  K*_D[φ](x) = K_0[φ](x) + δ K_1[φ](x) + O(δ²),  (2.11)

where K_0 is defined by

  K_0[φ](x) = χ(S_a)( K_{S_f,S_a}[φ](x) + (1/4π) ∫_{S_a^1} φ̃ ) + χ(S_b)( K_{S_f,S_b}[φ](x) + (1/4π) ∫_{S_b^1} φ̃ )
        + A_{Γ_2,Γ_1}[φ] + A_{Γ_1,Γ_2}[φ] + χ(ι_δ(P)) K_{S_a,S_f}[φ](x) + χ(ι_δ(Q)) K_{S_b,S_f}[φ](x),  (2.12)

and

  K_1[φ] = χ(S_b) (⟨x − P, ν_x⟩ / (2π|x − P|²)) ∫_{S_a^1} φ̃ + χ(S_a) (⟨x − Q, ν_x⟩ / (2π|x − Q|²)) ∫_{S_b^1} φ̃
        + χ(S_f \ ι_δ(P)) (δ/|x − P|²) ∫_{S_a^1} (1 − ⟨ỹ − P, ν_x⟩) φ̃(ỹ) ds_ỹ + o(δ/|x − P|²)
        + χ(S_f \ ι_δ(Q)) (δ/|x − Q|²) ∫_{S_b^1} (1 − ⟨ỹ − Q, ν_x⟩) φ̃(ỹ) ds_ỹ + o(δ/|x − Q|²).  (2.13)

Here, the operators A_{Γ_1,Γ_2} and A_{Γ_2,Γ_1} are defined by

  A_{Γ_1,Γ_2}[φ](x) = (1/π) χ(Γ_2) ∫_{Γ_1} (δ/|x − y|²) φ(y) ds_y,
  A_{Γ_2,Γ_1}[φ](x) = (1/π) χ(Γ_1) ∫_{Γ_2} (δ/|x − y|²) φ(y) ds_y.  (2.14)

Proof. First, one has the following separation:

  K*_D[φ](x) = (1/2π) ∫_{S_f} (⟨x − y, ν_x⟩/|x − y|²) φ(y) ds_y + (1/2π) ∫_{S_a ∪ S_b} (⟨x − y, ν_x⟩/|x − y|²) φ(y) ds_y
        =: A_1[φ](x) + A_2[φ](x).  (2.15)

Note that for x, y ∈ Γ_j, j = 1, 2, one easily obtains ⟨x − y, ν_x⟩ = 0. Thus one has

  A_1[φ](x) = (1/2π) ∫_{S_f} (⟨x − y, ν_x⟩/|x − y|²) φ(y) ds_y
        = χ(S_a ∪ S_b) (1/2π) ∫_{S_f} (⟨x − y, ν_x⟩/|x − y|²) φ(y) ds_y
        + χ(Γ_1) (1/2π) ∫_{Γ_2} (⟨(x − 2δν_x − y) + 2δν_x, ν_x⟩/|x − y|²) φ(y) ds_y
        + χ(Γ_2) (1/2π) ∫_{Γ_1} (⟨(x − 2δν_x − y) + 2δν_x, ν_x⟩/|x − y|²) φ(y) ds_y
        = K_{S_f,S^c}[φ](x) + δ χ(Γ_1) (1/π) ∫_{Γ_2} (1/|x − y|²) φ(y) ds_y + δ χ(Γ_2) (1/π) ∫_{Γ_1} (1/|x − y|²) φ(y) ds_y.  (2.16)

On the other hand,

  A_2[φ](x) = (1/2π) ∫_{S_a ∪ S_b} (⟨x − y, ν_x⟩/|x − y|²) φ(y) ds_y
        = χ(S_a) (1/2π) ∫_{S_a} + χ(S_b) (1/2π) ∫_{S_b} + χ(S_a) (1/2π) ∫_{S_b} + χ(S_b) (1/2π) ∫_{S_a} + χ(S_f) (1/2π) ∫_{S_a ∪ S_b}
        = χ(S_a) (1/4π) ∫_{S_a^1} φ̃(ỹ) ds_ỹ + χ(S_b) (1/4π) ∫_{S_b^1} φ̃(ỹ) ds_ỹ + K_{S_b,S_a}[φ] + K_{S_a,S_b}[φ] + K_{S^c,S_f}[φ].
  (2.17)

For y ∈ S_a and x ∈ S_b, by using Taylor's expansion one has

  |x − y| = |x − P − (y − P)| = |x − P − δ(ỹ − P)| = |x − P| + δ⟨x − P, ỹ − P⟩ + O(δ²).  (2.18)

Thus one has

  K_{S_a,S_b}[φ](x) = δ (⟨x − P, ν_x⟩/(2π|x − P|²)) ∫_{S_a^1} φ̃ + O(δ²).  (2.19)

Similarly, one can obtain

  K_{S_b,S_a}[φ](x) = δ (⟨x − Q, ν_x⟩/(2π|x − Q|²)) ∫_{S_b^1} φ̃ + O(δ²).  (2.20)

For x ∈ S_f and y ∈ S_a, by direct computations one obtains

  K_{S_a,S_f}[φ](x) = δ² ∫_{S_a^1} [(1 − ⟨ỹ − P, ν_x⟩) / (|x − P|² − 2δ⟨x − P, ỹ − P⟩ + δ²)] φ̃(ỹ) ds_ỹ.

We decompose S_f = (S_f \ ι_δ(P)) ∪ ι_δ(P); then one has

  K_{S_a,S_f}[φ](x) = (δ²/|x − P|²) ∫_{S_a^1} (1 − ⟨ỹ − P, ν_x⟩) φ̃(ỹ) ds_ỹ + o(δ²/|x − P|²),  x ∈ S_f \ ι_δ(P).  (2.21)

Similarly, one can derive the asymptotic expansion for K_{S_b,S_f}. By substituting (2.19)-(2.21) back into (2.17) and combining (2.16), one finally achieves (2.11). The proof is complete.

Lemma 2.2. The operators A_{Γ_j,Γ_k}, {j,k} = {1,2}, {2,1}, defined in (2.14) are bounded operators from L²(Γ_j) to L²(Γ_k). Furthermore, the operators χ(ι_δ(P)) K_{S_a,S_f} and χ(ι_δ(Q)) K_{S_b,S_f} are bounded operators from L²(S_a) to L²(S_f) and from L²(S_b) to L²(S_f), respectively.

Proof. We only prove that A_{Γ_2,Γ_1} is a bounded operator from L²(Γ_2) to L²(Γ_1). First, for φ_1 ∈ L²(Γ_1) and φ_2 ∈ L²(Γ_2), one has

  |⟨A_{Γ_2,Γ_1}[φ_2], φ_1⟩_{L²(Γ_1)}| = |(1/2π) ∫_{Γ_1} ∫_{Γ_2} (δ/|x − y|²) φ_2(y) ds_y φ_1(x) ds_x|
    ≤ (1/4π) ∫_{Γ_1} ∫_{Γ_2} (δ/|x − y|²) φ_2²(y) ds_y ds_x + (1/4π) ∫_{Γ_1} ∫_{Γ_2} (δ/|x − y|²) φ_1²(x) ds_y ds_x
    = (1/4π) ∫_{−L/2}^{L/2} [∫_{−L/2}^{L/2} δ/((x_1 − y_1)² + 4δ²) dx_1] φ_2²(y) dy_1 + (1/4π) ∫_{−L/2}^{L/2} [∫_{−L/2}^{L/2} δ/((x_1 − y_1)² + 4δ²) dy_1] φ_1²(x) dx_1
    = (1/8π) ∫_{−L/2}^{L/2} [arctan((L/2 − y_1)/(2δ)) − arctan((−L/2 − y_1)/(2δ))] φ_2²(y) dy_1
      + (1/8π) ∫_{−L/2}^{L/2} [arctan((L/2 − x_1)/(2δ)) − arctan((−L/2 − x_1)/(2δ))] φ_1²(x) dx_1
    ≤ C (‖φ_1‖²_{L²(Γ_1)} + ‖φ_2‖²_{L²(Γ_2)}),

where the constant C is independent of δ. By following similar arguments as in [6] (p. 18), one can then show that A_{Γ_2,Γ_1} is a bounded operator from L²(Γ_2) to L²(Γ_1).

Lemma 2.3. Suppose x ∈ S^c. Then for any function φ ∈ L²(S_f) which satisfies

  φ(y) = −φ(y + 2δn),  y ∈ Γ_1,  (2.22)

there holds

  K_{S_f \ (ι_δ(P) ∪ ι_δ(Q)), S^c}[φ](x) = o(1).  (2.23)

Proof. Note that S_f = Γ_1 ∪ Γ_2. Straightforward computations show that

  K_{S_f \ (ι_δ(P) ∪ ι_δ(Q)), S^c}[φ](x) = (1/2π) ∫_{S_f \ (ι_δ(P) ∪ ι_δ(Q))} (⟨x − y, ν_x⟩/|x − y|²) φ(y) ds_y
    = (1/2π) ∫_{S_f \ (ι_δ(P) ∪ ι_δ(Q))} (⟨x − z_y − δν_y, ν_x⟩/|x − z_y − δν_y|²) φ(y) ds_y
    = (1/2π) ∫_{S_f \ (ι_δ(P) ∪ ι_δ(Q))} (⟨x − z_y, ν_x⟩/|x − z_y|²) φ(y) ds_y + o(1)
    = (1/2π) ∫_{Γ_1 \ (ι_δ(P) ∪ ι_δ(Q))} (⟨x − z_y, ν_x⟩/|x − z_y|²) φ(y) ds_y
      + (1/2π) ∫_{Γ_1 \ (ι_δ(P) ∪ ι_δ(Q))} (⟨x − z_y, ν_x⟩/|x − z_y|²) φ(y + 2δn) ds_y + o(1)
    = o(1),

which completes the proof.

3. Quantitative analysis of the electric field

In this section, we present the quantitative analysis of the solution to the conductivity equation (1.1) as well as its geometric relationship to the inclusion D.

3.1. Several auxiliary lemmas. Recall that u is represented by (2.5). We first derive some asymptotic properties of the density function φ in (2.6). Let z_x ∈ Γ_0 be defined by

  z_x = x + δn for x ∈ Γ_1,  z_x = x − δn for x ∈ Γ_2.  (3.1)

One has the following asymptotic expansion for H around Γ_0:

  H(x) = H(z_x) + ∇H(z_x)·(x − z_x) + O(|x − z_x|²) = H(z_x) + ∇H(z_x)·(x − z_x) + O(δ²),  (3.2)

for x ∈ S_f. Similarly, one has

  H(x) = H(Q) + ∇H(Q)·(x − Q) + O(|x − Q|²) = H(Q) + δ∇H(Q)·ν_x + O(δ²),  (3.4)

for x ∈ S_b and x̃ ∈ S_b^1. We can now show the following asymptotic result.

Lemma 3.1. Suppose φ is defined in (2.6). Then one has

  φ(x) = (λI + A_δ)^{−1}[(−1)^j ∂_{x_2}H(·,0)](x_1) + δ(λI − A_δ)^{−1}[∂²_{x_2}H(·,0)](x_1)
        + χ(ι_δ(P) ∪ ι_δ(Q)) O(δ^{2(1−ε)}) + O(δ²),  x ∈ Γ_j \ (ι_δ(P) ∪ ι_δ(Q)),
  φ(x) = (λI − K*_1)^{−1}[∇H(P)·ν] + o(1),  x ∈ S_a ∪ ι_δ(P),
  φ(x) = (λI − K*_2)^{−1}[∇H(Q)·ν] + o(1),  x ∈ S_b ∪ ι_δ(Q),  (3.5)

where 0 < ε < 1 and the operator A_δ is defined by

  A_δ[ψ](x_1) := (1/π) ∫_{−L/2}^{L/2} [δ/((x_1 − y_1)² + 4δ²)] ψ(y_1) dy_1,  ψ ∈ L²(−L/2, L/2).
  (3.6)

The operators K*_1 and K*_2 are defined by

  K*_1[φ_1](x) := ∫_{S_a ∪ ι_δ(P)} (⟨x − y, ν_x⟩/|x − y|²) φ_1(y) ds_y + χ(ι_δ(P)) A_{S_f ∩ ι_δ(P)}[φ_1](x),
  K*_2[φ_2](x) := ∫_{S_b ∪ ι_δ(Q)} (⟨x − y, ν_x⟩/|x − y|²) φ_2(y) ds_y + χ(ι_δ(Q)) A_{S_f ∩ ι_δ(Q)}[φ_2](x),  (3.7)

respectively.

Proof. By (2.6), (λI − K*_D)[φ] = ∂H/∂ν|_{∂D}. By combining (2.11) and (3.2), one can readily verify that

  (λI − K_0)[φ](x) = ∇H(z_x)·ν_x + o(1),  x ∈ S_f.

By using (2.12), one thus has

  λφ(x) − (1/π) ∫_{Γ_2} (δ/|x − y|²) φ(y) ds_y = −∇H(z_x)·n + o(1),  x ∈ Γ_1 \ (ι_δ(P) ∪ ι_δ(Q)),
  λφ(x) − (1/π) ∫_{Γ_1} (δ/|x − y|²) φ(y) ds_y = ∇H(z_x)·n + o(1),  x ∈ Γ_2 \ (ι_δ(P) ∪ ι_δ(Q)).

Written in coordinates, this gives

  λφ(x_1, −δ) − (1/π) ∫_{−L/2}^{L/2} [δ/((x_1 − y_1)² + 4δ²)] φ(y_1, δ) dy_1 = −∂_{x_2}H(x_1, 0) + o(1),  |x_1| ≤ L/2 − O(δ^ε),
  λφ(x_1, δ) − (1/π) ∫_{−L/2}^{L/2} [δ/((x_1 − y_1)² + 4δ²)] φ(y_1, −δ) dy_1 = ∂_{x_2}H(x_1, 0) + o(1),  |x_1| ≤ L/2 − O(δ^ε).

In a similar manner, one can show that

  (λI − K*_2)[φ](x) = ∇H(Q)·ν_x + o(1) in S_b ∪ ι_δ(Q),  (3.11)

and so the last equation in (3.5) follows. Next, by combining (2.11), (2.12) and (2.13) again for x ∈ Γ_j \ (ι_δ(P) ∪ ι_δ(Q)), j = 1, 2, and using the second and third equations in (3.5), one has

  λφ(x_1, (−1)^j δ) − (1/π) ∫_{−L/2}^{L/2} [δ/((x_1 − y_1)² + 4δ²)] φ(y_1, (−1)^{j+1} δ) dy_1
    = (−1)^j ∂_{x_2}H(x_1, 0) + δ ∂²_{x_2}H(x_1, 0) + χ(ι_δ(P) ∪ ι_δ(Q)) O(δ^{2(1−ε)}) + O(δ²),  0 < ε < 1,  (3.12)

which verifies the first equation in (3.5) and completes the proof.

Before presenting our main result, we need to further analyze the operator A_δ defined in (3.6).

Lemma 3.2. Suppose A_δ is defined in (3.6). Then it holds that

  A_δ[y_1^n](x_1) = (1/2) x_1^n + o(1),  x ∈ Γ_j \ (ι_δ(P) ∪ ι_δ(Q)),  n ≥ 0.  (3.13)

Proof. We prove the assertion by induction. Since x ∈ Γ_j \ (ι_δ(P) ∪ ι_δ(Q)), one has |L/2 − x_1| = O(δ^ε) and |L/2 + x_1| = O(δ^ε), 0 ≤ ε < 1.
Then for n = 0, it is straightforward to verify that

  A_δ[1](x_1) = (1/π) ∫_{−L/2}^{L/2} δ/((x_1 − y_1)² + 4δ²) dy_1
    = (1/2π) [arctan((L/2 − x_1)/(2δ)) − arctan((−L/2 − x_1)/(2δ))] = 1/2 + o(1).  (3.14)

Next, we suppose that (3.13) holds for n ≤ N. Then by a change of variables one can derive that

  A_δ[y_1^{N+1}](x_1) = (1/π) ∫_{−L/2}^{L/2} [δ/((x_1 − y_1)² + 4δ²)] y_1^N · y_1 dy_1
    = (1/2π) ∫_{(−L/2 − x_1)/(2δ)}^{(L/2 − x_1)/(2δ)} [1/(1 + t²)] y_1^N (2δt + x_1) dt
    = (1/π) δ ∫_{(−L/2 − x_1)/(2δ)}^{(L/2 − x_1)/(2δ)} [t y_1^N/(1 + t²)] dt + x_1 [(1/2) x_1^N + o(1)]
    = (1/π) δ O(ln(1 + δ^{2(ε−1)})) + (1/2) x_1^{N+1} + o(1)
    = (1/2) x_1^{N+1} + o(1),  (3.15)

which completes the proof.

The following lemma is also of critical importance.

Lemma 3.3. There holds the following:

  ∫_{S_a ∪ ι_δ(P)} (λI − K*_1)^{−1}[∇H(P)·ν] = −2δ (λ − 1/2)^{−1} ∂_{x_1}H(P) + o(δ),
  ∫_{S_b ∪ ι_δ(Q)} (λI − K*_2)^{−1}[∇H(Q)·ν] = 2δ (λ − 1/2)^{−1} ∂_{x_1}H(Q) + o(δ).  (3.16)

Proof. For any f ∈ L²(∂D), we consider the following boundary integral equation:

  (λI − K*_D)[φ] = f.  (3.17)

By using the decomposition (2.12) (see also (3.10) and (3.11)), one has

  χ(S_a ∪ ι_δ(P)) (λI − K*_1 + o(1))[φ] + χ(S_b ∪ ι_δ(Q)) (λI − K*_2 + o(1))[φ]
    + A_{Γ_2,Γ_1}[φ] + A_{Γ_1,Γ_2}[φ] + χ(ι_δ(P) ∪ ι_δ(Q)) O(δ^{2(1−ε)}) + O(δ²) = f,  0 < ε < 1.  (3.18)

Note that ∂D is of class C^{1,α}. By taking the boundary integral of both sides of (3.17) over ∂D and making use of (3.18), one then has

  (λ − 1/2) ∫_{∂D} φ = ∫_{S_a ∪ ι_δ(P)} (λI − K*_1 + o(1))[φ] + ∫_{S_b ∪ ι_δ(Q)} (λI − K*_2 + o(1))[φ]
    + ∫_{Γ_1} A_{Γ_2,Γ_1}[φ] + ∫_{Γ_2} A_{Γ_1,Γ_2}[φ] + o(δ) = ∫_{∂D} f.  (3.19)

By taking f = χ(S_a ∪ ι_δ(P)) ∇H(P)·ν and plugging into (3.19), one thus has

  (λ − 1/2) ∫_{S_a ∪ ι_δ(P)} (λI − K*_1 + o(1))^{−1}[∇H(P)·ν] = ∫_{S_a ∪ ι_δ(P)} ∇H(P)·ν = −2δ ∂_{x_1}H(P),  (3.20)

which verifies the first equation in (3.16). Similarly, by taking f = χ(S_b ∪ ι_δ(Q)) ∇H(Q)·ν, one can prove the second equation in (3.16). The proof is complete.

3.2. Sharp asymptotic approximation of the solution u.
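Before turning to the asymptotics of u, the averaging property of A_δ in Lemma 3.2 can be checked numerically. The following sketch is ours, not part of the paper; it uses the prefactor 1/π consistent with (3.14), and all names are hypothetical.

```python
import numpy as np

def A_delta(psi, x1, delta, L=1.0, m=200_001):
    # Riemann-sum quadrature for
    #   (1/pi) * int_{-L/2}^{L/2} delta / ((x1 - y)^2 + 4 delta^2) * psi(y) dy,
    # the operator A_delta of (3.6); the grid is much finer than the
    # kernel width ~2*delta, so the quadrature error is negligible.
    y = np.linspace(-L / 2, L / 2, m)
    kernel = delta / ((x1 - y) ** 2 + 4 * delta ** 2) / np.pi
    dy = y[1] - y[0]
    return float(np.sum(kernel * psi(y)) * dy)

# Lemma 3.2 predicts A_delta[y^n](x1) -> x1^n / 2 as delta -> 0,
# for x1 away from the endpoints +-L/2.
delta = 0.01
v0 = A_delta(lambda y: np.ones_like(y), 0.0, delta)  # close to 1/2
v1 = A_delta(lambda y: y, 0.2, delta)                # close to 0.2/2 = 0.1
```

The o(1) corrections in (3.13) are visible at finite δ (they are of size arctan-tail, roughly δ/dist(x_1, ±L/2)), which is why the agreement is approximate rather than exact.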
With Lemmas 3.1, 3.2 and 3.3 at hand, we can now establish one of the main results of this paper.

Theorem 3.1. Let u be the solution to (1.1) and (1.2), with D of the rod-shape described in Section 1.1. Then for x ∈ R² \ D, it holds that

  u(x) = H(x) + δ (1/2π)(λ − 1/2)^{−1} ∫_{−L/2}^{L/2} ln[((x_1 − y_1)² + x_2²)/((x_1 + L/2)² + x_2²)] ∂²_{y_2}H(y_1, 0) dy_1
    + δ (1/π)(λ − 1/2)^{−1} ∫_{−L/2}^{L/2} [x_2/((x_1 − y_1)² + x_2²)] ∂_{y_2}H(y_1, 0) dy_1
    + δ (1/2π)(λ − 1/2)^{−1} ln[((x_1 − L/2)² + x_2²)/((x_1 + L/2)² + x_2²)] ∂_{x_1}H(L/2, 0) + o(δ).  (3.21)

Proof. By using (2.5) and Taylor's expansion along Γ_0, one has

  u(x) = H(x) + ∫_{S_f \ (ι_δ(P) ∪ ι_δ(Q))} G(x − z_y) φ(y) ds_y + δ ∫_{S_f \ (ι_δ(P) ∪ ι_δ(Q))} ∇_y G(x − z_y)·ν_y φ(y) ds_y
    + ∫_{S_a ∪ ι_δ(P)} G(x − z_y) φ(y) ds_y + ∫_{S_b ∪ ι_δ(Q)} G(x − z_y) φ(y) ds_y + O(δ²).  (3.22)

First, by using (3.12), one can derive that

  ∫_{S_f} G(x − z_y) φ(y) ds_y = 2δ ∫_{Γ_1 \ (ι_δ(P) ∪ ι_δ(Q))} G(x − z_y) (λI − A_δ)^{−1}[∂²_{x_2}H(·, 0)](y_1) dy_1 + o(δ)
    = δ (1/2π)(λ − 1/2)^{−1} ∫_{−L/2}^{L/2} ln((x_1 − y_1)² + x_2²) ∂²_{y_2}H(y_1, 0) dy_1 + o(δ).  (3.23)

Similarly, one has

  ∫_{S_f} ∇_y G(x − z_y)·ν_y φ(y) ds_y = (1/π)(λ − 1/2)^{−1} ∫_{−L/2}^{L/2} [x_2/((x_1 − y_1)² + x_2²)] ∂_{y_2}H(y_1, 0) dy_1 + o(1).  (3.24)

By using Lemma 3.3, one then obtains that

  ∫_{S_b ∪ ι_δ(Q)} G(x − z_y) φ(y) ds_y = ∫_{S_b ∪ ι_δ(Q)} G(x − Q) φ(y) ds_y + o(δ)
    = δ (1/π) ln|x − Q| (λ − 1/2)^{−1} ∂_{x_1}H(Q) + o(δ)
    = δ (1/2π)(λ − 1/2)^{−1} ln((x_1 − L/2)² + x_2²) ∂_{x_1}H(L/2, 0) + o(δ).  (3.25)

We can finally derive the sharp asymptotic expansion of the electric field as follows.

Theorem 3.2. Suppose H(x) = a · x, where a = (a_1, a_2) ∈ R².
Then for x ∈ R² \ D, the electric field u satisfies

  u(x) = a · x + δ (1/π)(λ − 1/2)^{−1} a_2 [arctan((L/2 − x_1)/x_2) + arctan((L/2 + x_1)/x_2)]
    + δ (1/2π)(λ − 1/2)^{−1} a_1 ln[((x_1 − L/2)² + x_2²)/((x_1 + L/2)² + x_2²)] + o(δ).  (3.27)

Furthermore, the perturbed gradient field admits the following asymptotic expansion:

  ∇u(x) = a + δ (1/π)(λ − 1/2)^{−1} ( f_2(x) a_1 − f_1(x) a_2 , f_1(x) a_1 + f_2(x) a_2 )ᵀ + o(δ),  (3.28)

where the functions f_j, j = 1, 2, are defined by

  f_1(x) := x_2/((x_1 − L/2)² + x_2²) − x_2/((x_1 + L/2)² + x_2²),
  f_2(x) := (x_1 − L/2)/((x_1 − L/2)² + x_2²) − (x_1 + L/2)/((x_1 + L/2)² + x_2²).  (3.29)

Proof. The proof follows from (3.21) together with direct computations.

3.3. Quantitative analysis and numerical illustrations. Define the following vector field:

  E_s := δ (1/π)(λ − 1/2)^{−1} ( f_2(x) a_1 − f_1(x) a_2 , f_1(x) a_1 + f_2(x) a_2 )ᵀ.  (3.30)

According to (3.28), E_s is the leading-order term of the perturbed gradient field. It is noted that the distribution of |E_s| is independent of the uniform gradient potential a. In fact, one has

  |E_s|² = δ² (1/π²)(λ − 1/2)^{−2} (a_1² + a_2²)(f_1(x)² + f_2(x)²).  (3.31)

Moreover, further computations show that

  f_1(x)² + f_2(x)² = (1/|x − Q| − 1/|x − P|)² + (2/(|x − P||x − Q|)) (1 − ⟨x − P, x − Q⟩/(|x − P||x − Q|)).  (3.32)

One can thus derive that |E_s| is maximized near the two caps (the high-curvature parts) of the inclusion D. In fact, near the caps one has |x − P| = δ + o(δ) or |x − Q| = δ + o(δ), and by (3.32) one then has

  f_1(x)² + f_2(x)² = δ^{−2}(1 + o(1)),  (3.33)

while near the centering parts of the rod, f_1(x)² + f_2(x)² = O(1).

To better illustrate the result, we next present some numerical solutions with different background fields. The parameters of the rod-shaped inclusion are selected as follows:

  σ_0 = 2,  L = 10,  δ = 5 tan(π/36) ≈ 0.4374.
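The blow-up behaviour in (3.33) can be illustrated directly with these parameters. The following sketch is ours, not part of the paper; it evaluates f_1, f_2 from (3.29) at a point one cap-radius away from the right cap and at a point above the centre of the rod.

```python
import numpy as np

# Parameters of the rod-shaped inclusion, as in (3.34)
L = 10.0
delta = 5 * np.tan(np.pi / 36)   # ~ 0.4374

def f1(x1, x2):
    # first component function in (3.29)
    return x2 / ((x1 - L / 2) ** 2 + x2 ** 2) - x2 / ((x1 + L / 2) ** 2 + x2 ** 2)

def f2(x1, x2):
    # second component function in (3.29)
    return ((x1 - L / 2) / ((x1 - L / 2) ** 2 + x2 ** 2)
            - (x1 + L / 2) / ((x1 + L / 2) ** 2 + x2 ** 2))

def field_strength_sq(x1, x2):
    # f1^2 + f2^2, which governs |E_s|^2 through (3.31)
    return f1(x1, x2) ** 2 + f2(x1, x2) ** 2

near_cap = field_strength_sq(L / 2 + delta, 0.0)  # distance delta from the right cap centre Q
centre = field_strength_sq(0.0, 1.0)              # above the middle of the rod
```

Near the cap, delta**2 * near_cap is close to 1, in line with (3.33), while the value above the centre stays O(1); with σ_0 = 2 one has λ = (σ_0 + 1)/(2(σ_0 − 1)) = 3/2, and |E_s|² then follows from (3.31).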
  (3.34)

We choose three different uniform background fields, i.e., a = (1, 0), (0, 1), (1, 1), respectively, and plot the absolute values of the perturbed fields as well as the corresponding gradient fields, which are scaled for better display. It is clearly shown in Figures 1-3 that the gradient fields are much stronger near the high-curvature parts of the inclusion.

4. Application to Calderón's inverse inclusion problem

In this section, we consider the application of the quantitative results derived in the previous section to the Calderón inverse inclusion problem. To that end, we let D denote a generic rod inclusion obtained through rigid motions performed on the special case described in Section 1.1. We write D(L, δ, z_0, σ_0) to signify its dependence on the length L, the width δ, the position z_0 (the geometric centre of D) as well as the conductivity parameter σ_0.

Consider the conductivity system (1.1) associated with a generic inclusion as described above. The inverse inclusion problem is concerned with recovering the shape of the inclusion, namely ∂D, independently of its content σ_0, by measuring the perturbed electric field (u − H) away from the inclusion. This is one of the central problems in EIT, and it forms the fundamental basis for electric prospecting. The case with a single measurement, namely the use of a single probing field H, is a longstanding problem in the literature. The existing results for the single-measurement case are mainly concerned with specific shapes, including discs/balls and polygons/polyhedra [12,13,18,25,33,38,42], as well as other general shapes under a-priori conditions; see [1,2,3,16,27,30]. As discussed earlier, the local recovery of the highly-curved part of ∂D was also considered in [39]. Next, using the quantitative asymptotic result in Theorem 3.2, we shall show that one can uniquely determine a conductive inclusion up to an error of level δ ≪ 1.

Theorem 4.1.
Let D_j = D_j(L_j, δ_j, z_0^{(j)}, σ_0^{(j)}), j = 1, 2, be two conductive rods such that L_j ∼ 1, δ_j ∼ δ ≪ 1 and σ_0^{(j)} ∼ 1 for j = 1, 2. Let u_j be the corresponding solution to (1.1) associated with D_j and a given nontrivial H(x) = a · x. Suppose that

  u_1 = u_2 on ∂Σ,  (4.1)

where Σ is a bounded simply-connected Lipschitz domain enclosing D_j, j = 1, 2. Then it cannot hold that

  dist(D_1, D_2) ≫ δ.  (4.2)

Proof. First, by (4.1), we know that u_1 = u_2 in R² \ Σ, and hence, by unique continuation, we also know that u_1 = u_2 in R² \ (D_1 ∪ D_2). Next, since the Laplacian is invariant under rigid motions, we note that the quantitative result in Theorem 3.2 still holds for D_j. By contradiction, we assume that (4.2) holds. It is easily seen that there must then be one cap point, say Θ_0 ∈ ∂D_1, which lies away from D_2 with dist(Θ_0, D_2) ≫ δ. Hence, one has u_1(Θ_0) = u_2(Θ_0). Now, we arrive at a contradiction by noting that, by Theorem 3.2, u_1(Θ_0) ∼ 1 whereas u_2(Θ_0) ∼ δ ≪ 1. The proof is complete.

1.1. Mathematical setup. Initially focusing on the mathematics, but not the physics, we consider the following elliptic PDE system in R²:

  ∇ · (σ(x) ∇u(x)) = 0,  x = (x_1, x_2) ∈ R²,
  u(x) − H(x) = O(|x|^{−1}),  |x| → ∞,

for x ∈ S_f and x̃ ∈ S_f^1. Similarly, one has

  H(x) = H(P) + ∇H(P)·(x − P) + O(|x − P|²) = H(P) + δ ∇H(P)·ν_x + O(δ²),  (3.3)

for x ∈ S_a and x̃ ∈ S_a^1.

Lemma 3.1. Suppose φ is defined in (2.6). Then one can derive that φ(x_1, −δ) = −φ(x_1, δ) + o(1) for |x_1| ≤ L/2 − O(δ^ε). Furthermore, for x ∈ S_a^1, by making use of (2.12), (3.3) and Lemma 3.3, one has

  (λI − K*_1)[φ](x) = ∇H(P)·ν_x + o(1) in S_a ∪ ι_δ(P).  (3.10)

Combining (3.23), one can readily show that

  ∫_{S_a ∪ ι_δ(P)} G(x − z_y) φ(y) ds_y = ∫_{S_a ∪ ι_δ(P)} G(x − P) φ(y) ds_y + o(δ)
    = −(1/2π) ln|x − P| [∫_{S_b ∪ ι_δ(Q)} φ(y) ds_y + 2 ∫_{Γ_1 \ (ι_δ(P) ∪ ι_δ(Q))} (λI − A_δ)^{−1}[∂²_{x_2}H(·, 0)](y_1) dy_1] + o(δ)
    = −δ (1/2π)(λ − 1/2)^{−1} ln((x_1 + L/2)² + x_2²) ∂_{x_1}H(L/2, 0)
      − δ (1/2π)(λ − 1/2)^{−1} ln((x_1 + L/2)² + x_2²) ∫_{−L/2}^{L/2} ∂²_{y_2}H(y_1, 0) dy_1 + o(δ).  (3.26)

Finally, by substituting (3.23)-(3.26) into (3.22), one has (3.21), which completes the proof.

Figure 1. a = (1, 0). Left: perturbed field |u − a · x| (scaled). Right: perturbed gradient field |∇u − a| (scaled).
Figure 2. a = (0, 1). Left: perturbed field |u − a · x| (scaled). Right: perturbed gradient field |∇u − a| (scaled).
Figure 3. a = (1, 1). Left: perturbed field |u − a · x| (scaled). Right: perturbed gradient field |∇u − a| (scaled).

Acknowledgments. The work of X. Fang

References

[1] G. Alessandrini, Generic uniqueness and size estimates in the inverse conductivity problem with one measurement, Matematiche (Catania), 54 (1999), suppl., 5-14.
[2] G. Alessandrini and V. Isakov, Analyticity and uniqueness for the inverse conductivity problem, Rend. Istit. Mat. Univ. Trieste, 28 (1996), no. 1-2, 351-369.
[3] G. Alessandrini, V. Isakov and J. Powell, Local uniqueness in the inverse conductivity problem with one measurement, Trans. Amer. Math. Soc., 347 (1995), no. 8, 3031-3041.
[4] H. Ammari, J. Garnier, H. Kang, W. K. Park and K. Solna, Imaging schemes for perfectly conducting cracks, SIAM J. Appl. Math., 71 (2011), 68-91.
[5] H. Ammari and H. Kang, Reconstruction of Small Inhomogeneities from Boundary Measurements, Lecture Notes in Mathematics, 1846, Springer-Verlag, Berlin, 2004.
[6] H. Ammari and H. Kang, Polarization and Moment Tensors: With Applications to Inverse Problems and Effective Medium Theory, Applied Mathematical Sciences, Springer-Verlag, Berlin Heidelberg, 2007.
[7] H. Ammari, G. Ciraolo, H. Kang, H. Lee and G. Milton, Spectral analysis of a Neumann-Poincaré-type operator and analysis of cloaking due to anomalous localized resonance, Arch. Ration. Mech. Anal., 208 (2013), 667-692.
[8] H. Ammari, G. Ciraolo, H. Kang, H. Lee and G. Milton, Spectral analysis of a Neumann-Poincaré-type operator and analysis of cloaking due to anomalous localized resonance II, Contemporary Mathematics, 615 (2014), 1-14.
[9] H. Ammari, J. K. Seo, T. Zhang and L. Zhou, Electrical impedance spectroscopy-based nondestructive testing for imaging defects in concrete structures, Sensors, 15 (2015), 10909-10922.
[10] K. Ando and H. Kang, Analysis of plasmon resonance on smooth domains using spectral properties of the Neumann-Poincaré operator, J. Math. Anal. Appl., 435 (2016), 162-178.
[11] K. Ando, H. Kang and H. Liu, Plasmon resonance with finite frequencies: a validation of the quasi-static approximation for diametrically small inclusions, SIAM J. Appl. Math., 76 (2016), 731-749.
[12] B. Barceló, E. Fabes and J.-K. Seo, The inverse conductivity problem with one measurement: uniqueness for convex polyhedra, Proc. Amer. Math. Soc., 122 (1994), 183-189.
[13] E. Beretta, E. Francini and S. Vessella, Global Lipschitz stability estimates for polygonal conductivity inclusions from boundary measurements, Appl. Anal., https://doi.org/10.1080/00036811.2020.1775819.
[14] E. Beretta, E. Francini and M. S. Vogelius, Asymptotic formulas for steady state voltage potentials in the presence of thin inhomogeneities. A rigorous error analysis, J. Math. Pures Appl., 82 (2003), 1277-1301.
[15] L. Borcea, Electrical impedance tomography, Inverse Problems, 18 (2002), no. 6, R99-R136.
[16] K. Bryan, Single measurement detection of a discontinuous conductivity, Comm. Partial Differential Equations, 15 (1990), 503-514.
[17] X. Cao, H. Diao and H. Liu, Determining a piecewise conductive medium body by a single far-field measurement, arXiv:2005.04420.
[18] D. S. Choi, J. Helsing and M. Lim, Corner effects on the perturbation of an electric potential, SIAM J. Appl. Math., 78 (2018), no. 3, 1577-1601.
[19] Y. Deng, H. Li and H. Liu, On spectral properties of Neumann-Poincaré operator on spheres and plasmonic resonances in 3D elastostatics, J. Spectr. Theory, 140 (2020), 213-242.
[20] Y. Deng, H. Li and H. Liu, Analysis of surface polariton resonance for nanoparticles in elastic system, SIAM J. Math. Anal., 52 (2020), 1786-1805.
[21] Y. Deng, H. Liu and G. Uhlmann, On regularized full- and partial-cloaks in acoustic scattering, Comm. Partial Differential Equations, 42 (2017), no. 6, 821-851.
[22] Y. Deng, H. Liu and G. Uhlmann, Full and partial cloaking in electromagnetic scattering, Arch. Ration. Mech. Anal., 223 (2017), no. 1, 265-299.
[23] Y. Deng, H. Liu and G. Zheng, Mathematical analysis of plasmon resonances for curved nanorods, arXiv:2007.11181.
[24] E. Dibenedetto, C. M. Elliott and A. Friedman, The free boundary of a flow in a porous body heated from its boundary, Nonlinear Anal. TMA, 10 (1986), no. 9, 879-900.
[25] E. Fabes, H. Kang and J.-K. Seo, Inverse conductivity problem with one measurement: error estimates and approximate identification for perturbed disks, SIAM J. Math. Anal., 30 (1999), 699-720.
[26] X. Fang, Y. Deng and J. Li, Plasmon resonance and heat generation in nanostructures, Math. Meth. Appl. Sci., 38 (2015), 4663-4672.
[27] A. Friedman and V. Isakov, On the uniqueness in the inverse conductivity problem with one measurement, Indiana Univ. Math. J., 38 (1989), 563-579.
[28] P. Grisvard, Boundary Value Problems in Non-Smooth Domains, Pitman, London, 1985.
[29] J. Helsing, H. Kang and M. Lim, Classification of spectra of the Neumann-Poincaré operator on planar domains with corners by resonance, Ann. I. H. Poincaré Anal. Non Linéaire, 34 (2017), 991-1011.
[30] V. Isakov and J. Powell, On the inverse conductivity problem with one measurement, Inverse Problems, 6 (1990), 311-318.
[31] H. Kang, M. Lim and S. Yu, Spectral resolution of the Neumann-Poincaré operator on intersecting disks and analysis of plasmon resonance, Arch. Ration. Mech. Anal., 226 (2017), 83-115.
[32] H. Kang, K. Kim, H. Lee, J. Shin and S. Yu, Spectral properties of the Neumann-Poincaré operator and uniformity of estimates for the conductivity equation with complex coefficients, J. London Math. Soc. (2), 93 (2016), 519-546.
[33] H. Kang and J.-K. Seo, Inverse conductivity problem with one measurement: uniqueness of balls in R³, SIAM J. Appl. Math., 59 (1999), 1533-1539.
[34] A. Khelifi and H. Zribi, Asymptotic expansions for the voltage potentials with two-dimensional and three-dimensional thin interfaces, Math. Meth. Appl. Sci., 34 (2011), 2274-2290.
[35] H. Li and H. Liu, On anomalous localized resonance and plasmonic cloaking beyond the quasistatic limit, Proc. R. Soc. A, 474: 20180165.
[36] J. Li, H. Liu, L. Rondi and G. Uhlmann, Regularized transformation-optics cloaking for the Helmholtz equation: from partial cloak to full cloak, Comm. Math. Phys., 335 (2015), 671-712.
[37] E. Blåsten and H. Liu, Recovering piecewise-constant refractive indices by a single far-field pattern, Inverse Problems, 36 (2020), 085005.
[38] H. Liu and C.-H. Tsou, Stable determination of polygonal inclusions in Calderón's problem by a single partial boundary measurement, Inverse Problems, 36 (2020), 085010.
[39] H. Liu, C.-H. Tsou and W. Yang, On Calderón's inverse inclusion problem with smooth shapes by a single partial boundary measurement, arXiv:2006.10586.
[40] J. C. Nédélec, Acoustic and Electromagnetic Equations: Integral Representations for Harmonic Problems, Springer-Verlag, New York, 2001.
[41] L. Rondi, Optimal stability of reconstruction of plane Lipschitz cracks, SIAM J. Math. Anal., 36 (2005), 1282-1292.
[42] F. Triki and C.-H. Tsou, Inverse inclusion problem: a stable method to determine disks, J. Differential Equations, 269 (2020), 3259-3281.
[43] G. Uhlmann, Electrical impedance tomography and Calderón's problem, Inverse Problems, 25 (2009), no. 12, 123011.
A VIETORIS-SMALE MAPPING THEOREM FOR THE HOMOTOPY OF HYPERDEFINABLE SETS

Alessandro Achille, Alessandro Berarducci
Results of Smale (1957) and Dugundji (1969) allow one to compare the homotopy groups of two topological spaces X and Y whenever a map f : X → Y with strong connectivity conditions on the fibers is given. We apply similar techniques in o-minimal expansions of fields to compare the o-minimal homotopy of a definable set X with the homotopy of some of its bounded hyperdefinable quotients X/E. Under suitable assumptions, we show that π_n^{def}(X) ≅ π_n(X/E) and dim(X) = dim_R(X/E). As a special case, given a definably compact group, we obtain a new proof of Pillay's group conjecture "dim(G) = dim_R(G/G^{00})", largely independent of the group structure of G. We also obtain different proofs of various comparison results between classical and o-minimal homotopy.
10.1007/s00029-018-0413-3
[ "https://arxiv.org/pdf/1706.02094v1.pdf" ]
119135539
1706.02094
6b0e7667b1cc4294ac3323efd8555ce34bfe9778
A VIETORIS-SMALE MAPPING THEOREM FOR THE HOMOTOPY OF HYPERDEFINABLE SETS

Alessandro Achille, Alessandro Berarducci

7 Jun 2017

1. Introduction

Let M be a sufficiently saturated o-minimal expansion of a field. We follow the usual convention in model theory [TZ12] of working in a sufficiently saturated structure, so we assume that M is κ-saturated and κ-strongly homogeneous for κ a sufficiently big uncountable cardinal (this can always be achieved by passing to an elementary extension). A set X ⊆ M^k is definable if it is first-order definable with parameters from M, and it is type-definable if it is the intersection of a small family of definable sets, where "small" means "of cardinality < κ". The dual notion of ⋁-definable set is obtained by considering unions instead of intersections. The hypothesis that M has field operations ensures that every definable set can be triangulated [vdD98]. We recall that, given a definable group G, there is a normal type-definable subgroup G^{00}, called the infinitesimal subgroup, such that G/G^{00}, with the logic topology [Pil04], is a real Lie group [BOPP05].
If in addition G is definably compact [PS99], we have dim(G) = dim_R(G/G^00) [HPP08], namely the o-minimal dimension of G equals the dimension of G/G^00 as a real Lie group. These results were conjectured in [Pil04] and are still known as Pillay's conjectures. It was later proved that if G is definably compact, then G is compactly dominated by G/G^00 [HP11]. This means that for every definable subset D of G, the intersection p(D) ∩ p(D^∁) has Haar measure zero (hence in particular it has empty interior), where p : G → G/G^00 is the projection and D^∁ is the complement of D. Special cases were proved in [BO04] and [PP07].

The above results establish strong connections between definable groups and real Lie groups. The proofs are complex and based on a reduction to the abelian and semisimple cases, with the abelian case depending in turn on the study of the fundamental group and on the counting of torsion points [EO04]. A series of results of P. Simon [Sim15, Sim14, Sim13] provides however a new proof of compact domination which does not rely on Pillay's conjectures or the results of [EO04]. More precisely, [Sim14] shows that fsg groups in o-minimal theories admit a smooth left-invariant measure, and [Sim15] contains a proof of compact domination for definable groups admitting a smooth measure (even in a NIP context). The fact that definably compact groups in o-minimal structures are fsg is proved in [HPP08, Thm. 8.1].

Our main theorem sheds new light on the connections between compact domination and Pillay's conjectures, and concerns the topology of certain hyperdefinable sets X/E, where E is a bounded type-definable equivalence relation on a definable set X. Under a suitable contractibility assumption on the fibers of p : X → X/E (12.1), we obtain a homotopy comparison result between X and X/E, and in particular an isomorphism of homotopy groups π_n^{def}(X) ≅ π_n(X/E) in the respective categories.
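Restating the comparison results just described in displayed form (the hypotheses on E are those stated in the text):

```latex
% Main comparison results for p : X -> X/E:
\pi_n^{\mathrm{def}}(X) \;\cong\; \pi_n(X/E)
\qquad\text{and}\qquad
\dim(X) \;=\; \dim_{\mathbb{R}}(X/E).
% Special case: X closed, bounded, \emptyset-semialgebraic and E = \ker(\mathrm{st}):
\pi_n^{\mathrm{def}}(X) \;\cong\; \pi_n(X(\mathbb{R})).
```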
Similar results apply locally, namely replacing X/E with an open subset U ⊆ X/E and X with its preimage p^{-1}(U) ⊆ X, thus obtaining π_n^{def}(p^{-1}(U)) ≅ π_n(U). For the full result see Theorem 11.8 and Theorem 12.2. From these local results and a form of "topological compact domination" (13.2) we shall deduce that dim(X) = dim_R(X/E), namely the dimension of X in the definable category equals the dimension of X/E in the topological category (Theorem 13.3). This yields a new proof of "dim(G) = dim_R(G/G^00)" for compactly dominated groups which does not depend on the counting of torsion points (for in fact it does not depend on the group structure!). Some comparison results between classical and o-minimal homotopy established in [DK85, BO02, BO09] also follow (see Corollary 12.3). In particular, if X = X(M) ⊆ M^k is a closed and bounded ∅-semialgebraic set and st : X → X(R) is the standard part map, we can take E = ker(st) and deduce π_n^{def}(X) ≅ π_n(X(R)).

This work can be considered as a continuation of the line of research initiated in [BM11]: while in that paper we focused on the fundamental group, here we manage to encompass the higher homotopy groups and, more generally, homotopy classes [X, Y] of maps f : X → Y in the relevant categories. We have tried to make this paper as self-contained as possible. The proofs of the homotopy results are somewhat long but elementary, and all the relevant notions are recalled as needed.

The paper is organized as follows. In Section 2 we recall the notions of definable space and definable manifold, the main example being a definable group G. In Section 3 we introduce the logic topology on the quotient X/E of a definable set X by a bounded type-definable equivalence relation E. In Section 4 we recall the notion of "normal triangulation" due to Baro [Bar10], and we show how to produce normal triangulations satisfying some additional properties.
In Sections 5 and 6 we illustrate some of the analogies between the standard part map and the map G → G/G^00, where G is a definably compact group and G/G^00 has the logic topology. These analogies are further developed in Section 7, where we discuss various versions of "compact domination". In Sections 8 and 9 we work in the category of classical topological spaces and we establish a few results for which we could not find a suitable reference. In particular, in Section 8 we show that given an open subset U of a triangulable space, any open covering of U has a refinement which is a good cover. In Section 10 we recall the definition of definable homotopy.

Sections 11, 12 and 13 contain the main results of the paper, labeled Theorem A (11.8), Theorem B (12.2), and Theorem C (13.3), respectively, as the titles of the corresponding sections. In Theorem A we prove that there is a homomorphism π_n^{def}(X) → π_n(X/E) from the definable homotopy groups of X to the homotopy groups of X/E, under a suitable assumption on E. We actually obtain a more general result of which this is a special case. In Theorem B we strengthen the assumptions to obtain an isomorphism π_n^{def}(X) ≅ π_n(X/E). Since the standard part map can be put in the form p : X → X/E for a suitable E, some known comparison results between classical and o-minimal homotopy will follow. Finally, in Theorem C we add the assumption of "topological compact domination" to obtain dim(X) = dim_R(X/E), and we deduce dim(G) = dim_R(G/G^00) and some related results.

Acknowledgement. Some of the results of this paper were presented at the 7th meeting of the Lancashire Yorkshire Model Theory Seminar, held on December the 5th 2015 in Preston. A.B. wants to thank the organizers of the meeting and acknowledge support from the Leverhulme Trust (VP2-2013-055) during his visit to the UK. The results were also presented at the Thematic Program on Model Theory, International Conference, June 20-24, 2016, University of Notre Dame.
Definable spaces

A fundamental result in [Pil88] establishes that every definable group G in M has a unique group topology, called the t-topology, making it into a definable manifold. This means that G has a finite cover U_1, ..., U_m by t-open sets, and for each i ≤ m there is a definable homeomorphism g_i : U_i → U'_i where U'_i is an open subset of some cartesian power M^k with the topology induced by the order of M. The collection (g_i : U_i → U'_i)_{i≤m} is called an atlas and g_i is called a local chart.

Definable manifolds are special cases of definable spaces [vdD98]. The notion of definable space is defined through local charts g_i : U_i → U'_i, like that of definable manifold, with the difference that now U'_i is an arbitrary definable subset of M^k, not necessarily open. In particular every definable subset X of M^k, with the topology induced by the order, is a definable space (with the trivial atlas consisting of a single local chart), but not necessarily a definable manifold.

We collect in this section a few results on definable spaces which shall be needed in the sequel. They depend on the saturation assumptions on M. The results are easy and well known to the experts, but the proofs are somewhat dispersed in the literature.

Lemma 2.1. Let (A_i : i ∈ I) be a small downward directed family of definable open subsets of a definable space X (where "small" means |I| < κ). Then ⋂_{i∈I} A_i is open.

Proof. Let x ∈ ⋂_{i∈I} A_i and fix a definable fundamental family (B_ε : ε > 0) of neighbourhoods of x decreasing with ε (for example take B_ε to be the points of X at distance < ε from x in a local chart). Since A_i is open in X, there is ε_i > 0 such that B_{ε_i} ⊆ A_i. By saturation, we can find an ε > 0 in M such that ε < ε_i for each i ∈ I. It follows that B_ε ⊆ ⋂_{i∈I} A_i, so x is in the interior of the intersection.

Lemma 2.2. Let (X_i : i ∈ I) be a small downward directed family of definable subsets of a definable space. Then cl(⋂_{i∈I} X_i) = ⋂_{i∈I} cl(X_i).

Proof.
The inclusion "⊆" is trivial. For the "⊇" direction, let x ∈ ⋂_{i∈I} cl(X_i) and suppose for a contradiction that x ∉ cl(⋂_{i∈I} X_i). Then there is an open neighbourhood U of x disjoint from ⋂_{i∈I} X_i. By saturation there is i ∈ I such that U is disjoint from X_i, hence x ∉ cl(X_i), a contradiction.

Lemma 2.3. Let (X_i : i ∈ I) be a small downward directed family of definable subsets of the definable space X. Suppose that H := ⋂_{i∈I} X_i is clopen. Then for every i ∈ I there is j ∈ I such that X_j ⊆ int(X_i).

Proof. Fix i ∈ I. Since H is open, H ⊆ int(X_i). Using the fact that H is also closed, we have H = cl(H) = cl(⋂_{i∈I} X_i) = ⋂_{i∈I} cl(X_i) (by Lemma 2.2). The latter intersection is included in int(X_i), hence by saturation there is j ∈ I such that X_j ⊆ int(X_i).

Logic topology

Let X be a definable set and consider a type-definable equivalence relation E ⊆ X × X of bounded index (namely of index < κ), and put on X/E the logic topology: a subset O ⊆ X/E is open if and only if its preimage in X is ⋁-definable, or equivalently C ⊆ X/E is closed if and only if its preimage in X is type-definable. This makes X/E into a compact Hausdorff space [Pil04]. We collect here a few basic results, including some results from [Pil04, BOPP05], which shall be needed later.

Proposition 3.1. For every definable set C ⊆ X, p(C) is closed in X/E.

Proof. By definition of the logic topology, we need to show that p^{-1}(p(C)) is type-definable. By definition, x belongs to p^{-1}(p(C)) if and only if ∃y ∈ C : xEy. Since E is type-definable, xEy is equivalent to a possibly infinite conjunction ⋀_{i∈I} φ_i(x, y) of formulas over some small index set I, and we can assume that every finite conjunction of the formulas φ_i is implied by a single φ_i. By saturation it follows that we can exchange ∃ and ⋀_i, hence p^{-1}(p(C)) = {x : ⋀_i ∃y φ_i(x, y)}, a type-definable set.

Proposition 3.2. Let C ⊆ U ⊆ X/E with U open and C closed. Then there is a definable set D ⊆ X such that p^{-1}(C) ⊆ D ⊆ p^{-1}(U).

Proof.
This is an immediate consequence of the fact that if a type-definable set is contained in a ⋁-definable set, then there is a definable set between them.

Proposition 3.3. Let y ∈ X/E and let D ⊆ X be a definable set containing p^{-1}(y). Then y is in the interior of p(D). Moreover, there is an open neighbourhood U of y such that p^{-1}(U) ⊆ D.

Proof. By Proposition 3.1, Z = p(X \ D) is a closed set in X/E which does not contain y. Hence the complement O of Z is an open neighbourhood of y contained in p(D). We have thus proved the first part. For the second part note that, since X/E is compact Hausdorff, it is in particular a normal topological space. We can thus find a fundamental system of open neighbourhoods U_i of y such that {y} = ⋂_i U_i = ⋂_i cl(U_i). Each p^{-1}(cl(U_i)) is type-definable, and their intersection is contained in the definable set D, so there is some i ∈ N such that p^{-1}(U_i) ⊆ D.

For our last set of propositions we assume that X is a definable space, possibly with a topology different from the one inherited from its inclusion in M^k.

Proposition 3.5. Assume p : X → X/E is continuous and let C be a definable subset of X. Then p(cl(C)) = cl(p(C)).

Proof. It suffices to observe that p(cl(C)) ⊆ cl(p(C)) ⊆ p(cl(C)), where the first inclusion holds because p is continuous and the second by Proposition 3.1 (p(cl(C)) is a closed set containing p(C)).

Triangulation theorems

The triangulation theorem [vdD98] is a powerful tool in the study of o-minimal structures expanding a field. In this section we review some of the relevant results and we prove a specific variation of the normal triangulation theorem of [Bar10] for simplexes with real algebraic vertices.

Simplicial complexes are defined as in [vdD98]. They differ from the classical notion because simplexes are open, in the sense that they do not include their faces. As in [vdD98], the vertices of a simplicial complex are concrete points, namely they have coordinates in the given o-minimal structure M (expanding a field).
More precisely, given n + 1 affinely independent points a_0, ..., a_n ∈ M^k, the (open) n-simplex σ_M = (a_0, ..., a_n) ⊆ M^k determined by a_0, ..., a_n is the set of all linear combinations ∑_{i=0}^n λ_i a_i with λ_0 + ... + λ_n = 1 and 0 < λ_i < 1 (with λ_i ∈ M). If we go to a bigger model N ≻ M, we write σ_N for the set defined by the same formulas but with the λ_i ranging in N. We omit the subscript if there is no risk of ambiguity. A closed simplex is defined similarly but with the weak inequalities 0 ≤ λ_i ≤ 1. In other words, a closed simplex is the closure σ̄ = cl(σ) of a simplex σ, namely the union of a simplex and all its faces.

A simplicial complex is a finite collection P of (open) simplexes, with the property that for all σ, θ ∈ P, σ̄ ∩ θ̄ is either empty or the closure of some δ ∈ P (a common face of the two simplexes). We shall say that P is a closed simplicial complex if whenever it contains a simplex it contains all its faces. In this case we write P̄ for the collection of all closures σ̄ of simplexes σ of P, and we call σ̄ a closed simplex of P. The geometrical realization |P| of a simplicial complex P is the union of its simplexes. We shall often assume that P is defined over R_alg, namely its vertices have real algebraic coordinates, so that we can realize P either in M or in R. In this case, we write |P|_M or |P|_R for the geometric realization of P in M or R respectively. Notice that a simplicial complex is closed if and only if its geometrical realization is closed in the topology induced by the order of M. If L ⊆ P is a subcomplex of P and σ ∈ P, we define |σ_{|L}|_R = σ ∩ |L|_R and |σ_{|L}|_M = σ ∩ |L|_M. To keep the notation uncluttered, we simply write σ_{|L} when the model is clear from the context.

Definition 4.1. A triangulation of a definable set X ⊆ M^m is a pair (P, φ) consisting of a simplicial complex P defined over M and a definable homeomorphism φ : |P|_M → X.
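The open and closed simplexes just described can be written in displayed form (this is a direct restatement of the definitions above):

```latex
% Open n-simplex on affinely independent a_0,...,a_n in M^k:
\sigma_M = \Big\{ \textstyle\sum_{i=0}^{n} \lambda_i a_i \;:\;
  \textstyle\sum_{i=0}^{n} \lambda_i = 1,\ 0 < \lambda_i < 1,\ \lambda_i \in M \Big\}
% Closed simplex: weak inequalities, i.e. the union of sigma and all its faces:
\overline{\sigma}_M = \Big\{ \textstyle\sum_{i=0}^{n} \lambda_i a_i \;:\;
  \textstyle\sum_{i=0}^{n} \lambda_i = 1,\ 0 \le \lambda_i \le 1,\ \lambda_i \in M \Big\}
```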
We say that the triangulation φ is compatible with a subset S of X if S is the union of the images of some of the simplexes of P.

Now, suppose that we have a triangulation φ : |P|_M → X and we consider finitely many definable subsets S_1, ..., S_l of X. The triangulation theorem tells us that there is another triangulation ψ : |P'|_M → X compatible with S_1, ..., S_l, but it does not say that we can choose P' to be a subdivision of P; thus in general |P'|_M will be different from |P|_M. This is going to be a problem if we want to preserve certain properties. For instance, suppose that φ is a definable homotopy (namely its domain |P|_M has the form Z × I where I = [0, 1]). The triangulation theorem does not ensure that ψ can be taken to be a definable homotopy as well. The "normal triangulation theorem" of Baro [Bar10] is a partial remedy to this defect: it ensures that we can indeed take P' to be a subdivision of P, hence in particular |P'|_M = |P|_M, although ψ will not in general be equal to φ. The precise statement is given below. It suffices to consider the special case when X = |P| and φ is the identity. It asserts the existence of a pair (P', φ') such that:

(1) P' is a subdivision of P;
(2) (P', φ') is compatible with the simplexes of P;
(3) for every τ ∈ P' and σ ∈ P, if τ ⊆ σ then φ'(τ) ⊆ σ.

From (3) it follows that the restriction of φ' to a simplex σ ∈ P is a homeomorphism onto σ, and φ' is definably homotopic to the identity on |P|.

Since we are particularly interested in triangulations where the vertices of the simplicial complex have real algebraic coordinates, we prove the following proposition, which guarantees that the normal triangulation of a real algebraic simplicial complex can also be chosen to be real algebraic.

Proposition 4.5. Let P be a simplicial complex in M^k defined over R_alg and let L be a subdivision of P.
Then there is a subdivision L' of P such that:

(1) L' is defined over R_alg;
(2) there is a simplicial homeomorphism ψ : |L| → |L'| which fixes all the vertices of L with real algebraic coordinates.

Proof. Since L is a subdivision of P, we have an inclusion of the zero-skeleta |P^0| ⊆ |L^0| ⊆ M^k. For each v ∈ |L^0|, let v_1, ..., v_k ∈ M be the coordinates of v. The idea is that the combinatorial properties of the pair (P, L) (namely the properties invariant under isomorphisms of pairs of abstract complexes) can be described, in the language of ordered fields, by a first-order condition φ_{L,P}(x̄) on the coordinates x̄ of the vertices. We then use the model completeness of the theory of real closed fields to show that φ_{L,P}(x̄) can be satisfied in the real algebraic numbers. The details are as follows.

For each v ∈ |L^0| we introduce free variables x^v_1, ..., x^v_k and let x̄^v be the k-tuple (x^v_1, ..., x^v_k). Finally, let x̄ be the tuple consisting of all these variables x^v_i as v varies. We can express in a first-order way the following conditions on x̄:

(1) If σ = (v_0, ..., v_n) ∈ L, then σ(x̄) := (x̄^{v_0}, ..., x̄^{v_n}) is an n-simplex, namely x̄^{v_0}, ..., x̄^{v_n} are affinely independent;
(2) If σ_1 and σ_2 are open simplexes of L with common face τ, then cl(σ_1(x̄)) ∩ cl(σ_2(x̄)) = cl(τ(x̄));
(3) If σ_1 and σ_2 are open simplexes of L with no face in common, then cl(σ_1(x̄)) ∩ cl(σ_2(x̄)) = ∅;
(4) If σ ⊆ cl(τ) with σ ∈ L and τ ∈ P, then σ(x̄) ⊆ cl(τ(x̄)).

These clauses express the fact that the collection of the σ(x̄), as σ varies in L, is a simplicial complex L(x̄) (depending on the value of x̄) isomorphic to L. Similarly we can define P(x̄) and express the fact that L(x̄) is a subdivision of P(x̄). Our desired formula φ_{P,L}(x̄) is the conjunction of these clauses together with the conditions x^v_i = v_i whenever v_i is real algebraic. By definition, φ_{P,L}(x̄) holds in M if we evaluate each variable x^v_i as the i-th coordinate of the vector v.
By the model completeness of the theory of real closed fields, the formula can be satisfied by a tuple ā of real algebraic numbers. The map sending each v to ā^v induces the desired isomorphism ψ : L → L' = L(ā).

Later we shall need the following.

Proposition 4.6. Let P be a simplicial complex, let X be a definable space and let f : |P|_M → X be a definable function. Let V = {V_i : i ∈ I} be a small family of ⋁-definable sets V_i ⊆ X whose union covers the image of f. Then there is a subdivision P' of P and a normal triangulation (P', φ) of P such that for every σ ∈ P', (f ∘ φ)(σ_M) is contained in some V_i. Moreover, if P is defined over R_alg, we can take P' defined over R_alg.

Proof. By saturation of M there is a finite set J ⊆ I such that Im(f) ⊆ ⋃_{i∈J} V_i. Again by saturation there are definable subsets U_i ⊆ V_i for i ∈ J such that Im(f) ⊆ ⋃_{i∈J} U_i. By Fact 4.4 there is a subdivision P' of P and a normal triangulation (P', φ) of P compatible with the definable sets f^{-1}(U_i), for i ∈ J. Thus for σ ∈ P', there is i ∈ J such that φ(σ_M) ⊆ f^{-1}(U_i), namely (f ∘ φ)(σ_M) ⊆ U_i. The "moreover" part follows from Proposition 4.5. Indeed, if P is over R_alg, we first obtain (P', φ) as above. If P' is over R_alg we are done. Otherwise, we take a subdivision P'' of P over R_alg and a simplicial isomorphism ψ : P'' → P', and replace (P', φ) with (P'', φ ∘ ψ).

Standard part map

Let X = X(M) ⊆ M be a definable set and suppose X ⊆ [−n, n] for some n ∈ N. Then there is a map st : X → R, called the standard part, which sends a ∈ X to the unique r ∈ R satisfying p ≤ r ≤ q for all rationals p, q with p < a < q. More generally, let X be a definable subset of M^k and assume X ⊆ [−n, n]^k for some n ∈ N. We can then define st : X → R^k component-wise, namely st((a_1, ..., a_k)) := (st(a_1), ..., st(a_k)). Now let E := ker(st) ⊆ X × X be the type-definable equivalence relation induced by st, namely aEb if and only if st(a) = st(b).
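In displayed form, the maps just defined are (the supremum description of st is an equivalent reformulation of the rational-cut characterization in the text, valid for bounded a):

```latex
% st(a) is the unique real number with the same rational cuts as a;
% for bounded a it is the supremum of the rational lower cut:
\mathrm{st}(a) \;=\; \sup\,\{\, q \in \mathbb{Q} \;:\; q < a \,\} \;\in\; \mathbb{R}
% Component-wise extension to X \subseteq [-n,n]^k and the induced kernel:
\mathrm{st}(a_1,\dots,a_k) := (\mathrm{st}(a_1),\dots,\mathrm{st}(a_k)), \qquad
a \,E\, b \;\Longleftrightarrow\; \mathrm{st}(a) = \mathrm{st}(b)
```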
There is a natural bijection st(X) ≅ X/E sending st(a) to the class of a modulo E, so in particular E has bounded index. The next two propositions are probably well known, but we include the proof for the reader's convenience.

Proposition 5.1. The natural bijection st(a) → a/E is a homeomorphism st(X) ≅ X/E, where X/E has the logic topology and st(X) ⊆ R^k has the euclidean topology.

Proof. Every closed subset C of R^k can be written as the intersection ⋂_i C_i of a countable collection of closed ∅-semialgebraic sets C_i (where "∅-semialgebraic" means "semialgebraic without parameters"). We then have st(a) ∈ C if and only if a ∈ ⋂_i C_i(M). This shows that the closed sets C ⊆ st(X) ⊆ R^k in the euclidean topology correspond to the sets whose preimage in X is type-definable, and the proposition follows.

Thanks to the above result we can identify st : X → st(X) and p : X → X/E where E = ker(st). The next proposition shows that these maps are continuous.

Proposition 5.2. The standard part map st : X → st(X) is continuous.

Proof. Let a ∈ X and let r := st(a) ∈ R^k. Then st^{-1}(r) = ⋂_{n∈N} {b ∈ X : |b − r| < 1/n}. This is a small intersection of relatively open subsets of X, so it is open in X by Lemma 2.1.

Remark 5.3. If X = X(M) ⊆ M^k is ∅-semialgebraic, we may interpret the defining formula of X in R and consider the set X(R) ⊆ R^k of real points of X. If we further assume that X is closed and bounded, then X ⊆ [−n, n]^k for some n ∈ N, so we can consider the standard part map st : X → R^k. It is easy to see that in this case st(X) coincides with X(R), so we can write X(R) = st(X) ≅ X/E.

Our next goal is to study the fibers of st : X → X(R). We need the following.

Definition 5.4. Given a simplicial complex P and a point x ∈ |P| (not necessarily a vertex), the open star of x with respect to P, denoted St(x, P), is the union of all the simplexes of P whose closure contains x.

Proposition 5.5. Given x, y ∈ |P|, if St(x, P) ∩ St(y, P) is non-empty, then there is z ∈ |P| such that St(x, P) ∩ St(y, P) = St(z, P).

Proof.
Let σ ∈ P be a simplex of minimal dimension included in St(x, P) ∩ St(y, P) and let z ∈ σ. We claim that z is as desired. To this aim it suffices to show that, given θ ∈ P, we have θ ⊆ St(x, P) ∩ St(y, P) if and only if θ ⊆ St(z, P).

For one direction assume θ ⊆ St(x, P) ∩ St(y, P). Then θ̄ ∩ σ̄ is non-empty, as the intersection contains both x and y. It follows that there is a simplex δ ∈ P such that θ̄ ∩ σ̄ = δ̄. Notice that δ is included in St(x, P) ∩ St(y, P), since its closure contains x and y. Since σ was of minimal dimension contained in this intersection, it follows that δ = σ. But then x ∈ σ̄ ⊆ θ̄, hence θ ⊆ St(x, P).

For the other direction, assume θ ⊆ St(z, P), namely z ∈ θ̄. Since z ∈ σ, it follows that σ ⊆ θ̄ and therefore σ̄ ⊆ θ̄. But x, y are contained in σ̄, so they are contained in θ̄, witnessing the fact that θ ⊆ St(x, P) ∩ St(y, P).

The following result depends on the local conic structure of definable sets.

Proposition 5.6. Let X be a closed and bounded ∅-semialgebraic set and let st : X(M) → st(X) = X(R) be the standard part map. Then for every y ∈ X(R), the preimage st^{-1}(y) is the intersection of a countable decreasing sequence S_0 ⊇ S_1 ⊇ S_2 ⊇ ... of definably contractible open subsets of X.

Proof. By the triangulation theorem (Fact 4.2), there is a simplicial complex P over R_alg and a ∅-definable homeomorphism f : X → |P|_M. In this situation, f_R : X(R) → |P|_R is a homeomorphism and st(f(x)) = f_R(st(x)). Thus we can replace X with |P| and assume that X is the realization of a simplicial complex. Therefore, we now have a closed simplicial complex X(R) over the reals, which is thus locally contractible. More precisely, given y ∈ X(R) we can write {y} as an intersection ⋂_{i∈N} S_i, where S_i is the open star of y with respect to the i-th iterated barycentric subdivision of P.
The preimage st^{-1}(y) can then be written as the corresponding intersection ⋂_{i∈N} S_i(M) interpreted in M, and it now suffices to observe that each S_i(M) is an open star (Proposition 5.5), hence it is definably contractible (around any of its points).

Our next goal is to show that much of what we said about the standard part map has a direct analogue in the context of definable groups, with p : G → G/G^00 in the role of the standard part.

Definable groups

Let G be a definable group in M and let H < G be a type-definable subgroup of bounded index. We may put on the coset space G/H the logic topology, thus obtaining a compact topological space. In this context we have a direct generalization of Proposition 5.2.

Fact 6.1 ([Pil04, Lemma 3.2]). Every type-definable subgroup H < G of bounded index is clopen in the t-topology of G. In particular, the natural map p : G → G/H is continuous, where G has the t-topology and the coset space G/H has the logic topology.

If we further assume that H is normal, then G/H is a group and we may ask whether the logic topology makes it into a topological group. This is indeed the case [Pil04]. Some additional work shows that in fact G/H is a compact real Lie group [BOPP05]. In the same paper the authors show that G admits a smallest type-definable subgroup H < G of bounded index (see [She08] for a different proof), which is denoted G^00 and called the infinitesimal subgroup. When G is definably compact in the sense of [PS99], the natural map p : G → G/G^00 shares many of the properties of the standard part map.

Definition 6.2. Let us recall that a definable set B ⊆ X is called a definable open ball of dimension n if B is definably homeomorphic to {x ∈ M^n : |x| < 1}; a definable closed ball is defined similarly, using the weak inequality ≤; we shall say that B is a definable proper ball if there is a definable homeomorphism f from cl(B) to a definable closed ball taking ∂B to the definable sphere S^{n−1}.
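Restating Definition 6.2 in symbols (≅_def denotes a definable homeomorphism):

```latex
% Definable open and closed n-balls (Definition 6.2):
B \ \text{is an open ball of dim.}\ n \iff
  B \cong_{\mathrm{def}} \{\, x \in M^n : |x| < 1 \,\}
B \ \text{is a closed ball of dim.}\ n \iff
  B \cong_{\mathrm{def}} \{\, x \in M^n : |x| \le 1 \,\}
% Proper ball: a definable homeomorphism of cl(B) onto a closed ball
% carrying \partial B onto the sphere S^{n-1} = \{ x : |x| = 1 \}.
```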
In analogy with Proposition 5.6, the following holds.

Fact 6.3 ([Ber09, Theorem 2.2]). Let G be a definably compact group of dimension n and put on G the t-topology of [Pil88]. Then there is a decreasing sequence S_0 ⊇ S_1 ⊇ S_2 ⊇ ... of definably contractible subsets of G such that G^00 = ⋂_{i∈N} S_i.

The proof in [Ber09, Theorem 2.2] depends on compact domination, and the sets S_i are taken to be "cells" in the o-minimal sense. For later purposes we need the following strengthening of the above fact, which does not present difficulties but requires a small argument.

Corollary 6.4. In Fact 6.3 we can arrange so that, for each i ∈ N, cl(S_{i+1}) ⊆ S_i and S_i is a definable proper ball of dimension n = dim(G).

Proof. By Lemma 2.3 we can assume that cl(S_{i+1}) ⊆ S_i for every i ∈ N. Since M has field operations, a cell is definably homeomorphic to a definable open ball (first show that it is definably homeomorphic to a product of intervals). In general it is not true that a cell is a definable proper ball, even assuming that the cell is bounded [BF09]. However, by shrinking S_i concentrically via the homeomorphism, we can find a definable proper n-ball C_i with cl(S_{i+1}) ⊆ C_i ⊆ S_i. To conclude, it suffices to replace S_i with the interior of C_i.

Compact domination

A deeper analogy between the standard part map and the projection p : G → G/G^00 is provided by Fact 7.1 and Fact 7.2 below.

Fact 7.1. For every definable set D ⊆ [−n, n]^k (n ∈ N), st(D) ∩ st(D^∁) has Lebesgue measure zero.

The above fact was used in [BO04] to introduce a finitely additive measure on definable subsets of [−n, n]^k ⊆ M^k (n ∈ N) by lifting the Lebesgue measure on R^k through the standard part map. In the same paper it was conjectured that, reasoning along similar lines, one could try to introduce a finitely additive invariant measure on definably compact groups (the case of the torus being already handled thanks to the above result). When [BO04] was written, Pillay's conjectures from [Pil04] were still open, and it was hoped that the measure approach could lead to a solution.
A first confirmation of the existence of invariant measures came from [PP07], but only for a limited class of definable groups. A deeper analysis led to the existence of invariant measures in every definably compact group [HPP08] and to the solution of Pillay's conjectures, as discussed in the introduction. Finally, the following far-reaching result was obtained, which can be considered as a direct analogue of Fact 7.1.

Fact 7.2 ([HP11]). Let G be a definably compact group and consider the projection p : G → G/G^00. Then for every definable set D ⊆ G, p(D) ∩ p(D^∁) has Haar measure zero.

In the terminology introduced in [HPP08], the above result can be described by saying that G is compactly dominated by G/G^00. Perhaps surprisingly, when the above result was obtained, Pillay's conjectures had already been solved, so compact domination did not actually play a role in their solution. In hindsight however, as we show in the last part of this paper (Section 13), compact domination can in fact be used to prove "dim(G) = dim_R(G/G^00)", as predicted by Pillay's conjectures (the content of Pillay's conjectures also includes the statement that G/G^00 is a real Lie group). To prepare the ground, we introduce the following definition. In the rest of the section, E is a type-definable equivalence relation of bounded index on a definable set X.

Definition 7.3. We say that X is topologically compactly dominated by X/E if for every definable set D ⊆ X, p(D) ∩ p(D^∁) has empty interior, where p : X → X/E is the projection.

Since "measure zero" implies "empty interior", topological compact domination holds both for the standard part map (taking E = ker(st)) and for definably compact groups. Notice that Definition 7.3 can be given for definable sets in arbitrary theories, not necessarily o-minimal, so it is not necessary that X carries a topology. However, in the o-minimal case a simpler formulation can be given, as in Corollary 7.5 below. We first recall some definitions.
Let X be a definable space. We say that a type-definable set Z ⊆ X is definably connected if it cannot be written as the union of two non-empty open subsets which are relatively definable, where a relatively definable subset of Z is the intersection of Z with a definable set. Following [vdD98], we distinguish between the frontier and the boundary of a definable set, and we write ∂D := cl(D) \ D for the frontier, and δD := cl(D) \ D° for the boundary, where D° is the interior. A basic result in o-minimal topology is that the dimension of the frontier of D is less than the dimension of D. Here we shall however be concerned with the boundary, rather than the frontier.

Proposition 7.4. Let X be a definable space. Assume that p is continuous and each fiber of p : X → X/E is definably connected. Then for every definable set D ⊆ X, p(D) ∩ p(D^∁) = p(δD).

Proof. We prove p(D) ∩ p(D^∁) ⊆ p(δD). So let y ∈ p(D) ∩ p(D^∁). If for a contradiction p^{-1}(y) ∩ δD = ∅, then p^{-1}(y) ∩ D° and p^{-1}(y) ∩ (D^∁)° are both non-empty. Since they are relatively definable in p^{-1}(y) and open, we contradict the hypothesis that p^{-1}(y) is definably connected. The opposite inclusion is easy, using the fact that p(cl(A)) = cl(p(A)) (Proposition 3.5).

In the light of the above proposition, topological compact domination takes the following form.

Corollary 7.5. Assume that X is a definable space, p : X → X/E is continuous, and each fiber of p is definably connected. Then X is topologically compactly dominated by X/E if and only if the image p(Z) of any definable set Z ⊆ X with empty interior has empty interior.

Proof. Suppose that the image of every definable set with empty interior has empty interior. Given a definable set D ⊆ X, we want to show that p(D) ∩ p(D^∁) has empty interior. This follows from the inclusion p(D) ∩ p(D^∁) ⊆ p(δD) (Proposition 7.4) and the fact that δD has empty interior. Conversely, assume topological compact domination and let Z be a definable subset of X with empty interior.
By Proposition 3.5, p(δZ) ⊆ p(cl(Z)) ∩ p(cl(Z^∁)) = cl(p(Z)) ∩ cl(p(Z^∁)), so p(δZ) has empty interior.

Good covers

By a triangulable space we mean a compact topological space which is homeomorphic to a polyhedron, namely to the realization |P|_R of a closed finite simplicial complex over R. Our aim is to show that open subsets of a triangulable space have enough good covers. We are going to use barycentric subdivisions holding a subcomplex fixed, as defined in [Mun84, p. 90]. We need the following observation.

Remark 8.2. Let P be a closed (finite) simplicial complex and let L be a closed subcomplex. Let P_i be the i-th barycentric subdivision of P holding L fixed. Then for every real number ε > 0 there is i ∈ N such that for every closed simplex σ̄ of P_i, either σ̄ has diameter < ε or σ̄ lies inside the ε-neighbourhood of some simplex of L.

Let Y be a triangulable space and let O ⊆ Y be an open subset. We show that every open cover V of O has a refinement U which is a good cover.

Proof. We can assume that Y is the geometric realization |P| (over R) of a finite simplicial complex P. Since Y is a metric space, so is O ⊆ Y. In particular O is paracompact, and therefore V has a locally finite star-refinement W ≺ V. We plan to show that O is the realization of an infinite simplicial complex L with the property that each closed simplex of L is contained in some element of W. Granted this, by Proposition 5.5 we can take U to be the open cover consisting of the sets St(x, L) for x ∈ O.

To begin with, note that we can write O as the union O = ⋃_{n∈N} C_n of an increasing sequence of compact sets in such a way that every compact subset of O is contained in C_n for some n (it suffices to define C_n as the set of points at distance ≥ 1/n from the frontier of O). Since C_0 is compact, by the Lebesgue number lemma there is some ε_0 > 0 such that every subset of C_0 of diameter < ε_0 is contained in some element of W. Now let P_0 be an iterated barycentric subdivision of P with simplexes of diameter < ε_0, and let L_0 be the largest closed subcomplex of P_0 with |L_0| ⊆ C_0.
Notice that every closed simplex of L 0 is contained in some element of W. Starting with P 0 , L 0 we shall define by induction a sequence of subdivisions P i of P = P 0 and subcomplexes L i of P i . For concreteness, let us consider the case i = 1. The complex P 1 will be of the form P (n) 0 , where P (n) 0 is the n-th iterated barycentric subdivision of P 0 holding the subcomplex L 0 fixed. To choose the value of n we proceed as follows. By the Lebesgue number lemma there is some ε 1 > 0 with ε 1 < ε 0 /2 such that every closed subset of C 1 of diameter < ε 1 is contained in some element of W. By taking a smaller value for ε 1 if necessary, we can also assume (by definition of L 0 ) that the closed ε 1 -neighbourhood of any closed simplex σ̄ of L 0 is contained in some element of W. By Remark 8.2 there is some n 0 such that for every n ≥ n 0 and for every closed simplex σ̄ of P (n) 0 , either σ̄ is contained in the ε 1 -neighbourhood of some λ ∈ L 0 , or the diameter of σ̄ is less than ε 1 . In both cases, if σ̄ is included in C 1 , then it is contained in some element of W. We now define P 1 = P (n) 0 and we let L 1 be the biggest closed subcomplex of P 1 with |L 1 | ⊆ C 1 . The crucial observation is that L 0 is a subcomplex of L 1 , since both are subcomplexes of P 1 and |L 0 | ⊆ |L 1 |. Having defined P 0 , L 0 , ε 0 , P 1 , L 1 , ε 1 we can continue in the same fashion: given P i , L i , ε i we define P i+1 , L i+1 , ε i+1 in the same way we defined P 1 , L 1 , ε 1 starting from P 0 , L 0 , ε 0 , and observe that ε n → 0 as n → ∞. Since by construction each L i is a subcomplex of L i+1 , we can consider the infinite simplicial complex L := ∪ i∈N L i . We claim that its geometric realization is O. Granted the claim, by construction each closed simplex of L is contained in some W ∈ W, and the proof is finished. To prove the claim notice that by construction ∪ i |L i | ⊆ O. To prove the equality we must show that the complexes L i are not too small. Consider for instance L 1 .
We claim that if x ∈ O is such that its closed ε 1 -neighbourhood is contained in C 1 , then x ∈ |L 1 |. Indeed, consider the (open) simplex σ ∈ P 1 containing x. Then either σ̄ has diameter < ε 1 or it is included in the ε 1 -neighbourhood of |L 0 |, and in both cases σ is included in |L 1 |. The same argument applies for an arbitrary i ∈ N instead of i = 1 and immediately implies the desired claim (since ε i → 0).

9. Homotopy

In the rest of this section we work in the classical category of topological spaces and we give a sufficient condition for two maps to be homotopic. Later we shall need to adapt the proofs to the definable category, but with additional complications. Recall that the n-th homotopy group is defined as π n (Y ) := [S n , Y ] 0 , where S n is the n-th sphere and we put on π n (Y ) the usual group operation if n > 0 (see [Hat02] for the details).

Definition 9.2. Given a collection U of subsets of a set O and two functions f, g : Z → O, we say that f and g are U-close if for any z ∈ Z there is U ∈ U such that both f (z) and g(z) are in U .

The following definition is adapted from [Dug69, Note 4].

Definition 9.3. Let f : Z → Y be a function between two sets Z and Y . Let P be a collection of sets whose union ∪P includes Z, and let U be a collection of subsets of Y . We say that f is (U, P )-small if for every σ ∈ P the image f (σ ∩ Z) is contained in some U ∈ U.

Lemma 9.4. Let U be a locally finite good cover of a topological space Y and let L be a closed subcomplex of a closed simplicial complex P defined over R. Let f : |L ∪ P (0) | R → Y be a (U, P̄)-small map (recall that P̄ is the collection of all closures of simplexes of P ). Then f can be extended to a (U, P̄)-small map f ′ : |P | R → Y with the property that, for all U ∈ U and for every closed simplex σ̄ of P , if f (σ̄ ∩ |L ∪ P (0) |) ⊆ U , then f ′ (σ̄) ⊆ U .

Proof. Reasoning by induction we can assume that f ′ is already defined on |L ∪ P (k) | and we only need to extend it to |L ∪ P (k+1) |. Let σ ∈ P (k+1) .
We can identify σ̄ with the cone over its boundary ∂σ, so that every point of σ̄ is determined by a pair (t, x) with t ∈ [0, 1] and x ∈ ∂σ. Let U 1 , . . . , U n be the elements of U containing f ′ (σ̄ ∩ |L ∪ P (k) |) (notice that n > 0 by the inductive hypothesis), let V be their intersection, and let φ : [0, 1] × V → V be a contraction of V to a point. We extend f ′ to σ̄ sending (t, x) ∈ σ̄ to φ(t, f ′ (x)) ∈ V . Note that if f ′ (σ̄ ∩ |L ∪ P (k) |) ⊆ U ∈ U, then U is one of the U i , and since by construction f ′ (σ̄) ⊆ V = ∩ i U i , we get f ′ (σ̄) ⊆ U .

Proposition 9.5. Let U be a locally finite good cover of a topological space Y , let P be a closed simplicial complex and let f, g : |P | R → Y be two maps. Assume that f and g are U-close. Then f and g are homotopic.

Proof. Since f and g are U-close, the family V = {f −1 (U ) ∩ g −1 (U ) : U ∈ U} is an open cover of |P | R . By the Lebesgue number lemma (since we work over R) there is an iterated barycentric subdivision P ′ of P such that every closed simplex of P ′ is contained in some element of V. Then, by construction, for every σ ∈ P ′ there is U ∈ U such that f (σ̄) and g(σ̄) are contained in U . Let now I = [0, 1] and consider the simplicial complex P ′ × I with the standard triangulation (as in [Hat02, p. 112, Proof of 2.10]). Consider the subcomplex P ′ × {0, 1} of P ′ × I and note that it contains the 0-skeleton of P ′ × I. Define f ⊔ g : |P × {0, 1}| R = |P ′ × {0, 1}| R → Y as the function which sends (x, 0) to f (x) and (x, 1) to g(x). Note that f ⊔ g is (U, P ′ × I)-small. Since U is a good cover, by Lemma 9.4 we can extend it to a (U, P ′ × I)-small function H : |P × I| R → Y . This map is a homotopy between f and g.

10. Definable homotopies

Given a definable set Z and a ∨-definable set Y , we say that a map f : Z → Y is definable if it takes values in a definable subset Y 0 of Y and is definable as a function from Z to Y 0 . We can adapt Definition 9.1 to the definable category as follows.

Definition 10.1.
If Z is a definable space and Y is a ∨-definable set, we let [Z, Y ] def denote the set of all equivalence classes of definable continuous maps f from Z to Y modulo definable homotopies. Similarly we write [Z, Y ] def 0 when we work with pointed spaces and homotopies relative to the base point z 0 ∈ Z. The n-th o-minimal homotopy group is defined as π n (Y ) def := [S n , Y ] def 0 , where S n is the n-th sphere in M . If n > 0 we put on π n (Y ) def a group operation in analogy with the classical case.

In [BO02] it is proved that if Y is a ∅-semialgebraic set, π 1 (Y ) def ∼ = π 1 (Y (R)), so in particular π 1 (Y ) def is finitely generated. This has been generalized to the higher homotopy groups in [BO10]. We shall later give a self-contained proof of both results. By the same arguments we obtain the following result of [BMO10]: given a definably compact group G there is a natural isomorphism π n (G) def ∼ = π n (G/G 00 ). The new proofs yield a stronger result: if p : G → G/G 00 is the projection, for every open subset O of G/G 00 , there is an isomorphism π n (p −1 (O)) def ∼ = π n (O). This was so far known for n = 1 [BM11]. Notice that p −1 (O) is ∨-definable, whence the decision to consider ∨-definable sets in Definition 10.1. With the new approach we obtain additional functoriality properties and generalizations, as will be explained in the rest of the paper.

11. Theorem A

As above, let X = X(M ) be a definable space, and let E ⊆ X × X be a definable equivalence relation of bounded index. In this section we work under the following assumption.

Assumption 11.1 (Assumption A). X/E is a triangulable topological space and the natural map p : X → X/E is continuous.

The fact that X/E is triangulable allows us to apply the results of Section 8 regarding the existence of good covers. Note that the continuity of p is not a vacuous assumption because X/E has the logic topology, not the quotient topology.
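For orientation, the map constructed in this section can be previewed in display form; this is only a restatement of Theorem 11.8(1) and of the defining property of the map, both of which are established below.

```latex
% Preview of the map of Theorem 11.8 and its defining property:
\[
p^{P}_{O} \colon [\,|P|_{M},\; p^{-1}(O)\,]^{\mathrm{def}}
  \longrightarrow [\,|P|_{R},\; O\,],
\qquad
p^{P}_{O}([f]) = [f^{*}],
\]
% whenever U star-refines a good cover of O, f is (p^{-1}(U),P)-small,
% and f^{*} is a U-approximation of f.
```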
By the results in Section 5 and Section 6 the assumption is satisfied in the special case X/E = G/G 00 (where G is a definably compact group) and also when X is a closed and bounded ∅-semialgebraic set and E = ker(st). We shall prove that there is a natural homomorphism π def n (X) → π n (X/E). This will be obtained as a consequence of a more general result concerning homotopy classes. The following definition (Definition 11.2 below) plays a crucial role in the definition of the homomorphism, and exploits the analogies between the projection p : X → X/E and the standard part map. Its commutative square is:

|P | M --st--> |P | R
  | f            | f *
  v              v
p −1 (O) --p--> O

We say that f is U-approximable if it has a U-approximation. In general, given f and U, we cannot hope to find f * which is a U-approximation of f . However we shall prove that, given U, every definable continuous function f is definably homotopic to a U-approximable map.

Proof (of Lemma 11.4). Let σ ∈ P . Since f is continuous, f (σ̄) ⊆ cl(f (σ)) and by Proposition 3.5 we have p(cl(f (σ))) = p(f (σ)), so if f is (p −1 (U), P )-small, it is also (p −1 (U), P̄)-small.

The following lemma shows that small maps are approximable.

Proof (of Lemma 11.5). Define f * on the zero-skeleton of P by f * (0) (st(v)) = p(f (v)) for any vertex v of |P | M (since v has coordinates in R alg we can identify st(v) ∈ |P | R with v). Since f is (p −1 (V), P )-small, f * (0) is (V, P )-small and therefore, by Lemma 9.4 (and Lemma 11.4), we can extend f * (0) to a (V, P )-small map f * : |P | R → O. We claim that f * is a V-approximation of f . Indeed, fix a point z ∈ |P | M and let σ = σ M ∈ P be a simplex containing z. Since f is (p −1 (V), P )-small, there is a V ∈ V such that p • f (σ̄) ⊂ V , so in particular p • f (σ̄ (0) M ) = f * (0) (σ̄ (0) R ) ⊆ V . By Lemma 9.4, we also have f * (σ̄ R ) ⊆ V . Since st(σ̄ M ) = σ̄ R , both p • f (z) and f * (st(z)) are in V .

The next lemma shows that every map f is homotopic to a small (hence approximable) map f ′ .

Lemma 11.6. Let O ⊆ X/E be an open subset of X/E. Given a definable map f : |P | M → p −1 (O) and an open cover U of O, we can find a subdivision P ′ of P and a normal triangulation (P ′ , φ) of |P | such that f ′ = f • φ is (p −1 (U), P ′ )-small.
Moreover, if P is defined over R alg , we can take P ′ defined over R alg . Notice that f ′ is homotopic to f (as φ is homotopic to the identity).

Proof. By Proposition 4.6.

Lemma 11.7. Let U be a star-refinement of a good cover of O. Any two U-approximations of f : |P | M → p −1 (O) are homotopic.

Proof. Let f * 1 and f * 2 be two U-approximations of f . Then f * 1 and f * 2 are St(U)-close and, since U star-refines a good cover, they are homotopic by Proposition 9.5.

We are now ready to state the main result of this section (Theorem 11.8). The commutative diagram in its part (2) is:

[|P | M , p −1 (U )] --p P U--> [|P | R , U ]
        |                            |
        v                            v
[|P | M , p −1 (V )] --p P V--> [|P | R , V ]

where the vertical arrows are induced by the inclusions i p −1 (U ) and i U . Its statement also includes: (3) By the triangulation theorem, the same statements continue to hold if we replace everywhere |P | by a ∅-semialgebraic set.

In the rest of the section we fix a closed simplicial complex P in M defined over R alg and we prove Theorem 11.8. We shall define a map p P O : [|P | M , p −1 (O)] def → [|P | R , O], determined by a property stated below. We are only claiming that p P O ([f ]) = [f * ] will be the case if f is (p −1 (U), P )-small, which is a stronger property than being U-approximable. The reason for the introduction of this stronger property is that we are not able to show that if two definably homotopic maps are U-approximable, then their approximations are homotopic. We can do this only if the maps are (p −1 (U), P )-small. The formal definition is the following (Definition 11.9 below): given an open cover U of O which is a star-refinement of a good cover and a definable map f : |P | M → p −1 (O), by Lemma 11.6 there is a subdivision P ′ of P and a normal triangulation (P ′ , φ) of |P | such that f ′ = f • φ is (p −1 (U), P ′ )-small. By Lemma 11.5 f ′ has a U-approximation f ′ * . We shall see (Lemma 11.11 below) that the homotopy class [f ′ * ] does not depend on the choice of P ′ , φ and f ′ * , so we can define p P O ([f ]) = [f ′ * ].

To prove that the definition is sound we need the following (Lemma 11.10).

Proof (of Lemma 11.10). Let y ∈ |P | R and let x ∈ |P | M be such that st(x) = y. By definition of approximation f * 0 (y) is U-close to p(f 0 (x)), which by hypothesis is U-close to p(f 1 (x)), which in turn is U-close to f * 1 (y).
We deduce that f * 0 (y) is St(U)-close to f * 1 (y).

We can now finish the proof that Definition 11.9 is sound.

Lemma 11.11. Let U be a star-refinement of a good cover of O. Let f 0 , f 1 : |P | M → p −1 (O) be definably homotopic definable continuous maps and let (P 0 , φ 0 ) and (P 1 , φ 1 ) be two normal triangulations of P such that f 0 • φ 0 is (p −1 (U), P 0 )-small and f 1 • φ 1 is (p −1 (U), P 1 )-small. Now let (f 0 • φ 0 ) * and (f 1 • φ 1 ) * be U-approximations of f 0 • φ 0 and f 1 • φ 1 respectively. Then (f 0 • φ 0 ) * and (f 1 • φ 1 ) * are homotopic.

Proof. First note that f 0 • φ 0 and f 1 • φ 1 are definably homotopic, because so are f 0 and f 1 , and φ 0 , φ 1 are both definably homotopic to the identity. Let H : |P × I| M → p −1 (O) be a definable homotopy between f 0 • φ 0 = H 0 and f 1 • φ 1 = H 1 . Let P ′ be a common refinement of P 0 and P 1 (for the existence see for instance [Muk15, Cor. 9.5.8]). Now let (T, ψ) be a normal triangulation of P ′ × I such that H • ψ is (p −1 (U), T )-small. Notice that T induces two subdivisions P ′ 0 and P ′ 1 of P ′ such that (P ′ 0 × 0) ∪ (P ′ 1 × 1) is a subcomplex of T . Notice that both f 0 • φ 0 and f 1 • φ 1 are (p −1 (U), P ′ )-small, because the smallness property is preserved by refining the triangulations. Moreover, the restriction of ψ to the subcomplex (P ′ 0 × 0) ∪ (P ′ 1 × 1) induces two normal triangulations (P ′ 0 , ψ 0 ) and (P ′ 1 , ψ 1 ) of P ′ , namely ψ 0 (x) = y if and only if ψ(x, 0) = (y, 0), and similarly for ψ 1 . By the properties of normal triangulations, for each σ ∈ P ′ , we have ψ 0 (σ) = σ = ψ 1 (σ), so f 0 • φ 0 • ψ 0 and f 1 • φ 1 • ψ 1 are also (p −1 (U), P ′ )-small. Now let (H • ψ) * : |P × I| R → O be a U-approximation of H • ψ.
Then (H • ψ) * is a homotopy between two maps, which are easily seen to be U-approximations of f 0 • φ 0 • ψ 0 and f 1 • φ 1 • ψ 1 (the two maps induced by H • ψ by restriction), so we may call them (f 0 • φ 0 • ψ 0 ) * and (f 1 • φ 1 • ψ 1 ) * respectively. Since ψ 0 fixes the simplexes of P ′ and f 0 • φ 0 is (p −1 (U), P ′ )-small, we have that f 0 • φ 0 • ψ 0 is p −1 (U)-close to f 0 • φ 0 (because any point of |P | M belongs to some σ ∈ P ′ which is mapped into some element of p −1 (U) by both maps). By Lemma 11.10 it follows that (f 0 • φ 0 • ψ 0 ) * is St(U)-close to (f 0 • φ 0 ) * , hence homotopic to it. Similarly (f 1 • φ 1 • ψ 1 ) * is homotopic to (f 1 • φ 1 ) * , and composing the homotopies we obtain the desired result.

Continuing the proof of Lemma 11.12 (whose beginning appears below): fix some x ∈ |P | M and, using the definition of V-approximation, find V ′ ∈ V such that both (f ′ * • st)(x) ∈ V ′ and (p • f ′ )(x) ∈ V ′ . Notice that the latter implies that V ′ cannot be contained in C ∁ , hence it is contained in some element of U. This shows that f ′ * has image contained in U and is a U-approximation of f ′ . It follows that i U • p P U ([f ]) = p P V • i p −1 (U) ([f ]) = [f ′ * ].

Lemma 11.14. Theorem 11.8(4) holds, namely we can fix a base point and work with relative homotopy classes.

Proof. It suffices to notice that all the constructions in the proofs can equivalently be carried out for spaces with base points.

Lemma 11.15. Theorem 11.8(5) holds, namely for any open set O ⊆ X/E there is a well defined group homomorphism p P O : π n (p −1 (O)) def → π n (O).

Proof. We have already proved that there is a natural map p S n O : π n (p −1 (O)) def → π n (O). We need to check that this map is a group homomorphism. To this end, let S n−1 be the equator of S n . Recall that, given [f ], [g] ∈ π n (p −1 (O)) def , where f, g : S n → p −1 (O), the group operation [f ] * [g] is defined as follows. Consider the natural map φ : S n → S n /S n−1 = S n ∨ S n , and let [f ] * [g] = [(f ∨ g) • φ], where f ∨ g maps the first S n using f , and the second using g. A similar definition also works for π n (O).
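In display form, the operation just described (with φ the map collapsing the equator) is:

```latex
% Group operation on \pi_n via the pinch map \varphi:
\[
\varphi : S^{n} \longrightarrow S^{n}/S^{n-1} \;=\; S^{n} \vee S^{n},
\qquad
[f] * [g] \;=\; \big[(f \vee g) \circ \varphi\big],
\]
% where f \vee g restricts to f on the first wedge summand
% and to g on the second.
```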
Now, we have to check that p S n O ([f ] * [g]) = p S n O ([f ]) * p S n O ([g]). By the triangulation theorem we can identify S n with the realization of a simplicial complex P defined over R alg and, modulo homotopy and taking a subdivision, we can assume that f and g are (p −1 (U), P )-small, where U is an open cover of O star-refining a good cover. Let f * and g * be U-approximations of f, g respectively, so that p S n O ([f ]) = [f * ] and p S n O ([g]) = [g * ]. It now suffices to observe that f * ∨ g * is a U-approximation of f ∨ g, whence p S n O ([f ] * [g]) = [(f * ∨ g * ) • φ] = p S n O ([f ]) * p S n O ([g]). The proof of Theorem 11.8 is now complete.

12. Theorem B

In this section we work under the following strengthening of 11.1.

Assumption 12.1. X/E is a triangulable topological space and each fiber of p : X → X/E is the intersection of a decreasing sequence of definably contractible open sets.

By Proposition 3.4 the assumption implies in particular that p is continuous, so we have indeed a strengthening of 11.1. The above contractibility hypothesis was already exploited in [BM11, Ber09, Ber07] and is satisfied by the main examples discussed in Section 5 and Section 6. In the rest of the section we prove Theorem 12.2. The main difficulty is the following. The homotopy properties of a space are essentially captured by the nerve of a good cover, but unfortunately it is not easy to establish a correspondence between good covers of X/E in the topological category and good covers of X in the definable category. One can try to take the preimages p −1 (U ) in X of the open sets U belonging to a good cover of X/E, but these preimages are only ∨-definable, and if we approximate them by definable sets, we lose some control on the intersections. We shall show, however, that we can perform these approximations with a controlled loss of the amount of "goodness" of the covers. Granted all this, the idea is to lift homotopies from X/E to X, with an approach similar to the one of [Sma57, Dug69], namely we start with the restriction of the relevant maps to the 0-skeleton, and we go up in dimension. In the rest of the section fix an open set O ⊆ X/E. We need the following.

Lemma 12.4.
Let V be an open cover of O. Then there is a refinement W of V such that for every W ∈ W there is V ∈ V and a definably contractible definable set B ⊆ X such that p −1 (W ) ⊆ B ⊆ p −1 (V ).

Proof. Let y ∈ O. By our assumption p −1 (y) is a decreasing intersection ∩ i∈N B i (y) of definably contractible definable sets B i (y). Now let V (y) ∈ V contain y and note that p −1 (V (y)) is a ∨-definable set containing p −1 (y) = ∩ i∈N B i (y). By logical compactness B n (y) ⊆ p −1 (V (y)) for some n = n(y) ∈ N. By Proposition 3.3 we can find an open neighbourhood W (y) of y with p −1 (W (y)) ⊆ B n (y). We can thus define W as the collection of all the sets W (y) as y varies in O.

Corollary 12.5. Let V be an open cover of O. Then there is a refinement W of V such that every definable continuous map f defined on the boundary |∂σ| M of a closed simplex, with image contained in p −1 (W ) for some W ∈ W, can be extended to a definable continuous map on |σ̄| M with image contained in p −1 (V ) for some V ∈ V.

Proof. Let V and W be as in Lemma 12.4. By hypothesis, and by the property of W, we have that f (|∂σ| M ) ⊆ p −1 (W ) ⊆ B ⊆ p −1 (V ) for some definably contractible set B and some V ∈ V. Then, f can be extended to a definable map on |σ̄| M with image contained in B ⊆ p −1 (V ).

Definition 12.6. If W and V are as in Corollary 12.5, we say that W is semi-good within V.

Lemma 12.7. For any open cover U of O and any n ∈ N, there is a refinement W of U such that, given an n-dimensional closed simplicial complex P , a closed subcomplex L, and a (p −1 (W), P )-small definable continuous map f : |L ∪ P (0) | M → p −1 (O), there is a (p −1 (U), P )-small definable continuous map F : |P | M → p −1 (O) extending f .

Proof. Reasoning by induction, it suffices to show that given k < n and an open cover U of O, there is a refinement W of U such that, given an n-dimensional closed simplicial complex P and a (p −1 (W), P )-small definable map f : |L ∪ P (k) | M → p −1 (O), there is a (p −1 (U), P )-small definable map F : |L ∪ P (k+1) | M → p −1 (O) extending f . To this aim, consider three open covers W ≺ V ≺ U of O such that V is a star-refinement of U and W is semi-good within V. Let σ ∈ P (k+1) be a (k + 1)-dimensional closed simplex such that σ̄ is not included in the domain of f .
Since |∂σ| M ⊆ |σ ∩ P (k) | M ⊆ dom(f ) and f is (p −1 (W), P )-small, there is W ∈ W such that f (|∂σ| M ) ⊆ f (|σ ∩ P (k) | M ) ⊆ p −1 (W ). By the choice of W, there is V σ ∈ V such that we can extend f |∂σ to a map F σ : |σ̄| M → p −1 (V σ ), and we define F : |L ∪ P (k+1) | M → p −1 (O) as the union of f and the various F σ for σ ∈ P (k+1) . It remains to prove that F : |L ∪ P (k+1) | M → p −1 (O) is (p −1 (U), P )-small. To this aim let τ ∈ P be any simplex. By our hypothesis there is W ∈ W such that f (|τ ∩ P (k) | M ) ⊆ p −1 (W ). Now let V ∈ V contain W . By construction each face σ of τ belonging to L ∪ P (k+1) is mapped by F into p −1 (V σ ) for some V σ ∈ V. Moreover V σ intersects W , so it is included in St V (W ). The latter depends only on τ and not on σ and is contained in some U ∈ U. We have thus shown that ∪ σ V σ is contained in some U ∈ U, thus showing that F is (p −1 (U), P )-small.

Definition 12.8. Let U be an open cover of O. If W is as in Lemma 12.7 we say that W is n-good within U. If the only member of U is O (or if the choice of U is irrelevant), we simply say that W is n-good.

Lemma 12.9. Let n ∈ N and let W be an (n + 1)-good cover of O. If P is an n-dimensional simplicial complex and f, g : |P | M → p −1 (O) are definable continuous functions such that for every σ ∈ P there is W ∈ W such that f (σ) and g(σ) are contained in p −1 (W ), then f and g are definably homotopic.

Proof. Let I = [0, 1] and consider the simplicial complex P × I (of dimension n + 1) with the standard triangulation (as in [Hat02, p. 112, Proof of 2.10]). Consider the subcomplex P × {0, 1} of P × I and note that it contains the 0-skeleton of P × I. Define f ⊔ g : |P × {0, 1}| M → p −1 (O) as the function which sends (x, 0) to f (x) and (x, 1) to g(x). Note that f ⊔ g is (p −1 (W), P × I)-small by hypothesis. By Lemma 12.7 we can extend it to a definable continuous function H : |P × I| M → p −1 (O). This map is a homotopy between f and g.

Lemma 12.10. Let n ∈ N.
Let V be an open covering of O which is a star refinement of an (n + 1)-good cover W. Given an n-dimensional simplicial complex P and definable continuous maps f, g : |P | M → p −1 (O), if there is a map f * : |P | R → O which is a V-approximation of both f and g, then f and g are definably homotopic.

Proof. Let P ′ be an iterated barycentric subdivision of P such that for each σ ∈ P ′ there is V ∈ V such that f * (σ) ⊆ V . We claim that for each σ ∈ P ′ , there is a W ∈ W such that p • f (σ), p • g(σ) (and f * • st(σ)) are in W . Given this claim, we can conclude using Lemma 12.9. To prove the claim, fix a σ ∈ P ′ and let V ∈ V be such that f * (σ) ⊆ V . Since f * is a V-approximation of f , for each x ∈ σ there is V x ∈ St(V) such that p • f (x) and f * • st(x) are in V x , and similarly there is a V ′ x such that p • g(x) and f * • st(x) are in V ′ x . Since V intersects both V x and V ′ x , St V (V ) contains both p • f (x) and p • g(x), and since St(V) refines W, there is W ∈ W with the same property.

Proof (of Lemma 12.11). Let U be an (n + 1)-good covering of O, let V be such that St(V) is a star refinement of U, and let W ≺ V be (n + 1)-good within V. Let T be a barycentric subdivision of |P × I| R such that G is (W, T )-small. Let H (0) : T (0) → p −1 (O) be such that p • H (0) = G • st on the vertices of T . For each simplex σ ∈ T there is W ∈ W such that G(|σ| R ) ⊆ W , hence H (0) (|σ (0) | M ) ⊆ p −1 (W ).

Proof (of Lemma 12.12). Let n = dim(P ), let V be a star-refinement of U and let W be n-good within V. Consider an iterated barycentric subdivision P ′ of P such that f * is (W, P ′ )-small. Let f (0) : P ′(0) → p −1 (O) be such that p • f (0) = f * • st.

Proof (of Theorem 12.2). If p P O ([f ]) = p P O ([g]), that is, f * and g * are homotopic, we can apply Lemma 12.11 to find a definable homotopy between f and g, and so [f ] = [g]. The surjectivity is immediate from Lemma 12.12.
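In display form, the content of Theorem 12.2 and Corollary 12.3 just established is:

```latex
% Theorem 12.2 and Corollary 12.3 in display form:
\[
p^{P}_{O} \colon [\,|P|_{M},\ p^{-1}(O)\,]^{\mathrm{def}}
  \;\xrightarrow{\ \cong\ }\; [\,|P|_{R},\ O\,],
\qquad
\pi_{n}\!\big(p^{-1}(O)\big)^{\mathrm{def}} \;\cong\; \pi_{n}(O),
\]
% and, specializing to the standard part map on a closed and bounded
% \emptyset-semialgebraic set X:
\[
\pi_{n}(X)^{\mathrm{def}} \;\cong\; \pi_{n}(X(\mathbb{R})).
\]
```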
13. Theorem C

In this section we work under the following strengthening of 12.1, where we consider definable proper balls (Definition 6.2) instead of definably contractible sets.

Assumption 13.1. X/E is a triangulable manifold, X is a definable manifold, and each fiber of p : X → X/E is the intersection of a decreasing sequence of definable proper balls.

We also need:

Assumption 13.2 (Topological compact domination). The image under p : X → X/E of a definable subset of X with empty interior, has empty interior.

Both assumptions are satisfied by p : G → G/G 00 for any definably compact group G (see Section 7 and Corollary 6.4).

Theorem 13.3 (Theorem C). Under Assumption 13.2, we have dim(X) = dim R (X/E).

To prove the theorem the idea is to exploit the following link between homotopy and dimension: given a manifold Y and a punctured open ball U := A \ {y} in Y , the dimension of Y is the least integer i such that π i−1 (U ) ≠ 0.

Proposition 13.4. dim(X) ≥ dim R (X/E).

Proof. Let n = dim(X) and N = dim R (X/E). Fix x ∈ X and let y = p(x). Let B 0 be an open definable ball containing p −1 (y). Since X/E is a manifold, there is a decreasing sequence of proper balls A i ⊆ X/E such that {y} = ∩ i∈N A i = ∩ i∈N cl(A i ). Now B 0 ⊇ p −1 (y) = ∩ i∈N p −1 (cl(A i )) and p −1 (cl(A i )) is type-definable (because cl(A i ) is closed), so there is some i ∈ N with p −1 (cl(A i )) ⊆ B 0 . Let A = A i and observe that p −1 (A) is ∨-definable and contains the type-definable set p −1 (y). Since the latter is a decreasing intersection of definable proper balls, there is some definable proper ball B 1 such that x ∈ p −1 (y) ⊆ B 1 ⊆ cl(B 1 ) ⊆ p −1 (A) ⊆ B 0 . Now let f : S n−1 → ∂B 1 = cl(B 1 ) \ B 1 be a definable homeomorphism (whose existence follows by the hypothesis that the ball is proper). By fixing base points, we can consider the homotopy class [f ] as a non-zero element of π def n−1 (B 0 \ x) (namely f is not definably homotopic to a constant in B 0 \ x).
A fortiori, 0 ≠ [f ] ∈ π def n−1 (p −1 (A) \ p −1 (y)), because if f were homotopic to a constant within the smaller space, it would remain so in the larger space. Now observe that p −1 (A) \ p −1 (y) = p −1 (A \ y) and by Theorem 12.2 we have π def n−1 (p −1 (A \ y)) ∼ = π n−1 (A \ y). We conclude that π n−1 (A \ y) ≠ 0, and since A is an open ball in the manifold X/E this can happen only if n ≥ N .

So far we have not used the full strength of the assumption, namely the topological compact domination.

Proposition 13.5. dim(X) ≤ dim R (X/E).

Proof. As before, let n = dim(X) and N = dim R (X/E). Let A 0 ⊆ X/E be an open N -ball, namely a set homeomorphic to {x ∈ R N : |x| < 1}. Let A 1 ⊆ A 0 be the image of {x ∈ R N : |x| < 1/2} under the homeomorphism and note that 0 ≠ π N −1 (A 0 \ A 1 ) and A 0 \ A 1 is a deformation retract of A 0 \ {y} for every y ∈ A 1 . By Theorem 12.2, we have 0 ≠ π N −1 (p −1 (A 0 \ A 1 )) def , so there is a map f : S N −1 → p −1 (A 0 \ A 1 ) of pointed spaces with 0 ≠ [f ] ∈ π N −1 (p −1 (A 0 \ A 1 )) def . Since A 0 is a ball, we have π N −1 (A 0 ) = 0 and, by Theorem 12.2, π def N −1 (p −1 (A 0 )) = 0 as well. In particular [f ] = 0 when seen as an element of π N −1 (p −1 (A 0 )) def . This is equivalent to saying that f can be extended to a definable map F : D → p −1 (A 0 ), where D = S N −1 × I and F is a definable homotopy (relative to the base point) between f and a constant map. Notice that dim(F (D)) ≤ dim(D) = N . Now assume for a contradiction that N < dim(X). Then dim(F (D)) < dim(X), and therefore F (D) has empty interior in X. By topological compact domination (p • F )(D) has empty interior in X/E, so in particular there is some y ∈ A 1 such that y ∉ (p • F )(D). It follows that the image of F is disjoint from p −1 (y), namely F takes values in p −1 (A 0 ) \ p −1 (y) = p −1 (A 0 \ y) and witnesses the fact that f is null-homotopic when seen as a map into p −1 (A 0 \ y). We can now reach a contradiction as follows.
Since A 0 \ A 1 is a deformation retract of A 0 \ {y}, the inclusion induces an isomorphism π N −1 (A 0 \ A 1 ) ∼ = π N −1 (A 0 \ y). By the functoriality part in Theorem 12.2, there is an induced isomorphism π def N −1 (p −1 (A 0 \ A 1 )) ∼ = π def N −1 (p −1 (A 0 \ y)). Moreover, this isomorphism sends the homotopy class of f to the homotopy class of f itself, but seen as a map with a different codomain. This is absurd since f was not null-homotopic as a map to p −1 (A 0 \ A 1 ), while we have shown that it is null-homotopic as a map to p −1 (A 0 \ y).

As a corollary we obtain the following: if G is a definably connected definably compact abelian group with dim(G) = n, then π def 1 (G) ∼ = Z n and, for every k, the k-torsion subgroup G[k] of G is isomorphic to (Z/kZ) n .

Proof. By [BOPP05], G/G 00 is a compact abelian connected Lie group and by the previous result its dimension is n. It follows that G/G 00 is isomorphic to an n-dimensional torus, so π 1 (G/G 00 ) ∼ = Z n and, by Theorem 12.2, π def 1 (G) ∼ = Z n as well. To determine the k-torsion two approaches are possible. The first is to argue as in [EO04], namely to observe that G[k] ∼ = π def 1 (G)/kπ def 1 (G) and π def 1 (G) ∼ = Z n . Alternatively we can use the fact that G 00 is divisible [BOPP05] and torsion free [HPP08], so G and G/G 00 have isomorphic torsion subgroups. Since G/G 00 is a torus of dimension n, its torsion is known and we obtain the desired result. Notice that in [EO04] both the isomorphism π def 1 (G) ∼ = Z n and the determination of the k-torsion of G are proved directly without using G/G 00 , while our argument is a reduction to the case of the classical tori.

Fact 4.2 (o-minimal triangulation theorem [vdD98]). Every definable set X ⊆ M m can be triangulated. Moreover, if S 1 , . . . , S l are finitely many definable subsets of X, there is a triangulation φ : |P | M → X compatible with S 1 , . . . , S l .

Definition 4.3. Let P be an (open) simplicial complex in M m and let S 1 , . . . , S l be definable subsets of |P |. A normal triangulation of P is a triangulation (P ′ , φ ′ ) of |P | satisfying the following conditions:

Fact 4.4 (Normal triangulation theorem [Bar10]).
If S 1 , . . . , S l are finitely many definable subsets of |P |, there exists a normal triangulation of P compatible with S 1 , . . . , S l .

Proposition 5.2. The preimage of any point y ∈ st(X) under st : X → st(X) is open in X ⊆ M k . In particular, the standard part map is continuous (as the preimage of every subset is open).

Fact 7.1 ([BO04, Cor. 4.4]). Let X be a closed and bounded ∅-semialgebraic subset of M k and let D be a definable subset of X. Then st(D) ∩ st(D ∁ ) ⊆ R k has Lebesgue measure zero.

Definition 8.3. Let U be an open cover of a topological space Y . Given A ⊆ Y we recall that the star of A with respect to U, denoted St U (A), is the union of all U ∈ U such that U ∩ A ≠ ∅. We say that U star refines another cover V if for each U ∈ U there is a V ∈ V such that St U (U ) ⊆ V . We define St(U) to be the cover consisting of the sets St U (U ) as U ranges in U.

Proposition 8.4. Let O be an open subset of a triangulable space Y (not necessarily a manifold). Then every open cover V of O has a locally finite refinement U which is a good cover.

Recall that two continuous maps f 0 , f 1 : Z → Y between topological spaces are homotopic if there is a continuous function H : Z × [0, 1] → Y such that H(z, 0) = f 0 (z) and H(z, 1) = f 1 (z) for every z ∈ Z. Given base points z 0 ∈ Z and y 0 ∈ Y and a function f : Z → Y , we write f : (Z, z 0 ) → (Y, y 0 ) if f sends z 0 to y 0 . Given a homotopy H between two maps f 0 , f 1 : (Z, z 0 ) → (Y, y 0 ), we say that H is a homotopy relative to z 0 if H(z 0 , t) = y 0 for all t ∈ I, where I = [0, 1].

Definition 9.1. If Z and Y are topological spaces, we let [Z, Y ] denote the set of all homotopy classes of continuous functions from Z to Y . Given base points z 0 ∈ Z and y 0 ∈ Y , we let [(Z, z 0 ), (Y, y 0 )], or simply [Z, Y ] 0 , denote the set of all homotopy classes of continuous functions f : (Z, z 0 ) → (Y, y 0 ) relative to z 0 .
The n-th homotopy group is defined as π n (Y ) := [S n , Y ] 0 .

Definition 11.2. Let O ⊆ X/E be an open subset. Let U be an open cover of O ⊆ X/E, and let P be a closed simplicial complex defined over R alg . Consider a definable continuous map f : |P | M → p −1 (O) and let st : |P | M → |P | R be the standard part map. We say that a continuous map f * : |P | R → O is a U-approximation of f if p • f and f * • st are U-close, namely the two paths from the upper-left to the lower-right corner of the square displayed in Section 11 represent maps which are U-close.

Definition 11.3. Given a collection U of open subsets of X/E, let p −1 (U) be the collection consisting of the ∨-definable open sets p −1 (U ) ⊆ X as U varies in U.

Notice that f : |P | M → X is (p −1 (U), P )-small (Definition 9.3) if and only if (p • f ) : |P | M → X/E is (U, P )-small. The next lemma shows that in this situation we can ignore the difference between closed and open simplexes. Recall that P̄ = {σ̄ : σ ∈ P }. We have:

Lemma 11.4. Let U be a collection of open subsets of X/E and let f : |P | M → X be a definable continuous map. Then f is (p −1 (U), P )-small if and only if it is (p −1 (U), P̄)-small.

Lemma 11.5. Let V be a locally finite good open cover of O and let f : |P | M → p −1 (O) be a (p −1 (V), P )-small map. Then there exists a V-approximation f * : |P | R → O of f .

Lemma 11.6. Let O ⊆ X/E be an open subset of X/E. Given a definable map f : |P | M → p −1 (O) and an open cover U of O, we can find a subdivision P ′ of P and a normal triangulation (P ′ , φ) of |P | such that f ′ = f • φ is (p −1 (U), P ′ )-small.

Theorem 11.8 (Theorem A). Assume 11.1. (1) For each open set O ⊆ X/E, there is a map p P O : [|P | M , p −1 (O)] def → [|P | R , O]. (2) The maps p P O are natural with respect to inclusions of open sets. More precisely, let U ⊆ V be open sets in X/E.
Then, we have the following commutative diagram: ( 4 ) 4The results remain valid replacing all the homotopy classes with their pointed versions as in Definition 9.1 and Definition 10.1.(5) In particular, for all O ⊆ X/E there is a natural map p S n O : π def n (p −1 (O)) → π n (O) which is in a group homomorphisms when n > 0. When O = X/E we obtain a homomorphism π def n (X) → π n (X/E). by the following property: if U is a star-refinement of a good cover of O, f is (p −1 (U), P )-small and f * is a U-approximation of f , then p P O ([f ]) = [f * ]. A word of caution is in order: we are not claiming that if f is U-approximable and f * is an approximation of f , then p P O ([f ]) = [f * ]. Definition 11. 9 ( 9Definition of the map p P O ). Let O ⊆ X/E be an open set. Let U be an open cover of O which is a star-refinement of a good cover and let f : |P | M → p −1 (O) be a definable map. By Lemma 11.6 there is a subdivision P ′ of P and a normal triangulation ( Lemma 11 . 10 . 1110Let f 0, f 1 : |P | M → X be definable maps and let f * 0 and f * 1 be U-approximations of f 0 , f 1 respectively. If f 0 , f 1 are p −1 (U)-close, then f * 0 and f * 1 are St(U)-close. and composing the homotopies we obtain the desired result.Lemma 11.12. Points (1) and (2) of Theorem 11.8 hold.Proof. We have already proved that p P O is well defined and we need to establish the naturality with respect to inclusions of open setsU ⊆ V ⊆ X/E. Let f : |P | M → p −1 (U ) ⊆ p −1 (V ) bea continuos definable map and notice that C = p•f (|P | M ) is a closed set. By Theorem 11.8(1), there are open covers U of U and V of V which starrefine a good cover of U and V respectively. We can further assume that V refines U ∪ {C ∁ }. By Lemma 11.6 there is a definable homeomorphism ψ : |P | M → |P | M definably homotopic to the identity such that f ′ := f • ψ is V-approximable (and clearly definably homotopic to f ). Since ψ(|P | M ) = |P | M , we have p•f ′ (|P | M ) = C. 
Let f ′ * : |P | R → V be a V-approximation of f ′ . Then by definition p P V ([f ]) = [f ′ * ]. Now fix some x ∈ |P | M , and using the definition of Lemma 11.13. Theorem 11.8(3) holds, namely we can work with ∅-semialgebraic sets instead of simplicial complexes. Proof. If Z is a ∅-semialgebraic set, there is a ∅-definable homeomorphism f : |P | → Z where P is a simplicial complex P with real algebraic vertices. We have induced bijections f * M : [Z(M ), p −1 (U )] def ≃ [|P | M , p −1 (U )] def and f * R : [|Z| R , U ] ≃ [|P | R , U ]. The results now follows from the previous points of the theorem. S n O ([f ]) = [f * ] and p S n O ([g]) = [g * ] .Now it suffices to observe that f * ∨ g * is a U-approximation of f ∨ g. Theorem 12.2 (Theorem B). Assume that p : X → X/E satisfies 12.1. Thenthe map p P O : [|P | M , p −1 (O)] def → [|P | R , O]in Theorem 11.8 is a bijection and similarly for pointed spaces. Thus in particular π n (X) def ∼ = π n (X/E) and more generally we have a natural isomorphismπ n (p −1 (O)) def ∼ = π n (O) for every open subset O ⊆ X/E.Recall that if X = X(M ) ⊆ M k is a closed and bounded ∅-semialgebraic and st : X → X(R) is the standard part map, we can identify p : X → X/E with st : X → X(R) and deduce the following result of[BO09].Corollary 12.3. If X = X(M ) ⊆ M k is a closed and bounded ∅-semialgebraic and st : X → X(R) is the standard part map π n (X) def ∼ = π n (X(R)) and similarly [|P | M , X] def ∼ = [|P | R , X(R)]. Lemma 12 . 11 . 1211Let n ∈ N. There is an open cover W of O such that, given an n-dimensional simplicial complex P and definable continuous maps f, g : |P | M → p −1 (O), if f * and g * are W-approximations of f and g respectively, and G : |P × I| R → O is a homotopy between f * and g * , then there is a definable homotopy H : |P × I| M → p −1 (O) between f and g. Using Lemma 12.7 we can extend H 0 to a (p −1 (V), T )-small definable continous map H : |T | M → p −1 (O). 
If x = (x, 0) ∈ |P × 0| M is a vertex of T , then (f * • st)(x) = (G • st)(x) = (p • H)(x) by construction. Since moreover f * • st and p • H are (V, T )-small, it follows that f * • st and p • H |0 are St(V)-close, hence f * is a St(V)-approximation of both f (by hypothesis) and of H |0 . We can then conclude using Lemma 12.10 that f and H |0 are homotopic, and, similarly, that H |1 is homotopic to g. Composing the homotopies, we can finally prove that f is homotopic to g. Lemma 12.12. Let U be an open cover of O. Let f * : |P | R → O be a continuous map. Then, we can find a map f : |P | M → p −1 (O) such that f * is a U-approximation of f . on the vertices of P ′ . Then we can apply Lemma 12.7 to extend f (0) to a (V, P ′ )-small map f : |P | M → p −1 (O). Now notice that p • f and f * • st are St(V)-close (since they are (V, P ′ )-small and they coincide on the vertices), and therefore f * is a St(V)-approximation of f , so also a U-approximation.We can now finish the proof of the main result of this section.Proof of Theorem 12.2. First we prove the injectivity. Suppose that p P O ([f ]) = p P O ([g]). Let W be as in Lemma 12.11. Choosing a different representative of the homotopy classes we can assume without loss of generality that f and g are (p −1 (W), P )-small, p P O ([f ]) = [f * ] and p P O ([g]) = [g * ], where f * and g * are Wapproximation of f and g respectively. By definition [f * ] = [g * ] Corollary 13.6 ([EO04]). Let G be an abelian definably compact and definably connected group of dimension n. Then π def1 (G) ∼ = Z n and G[k] ∼ = (Z/kZ) n , where G[k] is the k-torsion subgroup. Proposition 3.4. Assume that X is a definable space and each fiber of p : X → X/E is a downward directed intersection of definable open subsets of X. Then p : X → X/E is continuous. Proof. By Lemma 2.1 the preimage of any point is open, hence the preimage of every set is open. Definition 8.1. Let U be an open cover of a topological space Y . 
We say that U is a good cover if every finite intersection U 1 ∩ . . . ∩ U n of open sets U 1 , . . . , U n ∈ U is contractible.

References

Elías Baro. Normal triangulations in o-minimal structures. Journal of Symbolic Logic, 75(1):275-288, 2010.
Alessandro Berarducci. O-minimal spectra, infinitesimal subgroups and cohomology. Journal of Symbolic Logic, 72(4):1177-1193, 2007.
Alessandro Berarducci. Cohomology of groups in o-minimal structures: acyclicity of the infinitesimal subgroup. Journal of Symbolic Logic, 74(3):891-900, 2009.
Alessandro Berarducci and Antongiulio Fornasiero. o-Minimal cohomology: finiteness and invariance results. Journal of Mathematical Logic, 9(2):167-182, 2009.
Alessandro Berarducci and Marcello Mamino. On the homotopy type of definable groups in an o-minimal structure. Journal of the London Mathematical Society, 83(3):563-586, 2011.
Alessandro Berarducci, Marcello Mamino, and Margarita Otero. Higher homotopy of groups definable in o-minimal structures. Israel Journal of Mathematics, 180(1):143-161, 2010.
Alessandro Berarducci and Margarita Otero. o-Minimal fundamental group, homology and manifolds. Journal of the London Mathematical Society, 65(2):257-270, 2002.
Alessandro Berarducci and Margarita Otero. An additive measure in o-minimal expansions of fields. The Quarterly Journal of Mathematics, 55(4):411-419, 2004.
Elías Baro and Margarita Otero. On o-minimal homotopy groups. The Quarterly Journal of Mathematics, 61(3):275-289, 2009.
Elías Baro and Margarita Otero. Locally definable homotopy. Annals of Pure and Applied Logic, 161(4):488-503, 2010.
Alessandro Berarducci, Margarita Otero, Ya'acov Peterzil, and Anand Pillay. A descending chain condition for groups definable in o-minimal structures. Annals of Pure and Applied Logic, 134(2-3):303-313, 2005.
Hans Delfs and M. Knebusch. Locally Semialgebraic Spaces. Springer, Berlin, 1985.
James Dugundji. Modified Vietoris theorems for homotopy. Fundamenta Mathematicae, 66:223-235, 1969.
Mário J. Edmundo and Margarita Otero. Definably compact abelian groups. Journal of Mathematical Logic, 4(2):163-180, 2004.
Allen Hatcher. Algebraic Topology. Cambridge University Press, 2002.
Ehud Hrushovski and Anand Pillay. On NIP and invariant measures. Journal of the European Mathematical Society, 13:1005-1061, 2011.
Ehud Hrushovski, Ya'acov Peterzil, and Anand Pillay. Groups, measures, and the NIP. Journal of the American Mathematical Society, 21(2):563-596, 2008.
A. Mukherjee. Differential Topology. Springer International Publishing, 2015.
James R. Munkres. Elements of Algebraic Topology. 1984.
Anand Pillay. On groups and fields definable in o-minimal structures. Journal of Pure and Applied Algebra, 53(3):239-255, 1988.
Anand Pillay. Type-definability, compact Lie groups, and o-minimality. Journal of Mathematical Logic, 4(2):147-162, 2004.
Ya'acov Peterzil and Anand Pillay. Generic sets in definably compact groups. Fundamenta Mathematicae, 193(2):153-170, 2007.
Ya'acov Peterzil and Charles Steinhorn. Definable compactness and definable subgroups of o-minimal groups. Journal of the London Mathematical Society, 59(3):769-786, 1999.
Saharon Shelah. Minimal bounded index subgroup for dependent theories. Proceedings of the American Mathematical Society, 136(3):1087, 2008.
Pierre Simon. Distal and non-distal NIP theories. Annals of Pure and Applied Logic, 164(3):294-318, 2013.
Pierre Simon. Finding generically stable measures. The Journal of Symbolic Logic, 77(1):263-278, 2012.
Pierre Simon. A Guide to NIP Theories. Lecture Notes in Logic. Cambridge University Press, 2015.
Stephen Smale. A Vietoris mapping theorem for homotopy. Proceedings of the American Mathematical Society, 8(3):604, 1957.
Katrin Tent and Martin Ziegler. A Course in Model Theory. Cambridge University Press, 2012.
Lou van den Dries. Tame Topology and O-minimal Structures. LMS Lecture Note Series 248, Cambridge University Press, 1998.
PARTIAL COACTIONS OF WEAK HOPF ALGEBRAS ON COALGEBRAS

Graziela Fonseca, Eneilson Fontes, Grasiela Martini

arXiv:2009.09903 (https://arxiv.org/pdf/2009.09903v1.pdf), doi:10.1142/s0219498822500128

Abstract. It will be seen that if H is a weak Hopf algebra in the definition of a coaction of weak bialgebras on coalgebras [15], then one defining property becomes redundant, giving rise to the (global) coactions of weak Hopf algebras on coalgebras. The next step is to introduce the more general notion of partial coactions of weak Hopf algebras on coalgebras, as well as a family of examples via a fixed element of the weak Hopf algebra, illustrating both definitions: global and partial. Moreover, it will also be shown how to obtain a partial comodule coalgebra from a global one via projections, giving another way to find examples of partial coactions of weak Hopf algebras on coalgebras. In addition, the weak smash coproduct [15] will be studied, and it will be seen under what conditions it is possible to generate a weak Hopf algebra structure from the coproduct and the counit defined on it. Finally, a dual relationship between the structures of partial action and partial coaction of a weak Hopf algebra on a coalgebra will be established.
PARTIAL COACTIONS OF WEAK HOPF ALGEBRAS ON COALGEBRAS

21 Sep 2020

Keywords: weak Hopf algebra, globalization, dualization, partial comodule coalgebra, weak smash coproduct. Mathematics Subject Classification: primary 16T99; secondary 20L05.

Introduction

Partial action theory first appeared in [11] in the context of operator algebras. Later, in [10], M. Dokuchaev and R. Exel brought partial actions to a purely algebraic context, contributing to the development of classical results, such as Galois theory, in the case of partial actions of groups on rings. Following this line of research, S. Caenepeel and K. Janssen introduced the notions of partial actions and coactions of Hopf algebras on algebras in [6].
The main idea of studying partial actions in the context of Hopf algebras is to generalize the results obtained for partial group actions to this broader setting. The notions of partial actions and coactions of Hopf algebras on coalgebras appeared for the first time in [9], dualizing the structures introduced in [6]. As a natural step, the notion of partial actions of weak Hopf algebras on algebras was introduced in [8], where the authors extended many results of the classical theory to this setting. In [7] we introduced the theory of partial actions of weak Hopf algebras on coalgebras, inspired by the notion of a partial action of a Hopf algebra on a coalgebra presented in [9]. In particular, [7] constructs a correspondence between a partial action of a groupoid G on a coalgebra C and a partial action of the groupoid algebra kG on the coalgebra C. In the present work we continue the development of the theory of partial actions. The notions of partial and global coactions of weak Hopf algebras on coalgebras are introduced, together with some important properties and examples. In the sequel, we study the weak smash coproduct presented in [15] in order to determine under what conditions this structure is a weak Hopf algebra. The paper is organized as follows. The second section is devoted to the study of weak Hopf algebras, their properties and some examples that will be used throughout the text. A weak bialgebra is a vector space that carries an algebra and a coalgebra structure simultaneously, with a compatibility condition between these structures; the axioms of a weak bialgebra appeared for the first time in [4]. If a weak bialgebra is equipped with a suitable anti-homomorphism of algebras and coalgebras (the antipode), then we say that it is a weak Hopf algebra. The main difference between a weak Hopf algebra and a Hopf algebra is that the counit of a Hopf algebra is an algebra homomorphism, while that of a weak Hopf algebra need not be. The first author was partially supported by CNPq, Brazil.
The concept of a coaction of a weak bialgebra on a coalgebra was introduced in [15]. In Section 3, the coaction of a weak Hopf algebra on a coalgebra is presented. Generalizing this concept, the definition of a partial coaction of a weak Hopf algebra on a coalgebra is given, together with its properties and a family of examples. We also determine necessary and sufficient conditions for a partial comodule coalgebra to be obtained from a global comodule coalgebra via a projection. Section 4 investigates the weak smash coproduct presented in [15]. The idea is to construct a weak Hopf algebra from the coalgebra structure existing on the weak smash coproduct. Historically, the construction of Hopf algebras and weak Hopf algebras from global and partial (co)actions has been studied by several authors, as can be seen in texts such as [1], [13] and [14]; this reflects a sustained interest in producing new examples of such structures. Our contribution is to make the weak smash coproduct into a weak Hopf algebra under certain conditions. From now on, some notation is fixed. k denotes a field, unless additional assumptions on it are explicitly made. Moreover, every tensor product is taken over k, so we write ⊗ instead of ⊗ k . A always denotes an algebra, C a coalgebra and H a weak Hopf algebra. Throughout the text, further properties may be required of the structures A, C and H, and they will be duly mentioned. Besides that, every map is k-linear and all vector spaces are over k. Finally, the isomorphism V ⊗ k ≃ V ≃ k ⊗ V is used implicitly for every vector space V .

Preliminaries

In this section, we present a few results on weak Hopf algebras. For more details we refer to [2], [3] and [4].
A weak bialgebra (H, m, u, ∆, ε) (or simply H) is a vector space such that (H, m, u) is an algebra, (H, ∆, ε) is a coalgebra and, in addition, the following conditions are satisfied for all h, k, ℓ ∈ H:
(i) ∆(hk) = ∆(h)∆(k);
(ii) ε(hkℓ) = ε(hk 1 )ε(k 2 ℓ) = ε(hk 2 )ε(k 1 ℓ);
(iii) (1 H ⊗ ∆(1 H ))(∆(1 H ) ⊗ 1 H ) = (∆(1 H ) ⊗ 1 H )(1 H ⊗ ∆(1 H )) = ∆ 2 (1 H ).
Since ∆ is multiplicative, we conclude that ∆(h) = ∆(h1 H ) = ∆(1 H h), hence

h 1 ⊗ h 2 = h 1 1 1 ⊗ h 2 1 2 = 1 1 h 1 ⊗ 1 2 h 2 . (1)

The counit ε can be used to define the linear maps ε t : H → H, h → ε(1 1 h)1 2 , and ε s : H → H, h → 1 1 ε(h1 2 ). This gives the vector spaces H t = ε t (H) and H s = ε s (H). Thus, for any weak bialgebra H, every element h ∈ H can be written as

h = (ε ⊗ I)∆(h) = (ε ⊗ I)∆(1 H h) = ε t (h 1 )h 2 , (2)
h = (I ⊗ ε)∆(h) = (I ⊗ ε)∆(h1 H ) = h 1 ε s (h 2 ). (3)

Proposition 2.1. Let H be a weak bialgebra. Then, the following properties hold for all h, k ∈ H:

ε t (ε t (h)) = ε t (h) (4)
ε s (ε s (h)) = ε s (h) (5)
ε(hε t (k)) = ε(hk) (6)
ε(ε s (h)k) = ε(hk) (7)
∆(1 H ) ∈ H s ⊗ H t (8)
ε t (hε t (k)) = ε t (hk) (9)
ε s (ε s (h)k) = ε s (hk) (10)
∆(h) = 1 1 h ⊗ 1 2 for all h ∈ H t (11)
∆(h) = 1 1 ⊗ h1 2 for all h ∈ H s (12)
h 1 ⊗ ε t (h 2 ) = 1 1 h ⊗ 1 2 (13)
ε s (h 1 ) ⊗ h 2 = 1 1 ⊗ h1 2 (14)
hε t (k) = ε(h 1 k)h 2 (15)
ε s (h)k = k 1 ε(hk 2 ) (16)
h1 1 ⊗ 1 2 = 1 1 h ⊗ 1 2 for all h ∈ H t (17)
ε t (ε t (h)k) = ε t (h)ε t (k) (18)
ε s (hε s (k)) = ε s (h)ε s (k) (19)

Let H be a weak bialgebra. We say that H is a weak Hopf algebra if there is a linear map S : H −→ H, called the antipode, which satisfies:
(i) h 1 S(h 2 ) = ε t (h);
(ii) S(h 1 )h 2 = ε s (h);
(iii) S(h 1 )h 2 S(h 3 ) = S(h),
for all h ∈ H. The antipode of a weak Hopf algebra is anti-multiplicative, that is, S(hk) = S(k)S(h), and anti-comultiplicative, which means S(h) 1 ⊗ S(h) 2 = S(h 2 ) ⊗ S(h 1 ).

Proposition 2.2. Let H be a weak Hopf algebra.
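Several of these identities can be checked numerically on a toy model. The sketch below is my own illustration, not taken from the paper: it encodes the smallest genuinely weak example, H = k{d_e, d_f}, the span of two orthogonal idempotent group-like elements, and verifies that ∆(1 H ) ≠ 1 H ⊗ 1 H , that ε is not multiplicative, and that ε t is idempotent as in (4).

```python
import numpy as np

# Toy weak bialgebra (a hypothetical illustration): H = k{d_e, d_f} with
# d_x d_y = [x = y] d_x, Delta(d_x) = d_x (x) d_x and eps(d_x) = 1.
# Elements are coefficient vectors; tensors in H (x) H are 2x2 matrices.

def mult(h, k):                  # pointwise product of coefficient vectors
    return h * k

def delta(h):                    # Delta(d_x) = d_x (x) d_x, stored as np.diag
    return np.diag(h)

def eps(h):                      # eps(d_x) = 1, extended linearly
    return h.sum()

one = np.array([1.0, 1.0])       # 1_H = d_e + d_f

def eps_t(h):                    # eps_t(h) = eps(1_1 h) 1_2, using Delta(1_H)
    D = delta(one)
    return np.array([sum(D[i, j] * h[i] for i in range(2)) for j in range(2)])

d_e, d_f = np.eye(2)
# Delta(1_H) != 1_H (x) 1_H and eps is not multiplicative: H is weak.
assert not np.allclose(delta(one), np.outer(one, one))
assert eps(mult(d_e, d_f)) == 0.0 and eps(d_e) * eps(d_f) == 1.0
# eps_t is idempotent, identity (4); here H is commutative, so eps_t = id.
h = np.array([2.0, -3.0])
assert np.allclose(eps_t(eps_t(h)), eps_t(h))
```

Since here H t = H ≠ k1 H , condition (v) of Proposition 2.3 below fails, so this H is weak but not an ordinary Hopf algebra (the identity map serves as its antipode).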
Then, the following identities hold for all h ∈ H:

ε t (h) = ε(S(h)1 1 )1 2 (20)
ε s (h) = 1 1 ε(1 2 S(h)) (21)
ε t • S = ε t • ε s = S • ε s (22)
ε s • S = ε s • ε t = S • ε t (23)
h 1 ⊗ S(h 2 )h 3 = h1 1 ⊗ S(1 2 ) (24)
h 1 S(h 2 ) ⊗ h 3 = S(1 1 ) ⊗ 1 2 h (25)

Hence, if H is a weak Hopf algebra, S(1 H ) = 1 H , ε • S = ε, S(H t ) = H s , S(H s ) = H t and S(H) is also a weak Hopf algebra, with the same counit and antipode. It is easy to see that every Hopf algebra is a weak Hopf algebra. Conversely, we have the following result.

Proposition 2.3. A weak Hopf algebra is a Hopf algebra if one of the following equivalent conditions is satisfied:
(i) ∆(1 H ) = 1 H ⊗ 1 H ;
(ii) ε(hk) = ε(h)ε(k);
(iii) h 1 S(h 2 ) = ε(h)1 H ;
(iv) S(h 1 )h 2 = ε(h)1 H ;
(v) H t = H s = k1 H ;
for all h, k ∈ H.

In order to construct an example of a weak Hopf algebra, we present the following definition.

Definition 2.4 (Groupoid). Consider G a non-empty set with a partially defined binary operation, denoted by concatenation and called the product. Given g, h ∈ G, we write ∃gh whenever the product gh is defined (similarly, we write ∄gh whenever the product is not defined). Then, G is called a groupoid if:
(i) For all g, h, l ∈ G, ∃(gh)l if and only if ∃g(hl), and, in this case, (gh)l = g(hl);
(ii) For all g, h, l ∈ G, ∃(gh)l if and only if ∃gh and ∃hl;
(iii) For each g ∈ G there are unique elements d(g), r(g) ∈ G such that ∃gd(g), ∃r(g)g and gd(g) = g = r(g)g;
(iv) For each g ∈ G there exists an element g −1 ∈ G such that d(g) = g −1 g and r(g) = gg −1 .
Moreover, the element g −1 is the only one satisfying such properties and, in addition, (g −1 ) −1 = g, for all g ∈ G. An element e is called an identity of G if e = d(g) = r(g −1 ) for some g ∈ G. Therefore, e 2 = e, which implies that d(e) = e = r(e) and e = e −1 . We denote by G 0 the set of all identity elements of G. Besides that, one can define the set G 2 = {(g, h) ∈ G × G | ∃gh} of all pairs of composable elements of G.
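Definition 2.4 can be made concrete in a few lines. The following sketch is a hypothetical example of my own: it encodes the groupoid given by the disjoint union of the groups Z/2 and Z/3, where gh is defined exactly when g and h lie in the same group, and checks axioms (iii) and (iv).

```python
# A small illustrative groupoid: the disjoint union of Z/2 and Z/3.
# An element is a pair (component, group element); None means "not defined".
ORDERS = {1: 2, 2: 3}                         # component -> group order
G = [(i, g) for i, n in ORDERS.items() for g in range(n)]

def prod(a, b):                               # partially defined product
    (i, g), (j, h) = a, b
    return (i, (g + h) % ORDERS[i]) if i == j else None

def inv(a):                                   # the element g^{-1} of axiom (iv)
    i, g = a
    return (i, (-g) % ORDERS[i])

d = lambda a: prod(inv(a), a)                 # d(g) = g^{-1} g
r = lambda a: prod(a, inv(a))                 # r(g) = g g^{-1}

for a in G:                                   # axiom (iii): g d(g) = g = r(g) g
    assert prod(a, d(a)) == a and prod(r(a), a) == a

G0 = {d(a) for a in G}                        # the set of identities
assert G0 == {(1, 0), (2, 0)}                 # one identity per component
```

Since G 0 here has two elements, the associated groupoid algebra kG (Example 2.6 below) has 1 G = δ (1,0) + δ (2,0) , which is the source of the "weakness" of its Hopf structure.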
Example 2.6 (Groupoid Algebra). Let G be a groupoid such that the cardinality of G 0 is finite and let kG be the vector space with basis {δ g } g∈G indexed by the elements of G. Then, kG is a weak Hopf algebra with the following structures:

m(δ g ⊗ δ h ) = δ gh if ∃gh, and 0 otherwise
u(1 k ) = 1 G = Σ e∈G0 δ e
∆(δ g ) = δ g ⊗ δ g
ε(δ g ) = 1 k
S(δ g ) = δ g −1 .

Note that when a weak Hopf algebra H is finite dimensional, the dual structure H * = Hom(H, k) is a weak Hopf algebra with the convolution product (f * g)(h) = m(f ⊗ g)(h) = f (h 1 )g(h 2 ) for all h ∈ H, the unit u H * (1 k ) = 1 H * = ε H , the coproduct determined by the relation ∆ H * (f ) = f 1 ⊗ f 2 ⇔ f (hk) = f 1 (h)f 2 (k) for all h, k ∈ H, and the counit ε H * (f ) = f (1 H ). Besides that, we have (ε t ) H * (f ) = f • ε t and (ε s ) H * (f ) = f • ε s .

Example 2.7 (Dual Groupoid Algebra). Let G be a finite groupoid and (kG) * the vector space with basis {p g } g∈G indexed by the elements of G, where p g (δ h ) = 1 k if g = h, and 0 otherwise. Then, (kG) * is a weak Hopf algebra with the following structures:

p g * p h = p g if g = h, and 0 otherwise
1 (kG) * = Σ g∈G p g
∆ (kG) * (p g ) = Σ h∈G,∃h −1 g p h ⊗ p h −1 g
ε (kG) * (p g ) = p g (1 kG )
S (kG) * (p g ) = p g • S.

The following example of a weak Hopf algebra was presented by G. Böhm and J. Gómez-Torrecillas in [3].

Example 2.8. Consider G a finite abelian group of cardinality |G|, where |G| is not a multiple of the characteristic of k. Consider kG the group algebra, with coalgebra structure given by

∆(g) = (1/|G|) Σ h∈G gh ⊗ h −1
ε(g) = |G| if g = 1 G , and 0 otherwise.

Then, kG is a weak Hopf algebra with antipode defined by S(g) = g. Besides that, ε s (g) = ε t (g) = g for all g ∈ G, which implies that H s = H t = kG.

Comodule Coalgebra

Consider H a weak bialgebra. In [15], Yu. Wang and L.
Zhang defined C to be a (left) H-comodule coalgebra when there exists a linear map

ρ : C −→ H ⊗ C, c −→ c −1 ⊗ c 0

such that for all c ∈ C:
(CC1) (ε H ⊗ I C )ρ(c) = c
(CC2) (I H ⊗ ∆ C )ρ(c) = (m H ⊗ I C ⊗ I C )(I H ⊗ τ C,H ⊗ I C )(ρ ⊗ ρ)∆ C (c)
(CC3) (I H ⊗ ρ)ρ(c) = (∆ H ⊗ I C )ρ(c)
(CC4) (I H ⊗ ε C )ρ(c) = (ε t ⊗ ε C )ρ(c).
In this case, it is said that H coacts on the coalgebra C.

Proposition 3.1. If H is a weak Hopf algebra and there exists a linear map ρ : C −→ H ⊗ C, c −→ c −1 ⊗ c 0 , that satisfies (CC1)-(CC3), then the condition (CC4) is satisfied.

Proof. Suppose that there is a linear map ρ that satisfies the conditions (CC1)-(CC3). Then, for every c ∈ C,

ε t (c −1 )ε C (c 0 ) = c −1 1 S H (c −1 2 )ε C (c 0 )
(CC3): = c −1 S H (c 0−1 )ε C (c 00 )
(CC2): = c 1 −1 c 2 −1 S H (c 2 0−1 )ε C (c 2 00 )ε C (c 1 0 )
(CC3): = c 1 −1 c 2 −1 1 S H (c 2 −1 2 )ε C (c 1 0 )ε C (c 2 0 )
(15): = c 1 −1 2 ε H (c 1 −1 1 c 2 −1 )ε C (c 1 0 )ε C (c 2 0 )
(CC3): = c 1 0−1 ε C (c 1 00 )ε C (c 2 0 )ε H (c 1 −1 c 2 −1 )
(CC2): = c 0 1 −1 ε C (c 0 1 0 )ε C (c 0 2 )ε H (c −1 )
(CC1): = c 1 −1 ε C (c 1 0 )ε C (c 2 )
= (I H ⊗ ε C )ρ(c).

Example 3.2. [15] Consider H a finite dimensional weak Hopf algebra. Then, H is a H * -comodule coalgebra via

ρ : H −→ H * ⊗ H, h i −→ h i * ⊗ h i ,

where {h i } n i=1 is a basis for H and {h i * } n i=1 is the dual basis for H * .

3.1. Partial Comodule Coalgebra. In this section, the main purpose is to introduce the concept of a partial coaction of a weak Hopf algebra H on a coalgebra. We also introduce some examples that support the theory exposed here, together with some properties.

Definition 3.3. We say that C is a (left) partial H-comodule coalgebra (or that H coacts partially on C) if there exists a linear map

ρ : C −→ H ⊗ C, c −→ c −1 ⊗ c 0

such that for all c ∈ C:
(CCP1) (ε H ⊗ I C )ρ(c) = c
(CCP2) (I H ⊗ ∆ C )ρ(c) = (m H ⊗ I C ⊗ I C )(I H ⊗ τ C,H ⊗ I C )(ρ ⊗ ρ)∆ C (c)
(CCP3) (I H ⊗ ρ)ρ(c) = (m H ⊗ I H ⊗ I C )[(I H ⊗ ε C )(ρ(c 1 )) ⊗ (∆ H ⊗ I C )(ρ(c 2 ))].
Moreover, C is said to be a (left) symmetric partial H-comodule coalgebra if, in addition, it satisfies

(I H ⊗ ρ)ρ(c) = (m H ⊗ I H ⊗ I C )(I H ⊗ τ H⊗C,H )[(∆ H ⊗ I C )(ρ(c 1 )) ⊗ (I H ⊗ ε C )(ρ(c 2 ))].

Remark 3.4. Every H-comodule coalgebra is a partial H-comodule coalgebra. Indeed, for every c ∈ C,

(I H ⊗ ρ)ρ(c) = c −1 ⊗ c 0−1 ⊗ c 00
(CC3): = c −1 1 ⊗ c −1 2 ⊗ c 0
(CC2): = c 1 −1 1 c 2 −1 1 ⊗ c 1 −1 2 c 2 −1 2 ⊗ c 2 0 ε C (c 1 0 )
(3.1): = ε t (c 1 −1 ) 1 c 2 −1 1 ⊗ ε t (c 1 −1 ) 2 c 2 −1 2 ⊗ c 2 0 ε C (c 1 0 )
(11): = 1 H 1 ε t (c 1 −1 )c 2 −1 1 ⊗ 1 H 2 c 2 −1 2 ⊗ c 2 0 ε C (c 1 0 )
(17): = ε t (c 1 −1 )1 H 1 c 2 −1 1 ⊗ 1 H 2 c 2 −1 2 ⊗ c 2 0 ε C (c 1 0 )
(1): = c 1 −1 c 2 −1 1 ⊗ c 2 −1 2 ⊗ c 2 0 ε C (c 1 0 )
= (m H ⊗ I H ⊗ I C )[(I H ⊗ ε C )(ρ(c 1 )) ⊗ (∆ H ⊗ I C )(ρ(c 2 ))].

Proposition 3.5. Let C be a partial H-comodule coalgebra. Then, C is an H-comodule coalgebra if and only if c −1 ε C (c 0 ) = ε t (c −1 )ε C (c 0 ) for all c ∈ C.

Proof. Suppose that C is a partial H-comodule coalgebra that satisfies c −1 ε C (c 0 ) = ε t (c −1 )ε C (c 0 ). Then, it is enough to show that (I H ⊗ ρ)ρ(c) = (∆ H ⊗ I C )ρ(c):

(I H ⊗ ρ)ρ(c)
(1): = c 1 −1 ε C (c 1 0 )1 H 1 c 2 −1 1 ⊗ 1 H 2 c 2 −1 2 ⊗ c 2 0
(17): = 1 H 1 ε t (c −1 )ε C (c 1 0 )c 2 −1 1 ⊗ 1 H 2 c 2 −1 2 ⊗ c 2 0
(11): = ε t (c −1 ) 1 ε C (c 1 0 )c 2 −1 1 ⊗ ε t (c −1 ) 2 c 2 −1 2 ⊗ c 2 0
(CCP2): = c −1 1 ε C (c 0 1 ) ⊗ c −1 2 ⊗ c 0 2
= (∆ H ⊗ I C )ρ(c).

Example 3.6. Consider kG a groupoid algebra, where G is generated by the disjoint union of the finite groups G 1 and G 2 . Then, the group algebra kG 1 is a partial (kG) * -comodule coalgebra via

ρ : kG 1 → (kG) * ⊗ kG 1 , h → p e1 ⊗ h,

where e 1 is the identity element of G 1 .

3.2. Coactions via ρ h . In this section a specific family of examples of partial comodule coalgebras is explored. We say that C is a (left) H-comodule coalgebra via ρ h if, for some fixed h ∈ H, the linear map

ρ h : C −→ H ⊗ C, c −→ h ⊗ c

defines a structure of comodule coalgebra on C.
Note that, since H is a weak Hopf algebra, the above map does not always define a structure of H-comodule coalgebra on C. To see this, it is enough to observe that ρ 1 H (c) = 1 H ⊗ c makes C a comodule coalgebra if and only if H is a Hopf algebra. The following result characterizes the properties that an element h ∈ H must satisfy in order for ρ h to be a coaction of H on a coalgebra C.

Proposition 3.7. C is a (left) H-comodule coalgebra via ρ h if and only if
(i) ε H (h) = 1 k
(ii) h 2 = h
(iii) ∆ H (h) = h ⊗ h.

Proof. The proof follows immediately from the definition of comodule coalgebra via ρ h . Besides that, if h ∈ H satisfies the properties of Proposition 3.7, then h = ε t (h).

Example 3.8. Consider kG the groupoid algebra of a groupoid G. Then, fixing an element e ∈ G 0 , ρ δe determines a structure of kG-comodule coalgebra on any coalgebra C, by Proposition 3.7. Moreover, it is easy to see that ρ δg gives a structure of kG-comodule coalgebra on a coalgebra C if and only if g ∈ G 0 .

Turning to the partial case, we say that C is a (left) partial H-comodule coalgebra via ρ h , for some fixed h ∈ H, if the linear map

ρ h : C −→ H ⊗ C, c −→ h ⊗ c

determines a structure of partial H-comodule coalgebra on C.

Proposition 3.10. C is a (left) partial H-comodule coalgebra via ρ h if and only if
(i) ε H (h) = 1 k
(ii) (h ⊗ 1 H )∆ H (h) = h ⊗ h.
Observe that if h ∈ H satisfies the properties (i) and (ii), then h 2 = h.

Proof. The proof follows immediately from the definition of partial H-comodule coalgebra via ρ h . Note that C is a symmetric partial H-comodule coalgebra via ρ h if and only if
(i) ε H (h) = 1 k
(ii) (h ⊗ 1 H )∆ H (h) = h ⊗ h
(iii) ∆ H (h)(h ⊗ 1 H ) = h ⊗ h.

Remark 3.11. If in addition h ∈ H satisfies h = ε t (h), then C is an H-comodule coalgebra.
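Proposition 3.10 is easy to test by machine. The sketch below is a small hypothetical instance of my own encoding: it takes H = (kG) * for G the disjoint union of G 1 = Z/2 and a trivial group, and h = p e1 as in Example 3.6, and confirms conditions (i) and (ii) together with ∆(h) ≠ h ⊗ h, so that ρ h is a genuinely partial coaction.

```python
# (kG)* for G = Z/2 (component 'A') disjoint union a trivial group ('B').
# A functional is a dict over the arrows of G in the dual basis {p_g}.
ORDERS = {'A': 2, 'B': 1}
G = [(i, g) for i, n in ORDERS.items() for g in range(n)]
G0 = [(i, 0) for i in ORDERS]                  # the identities e1, e2

def star(f1, f2):                              # p_g * p_h = [g = h] p_g
    return {g: f1.get(g, 0) * f2.get(g, 0) for g in G}

def Delta(f):                                  # Delta(p_g) = sum_h p_h (x) p_{h^{-1}g}
    T = {}
    for (i, x), c in f.items():
        for y in range(ORDERS[i]):
            key = ((i, y), (i, (x - y) % ORDERS[i]))
            T[key] = T.get(key, 0) + c
    return T

def eps(f):                                    # eps(p_g) = p_g(1_kG) = [g in G0]
    return sum(c for g, c in f.items() if g in G0)

def left_mult(f, T):                           # (f (x) 1_{(kG)*}) . T
    out = {}
    for (u, v), c in T.items():
        cu = star(f, {u: 1})[u] * c            # multiply the first tensor leg by f
        if cu:
            out[(u, v)] = out.get((u, v), 0) + cu
    return out

e1 = ('A', 0)
h = {e1: 1}                                    # h = p_{e1}
hh = {(e1, e1): 1}                             # h (x) h
assert eps(h) == 1                             # condition (i)
assert left_mult(h, Delta(h)) == hh            # condition (ii)
assert Delta(h) != hh                          # not group-like: only a partial coaction
```

Since ∆(p e1 ) also contains the term p a ⊗ p a for the non-identity a ∈ G 1 , condition (iii) of Proposition 3.7 fails, in agreement with Example 3.6 being partial but not global.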
If kG is the weak Hopf algebra given in Example 2.8 and h is an element of G, then every partial kG-comodule coalgebra via ρ h is actually a (global) kG-comodule coalgebra via ρ h , thanks to Remark 3.11. The following example was inspired by Example 3.2.3 of [12], where k is seen as a partial kG-comodule algebra.

Example 3.12. Consider kG the groupoid algebra where the groupoid G is the disjoint union of the finite groups G 1 and G 2 . Under these conditions, any coalgebra C is a partial kG-comodule coalgebra via ρ h , where h = Σ g∈G1 (1/|G 1 |) δ g .

We can also characterize the coaction of the weak Hopf algebra kG on k when G is a finite groupoid.

Induced Coaction. Let H be a weak Hopf algebra and C a coalgebra. Suppose that C is a H-comodule coalgebra via

ρ : C −→ H ⊗ C, c −→ c −1 ⊗ c 0 .

Our goal in this section is to construct a symmetric partial H-comodule coalgebra from a H-comodule coalgebra. For this, consider D ⊆ C a subcoalgebra of C such that there exists a projection π : C → C onto D, i.e., π(π(c)) = π(c) for all c ∈ C, and Im π = D. Under these conditions, the following result can be obtained.

Proposition 3.15. D is a symmetric partial H-comodule coalgebra via

ρ : D −→ H ⊗ D, d −→ (I H ⊗ π)ρ(d) = d −1 ⊗ π(d 0 )

if and only if the projection π satisfies:
(i) d −1 ⊗ ∆ D (π(d 0 )) = d −1 ⊗ (π ⊗ π)(∆ D (d 0 ));
(ii) d −1 ⊗ π(d 0 ) −1 ⊗ π(π(d 0 ) 0 ) = d 1 −1 ε D (π(d 1 0 ))d 2 −1 1 ⊗ d 2 −1 2 ⊗ π(d 2 0 ) = d 1 −1 1 d 2 −1 ε D (π(d 2 0 )) ⊗ d 1 −1 2 ⊗ π(d 1 0 );
for all d ∈ D. In this case we say that ρ is an induced coaction.

Proof. Suppose that D is a symmetric partial H-comodule coalgebra via ρ(d) = (I H ⊗ π)ρ(d) = d −1 ⊗ π(d 0 ). Therefore,

d −1 ⊗ ∆ D (π(d 0 )) = (I H ⊗ ∆ D )(ρ(d))
(CCP2): = (m H ⊗ I D ⊗ I D )(I H ⊗ τ H,D ⊗ I D )(ρ ⊗ ρ)∆ D (d)
(CC2): = d −1 ⊗ (π ⊗ π)(∆ D (d 0 )).
Besides that,
d_{−1} ⊗ π(d_0)_{−1} ⊗ π(π(d_0)_0) = (I_H ⊗ ρ)ρ(d) (CCP3) = (m_H ⊗ I_H ⊗ I_D)[(I_H ⊗ ε_D)(ρ(d_1)) ⊗ (∆_H ⊗ I_D)(ρ(d_2))] = (d_1)_{−1} ε_D(π((d_1)_0)) ((d_2)_{−1})_1 ⊗ ((d_2)_{−1})_2 ⊗ π((d_2)_0).
Analogously, using the symmetry condition,
d_{−1} ⊗ π(d_0)_{−1} ⊗ π(π(d_0)_0) = (I_H ⊗ ρ)ρ(d) = ((d_1)_{−1})_1 (d_2)_{−1} ε_D(π((d_2)_0)) ⊗ ((d_1)_{−1})_2 ⊗ π((d_1)_0).
The converse is immediate.

Note that the induced coaction is an H-comodule coalgebra if and only if ε_t(d_{−1}) ε_D(π(d_0)) = d_{−1} ε_D(π(d_0)) for all d ∈ D.

Example 3.16. Consider finite groups G_1 and G_2, G the groupoid generated by the disjoint union of these groups, and kG its groupoid algebra. Define ρ : kG_1 → (kG)* ⊗ kG_1, h ↦ Σ_{g∈G_1} p_g ⊗ hg, with {g}_{g∈G_1} a basis for the Hopf algebra kG_1 and {p_g}_{g∈G} the dual basis for the weak Hopf algebra (kG)*. Thus, kG_1 is a (kG)*-comodule coalgebra. Define π : kG_1 → kG_1 by π(g) = g if g ∈ D and π(g) = 0 otherwise, where D = <l>_k for some l fixed in G_1. Then D is a symmetric partial (kG)*-comodule coalgebra by Proposition 3.15. Moreover, the induced coaction so constructed is not global. Indeed, on the one hand,
h_{−1} ε_D(π(h_0)) = Σ_{g∈G_1} p_g ε_D(π(hg)) = p_{h⁻¹l},
since hg = l exactly when g = h⁻¹l. On the other hand,
ε_t^{(kG)*}(h_{−1}) ε_D(π(h_0)) = Σ_{g∈G_1} ε_t^{(kG)*}(p_g) ε_D(π(hg)) = ε_t^{(kG)*}(p_{h⁻¹l}).
Note that p_{h⁻¹l} ≠ ε_t^{(kG)*}(p_{h⁻¹l}). Therefore, D is not a global (kG)*-comodule coalgebra.

Weak Smash Coproduct

Yu. Wang and L. Yu. Zhang, in [15], introduced a new structure generated from an H-comodule coalgebra, the weak smash coproduct. A natural question therefore arises: "Under what conditions does the weak smash coproduct become a weak Hopf algebra?" This section is devoted to answering this question and to making a contribution to the existing theory. The weak smash coproduct was defined as the vector space C × H = <c_0 ⊗ h_2 ε_H(c_{−1} h_1)>_k endowed with a specific coalgebra structure.
Initially we start showing that this structure of coalgebra is inherited from some properties of the vector space C ⊗ H. ∆(c ⊗ h) = c 1 ⊗ c 2 −1 h 1 ⊗ c 2 0 ⊗ h 2 . Moreover, if C is an algebra, then C ⊗ H is an algebra with product given by (c ⊗ h)(d ⊗ k) = (cd ⊗ hk), and unit 1 (C⊗H) = 1 C ⊗ 1 H . Proof. Indeed for all c ∈ C and h ∈ H (I ⊗ ∆)∆(c ⊗ h) = c 1 ⊗ c 2 −1 h 1 ⊗ c 2 0 1 ⊗ c 2 0 2 −1 h 2 ⊗ c 2 0 2 0 ⊗ h 3 (CC2) = c 1 ⊗ c 2 −1 c 3 −1 h 1 ⊗ c 2 0 ⊗ c 3 0−1 h 2 ⊗ c 3 00 ⊗ h 3 (CC3) = c 1 ⊗ c 2 −1 c 3 −1 1 h 1 ⊗ c 2 0 ⊗ c 3 −1 2 h 2 ⊗ c 3 0 ⊗ h 3 = (∆ ⊗ I)∆(c ⊗ h). The properties of associativity and unit follow naturally. C × H = < c 0 ⊗ h 2 ε H (c −1 h 1 ) > k is a coalgebra with counit ε(c × h) = ε C (c 0 )ε H (c −1 h). Proof. Indeed for all c ∈ C and h ∈ H ∆(c × h) = ∆(c 0 ⊗ h 2 ε H (c −1 h 1 )) = c 0 1 ⊗ c 0 2 −1 h 2 ε H (c −1 h 1 ) ⊗ c 0 2 0 ⊗ h 3 (CC2) = c 1 0 ⊗ c 2 0−1 h 2 ε H (c 1 −1 c 2 −1 h 1 ) ⊗ c 2 00 ⊗ h 3 (CC3) = c 1 0 ⊗ c 2 −1 2 h 2 ε H (c 1 −1 c 2 −1 1 h 1 ) ⊗ c 2 0 ⊗ h 3 = c 1 0 ⊗ (c 2 −1 1 ) 2 h 2 ε H (c 1 −1 (c 2 −1 1 ) 1 h 1 ) ⊗ c 2 0 ⊗ h 4 ε H (c 2 −1 2 h 3 ) (CC3) = c 1 0 ⊗ c 2 −1 2 h 2 ε H (c 1 −1 c 2 −1 1 h 1 ) ⊗ c 2 00 ⊗ h 4 ε H (c 2 0−1 h 3 ) = c 1 × c 2 −1 h 1 ⊗ c 2 0 × h 2 . Besided that, C × H is counitary since (ε ⊗ I)∆(c × h) = (ε ⊗ I)(∆(c 0 ⊗ h 2 ε H (c −1 h 1 ))) = (ε ⊗ I)(c 0 1 ⊗ c 0 2 −1 h 2 ε H (c −1 h 1 ) ⊗ c 0 2 0 ⊗ h 3 ) = ε C (c 0 1 )ε H (c 0 2 −1 h 2 )ε H (c −1 h 1 )(c 0 2 0 ⊗ h 3 ) = ε C (c 0−1 h 2 )ε H (c −1 h 1 )(c 00 ⊗ h 3 ) (CC3) = ε H (c −1 2 h 2 )ε H (c −1 1 h 1 )(c 0 ⊗ h 3 ) = ε H (c −1 h 1 )(c 0 ⊗ h 2 ) = c × h. Similarly, From now, suppose that C is a H-comodule bialgebra (then C is a weak bialgebra) where H is a commutative weak Hopf algebra. It follows from the commutativity of H that ε t and ε s are multiplicatives. Moreover, S. Caenepeel and E. 
De Groot showed in [5] that (I ⊗ ε)∆(c × h) = (c 0 1 ⊗ c 0 2 −1 h 2 )ε H (c −1 h 1 )ε C (c 0 2 0 )ε H (h 3 ) = (c 0 1 ⊗ c 0 2 −1 h 2 )ε H (c −1 h 1 )ε C (c 0 2 0 ) (CC2) = (c 1 0 ⊗ c 2 0−1 h 2 )ε H (c 1 −1 c 2 −1 h 1 )ε C (c 2 00 ) (CC3) = (c 1 0 ⊗ c 2 −1 2 h 2 )ε H (c 1 −1 c 2 −1 1 h 1 )ε C (c 2 0 ) = (c 1 0 ⊗ ε t (c 2 −1 ) 2 h)ε H (c 1 −1 ε t (c 2 −1 ) 1 )ε C (c 2 0 ) (11) = (c 1 0 ⊗ 1 H 2 h)ε H (c 1 −1 1 H 1 ε t (c 2 −1 ))ε C (c 2 0 ) (17) = (c 1 0 ⊗ 1 H 2 h)ε H (c 1 −1 ε t (c 2 −1 )1 H 1 )ε C (c 2 0 ) (8) = (c 0 ⊗ 1 H 2 h)ε H (c −1 ε s (1 H 1 ))(6)= (c 0 ⊗ 1 H 2 h)ε H (c −1 ε t (ε s (1 H 1 ))) (22) = (c 0 ⊗ 1 H 2 h)ε H (c −1 ε t (S H (1 H 1 ))) (6) = (c 0 ⊗ 1 H 2 h)ε H (c −1 S H (1 H 1 )) (25) = (c 0 ⊗ h 2 )ε H (c −1 ε t (h 1 )) (6) = (c 0 ⊗ h 2 )ε H (c −1 h 1 ) = c × h. Therefore, C × H =< c 0 ⊗ h 2 ε H (c −1 h 1 ) > k is a coalgebra.ε t • ε t = ε t (26) ε s • ε s = ε s (27) where ε t (h) = 1 H 1 ε H (1 H 2 h) and ε s (h) = 1 H 2 ε H (h1 H 1 ) for all h ∈ H. Since H is commutative ε t (h) = 1 H 1 ε H (1 H 2 h) = 1 H 1 ε H (h1 H 2 ) = ε s (h). And, analogously ε s (h) = ε t (h). Therefore, ε t • ε s = ε t (28) ε s • ε t = ε s .(29) These properties for ε t and ε s will be commonly used throughout this section. Proof. Let c, b ∈ C and h, k ∈ H. 
Then, (c × h)(b × k) = (c 0 ⊗ h 2 ε H (c −1 h 1 ))(b 0 ⊗ k 2 ε H (b −1 k 1 )) (6) = c 0 b 0 ⊗ h 2 k 2 ε H (c −1 ε t (h 1 ))ε H (b −1 ε t (k 1 )) (25) = c 0 b 0 ⊗ 1 H 2 h1 H 2 ′ kε H (c −1 S H (1 H 1 ))ε H (b −1 S H (1 H 1 ′ )) (6) = c 0 b 0 ⊗ 1 H 2 h1 H 2 ′ kε H (c −1 ε t (S H (1 H 1 )))ε H (b −1 ε t (S H (1 H 1 ′ ))) (22) = c 0 b 0 ⊗ 1 H 2 h1 H 2 ′ kε H (c −1 ε t (ε s (1 H 1 )))ε H (b −1 ε t (ε s (1 H 1 ′ ))) (8) = c 0 b 0 ⊗ 1 H 2 h1 H 2 ′ kε H (c −1 ε t (1 H 1 ))ε H (b −1 ε t (1 H 1 ′ )) (6) = c 0 b 0 ⊗ 1 H 2 h1 H 2 ′ kε H (c −1 1 H 1 )ε H (b −1 1 H 1 ′ ) (8) = (cb) 0 ⊗ 1 H 2 ε H (ε s (1 H 1 )(cb) −1 )hk (6) = (cb) 0 ⊗ 1 H 2 ε H ((cb) −1 ε t (ε s (1 H 1 )))hk (22) = (cb) 0 ⊗ 1 H 2 ε H ((cb) −1 ε t (S H (1 H 1 )))hk (6) = (cb) 0 ⊗ 1 H 2 ε H ((cb) −1 S H (1 H 1 ))hk (25) = (cb) 0 ⊗ (hk) 2 ε H ((cb) −1 ε t ((hk) 1 )) = (cb × hk). Note that the associativity of the algebras C and H guarantee the associativity of C × H. We also have that 1 C×H = 1 C × 1 H . For the next result consider C × H with the product given in Lemma 4.4 and the coproduct given by ∆(c × h) = c 1 × c 2 −1 h 1 ⊗ c 2 0 × h 2 for all c ∈ C and h ∈ H. Lemma 4.5. Under these conditions, the coproduct defined on C × H is multiplicative. Proof. Given b, c ∈ C and h, k ∈ H ∆((c × h)(b × k)) = ∆(cb × hk) = (cb) 1 × (cb) 2 −1 (hk) 1 ⊗ (cb) 2 0 × (hk) 2 = c 1 b 1 × (c 2 b 2 ) −1 h 1 k 1 ⊗ (c 2 b 2 ) 0 × h 2 k 2 = c 1 b 1 × c 2 −1 b 2 −1 h 1 k 1 ⊗ c 2 0 b 2 0 × h 2 k 2 = (c 1 × c 2 −1 h 1 )(b 1 × b 2 −1 k 1 ) ⊗ (c 2 0 × h 2 )(b 2 0 × k 2 ) = [(c 1 × c 2 −1 h 1 ) ⊗ (c 2 0 × h 2 )][(b 1 × b 2 −1 k 1 ) ⊗ (b 2 0 × k 2 )] = ∆(c × h)∆(b × k). Once proved the multiplicativity of the coproduct of C × H and knowing that the coassociativity follows by the fact that the coproduct of the weak smash coproduct is inherited from C ⊗ H, we are able to show the properties of weak bialgebra for ε C×H as we can see in the following lemma. Lemma 4.6. Let C × H be the weak smash coproduct. 
Then, the counity ε C×H satisfies ε((a × k)(c × h)(b × l)) = ε((a × k)(c × h) 1 )ε((c × h) 2 (b × l)) = ε((a × k)(c × h) 2 )ε((c × h) 1 (b × l)) for all a, b, c ∈ C and h, k, l ∈ H. Proof. Let a, b, c ∈ C and h, k, l ∈ H, ε((a × k)(c × h)(b × l)) = ε(acb × khl) = ε C ((acb) 0 )ε H ((acb) −1 khl) = ε C (a 0 c 0 b 0 )ε H (a −1 c −1 b −1 khl) On the one hand, ε((a × k)(c × h) 1 )ε((c × h) 2 (b × l)) = ε C ((ac 1 ) 0 )ε H ((ac 1 ) −1 kc 2 −1 h 1 )ε C ((c 2 0 b) 0 )ε H ((c 2 0 b) −1 h 2 l) = ε C (a 0 c 1 0 )ε H (a −1 c 1 −1 kc 2 −1 h 1 )ε C (c 2 00 b 0 )ε H (c 2 0−1 b −1 h 2 l) (CC3) = ε C (a 0 c 1 0 )ε H (a −1 c 1 −1 kc 2 −1 1 h 1 )ε C (c 2 0 b 0 )ε H (c 2 −1 2 b −1 h 2 l) (CC2) = ε C (a 0 c 0 1 )ε H (a −1 c −1 khb −1 l)ε C (c 0 2 b 0 ) = ε C (a 0 c 0 b 0 )ε H (a −1 c −1 b −1 khl). On the other hand, ε((a × k)(c × h) 2 )ε((c × h) 1 (b × l)) = ε(ac 2 0 × kh 2 )ε(c 1 b × c 2 −1 h 1 l) = ε C ((ac 2 0 ) 0 )ε H ((ac 2 0 ) −1 kh 2 )ε C ((c 1 b) 0 )ε H ((c 1 b) −1 c 2 −1 h 1 l) = ε C (a 0 c 2 00 )ε H (a −1 c 2 0−1 kh 2 )ε C (c 1 0 b 0 )ε H (c 1 −1 b −1 c 2 −1 h 1 l) (CC3) = ε C (a 0 c 2 0 )ε H (a −1 c 2 −1 2 kh 2 )ε C (c 1 0 b 0 )ε H (c 1 −1 b −1 c 2 −1 1 h 1 l) (CC2) = ε C (a 0 c 0 2 )ε H (a −1 khlc −1 b −1 )ε C (c 0 1 b 0 ) = ε C (a 0 c 0 b 0 )ε H (a −1 c −1 b −1 khl). Lemma 4.7. Let C × H be the weak smash coproduct. Then, (1 C×H ⊗ ∆(1 C×H ))(∆(1 C×H ) ⊗ 1 C×H ) = (∆(1 C×H ) ⊗ 1 C×H )(1 C×H ⊗ ∆(1 C×H )) = (I ⊗ ∆)∆(1 C×H ) 4.5 = (∆ ⊗ I)∆(1 C×H ). Proof. 
Indeed, (∆(1 C×H ) ⊗ 1 C×H )(1 C×H ⊗ ∆(1 C×H )) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 H 2 ε H (1 C 1 −1 1 C 2 −1 1 1 H 1 ) ⊗ 1 C 2 00 1 C 1 ′ 0 ⊗ 1 H 4 1 C 2 ′ −1 2 ε H (1 C 2 0−1 1 C 1 ′ −1 1 H 3 1 C 2 ′ −1 1 ) ⊗ 1 C 2 ′ 00 ⊗ 1 H 6 ε H (1 C 2 ′ 0−1 1 H 5 ) = 1 C 1 0 ⊗ 1 C 2 −1 3 1 H 2 ε H (1 C 1 −1 1 C 2 −1 1 )ε H (1 C 2 −1 2 1 H 1 ) ⊗ 1 C 2 00 1 C 1 ′ 0 ⊗ 1 H 4 1 C 2 ′ −1 2 ε H (1 C 2 0−1 1 C 1 ′ −1 1 H 3 1 C 2 ′ −1 1 ) ⊗ 1 C 2 ′ 00 ⊗ 1 H 6 ε H (1 C 2 ′ 0−1 1 H 5 ) (CC3) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 H 2 1 C 2 ′ −1 2 ε H (1 C 2 −1 3 1 C 1 ′ −1 1 C 2 ′ −1 1 ) ⊗ 1 C 2 ′ 0 ⊗ 1 H 4 ε H (1 C 2 ′ −1 3 1 H 3 ) (6) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 H 2 1 C 2 ′ −1 2 ε H (1 C 1 ′ −1 1 C 2 ′ −1 1 ε t (1 C 2 −1 3 )) ⊗ 1 C 2 ′ 0 ⊗ 1 H 3 (13) = 1 C 1 0 ⊗ 1 H 1 ′ 1 C 2 −1 2 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 H 2 1 C 2 ′ −1 2 ε H (1 C 1 ′ −1 1 C 2 ′ −1 1 1 H 2 ′ ) ⊗ 1 C 2 ′ 0 ⊗ 1 H 3 = 1 C 1 0 ⊗ ε s (1 H 1 ′ 1 C 1 ′ −1 1 1 C 2 ′ −1 1 )ε H (1 H 2 ′ 1 C 1 ′ −1 2 1 C 2 ′ −1 2 )1 C 2 −1 2 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 C 2 ′ −1 3 1 H 2 ⊗ 1 C 2 ′ 0 ⊗ 1 H 3 = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (1 H 1 ′ 1 C 1 ′ −1 1 1 C 2 ′ −1 1 )1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 C 2 ′ −1 3 1 H 2 ε H (1 C 1 ′ −1 2 1 C 2 ′ −1 2 1 H 2 ′ ) ⊗ 1 C 2 ′ 0 ⊗ 1 H 3 (8) = 1 C 1 0 ⊗ 1 H 1 ′ 1 C 2 −1 2 ε s (1 C 1 ′ −1 1 1 C 2 ′ −1 1 )1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 C 2 ′ −1 3 1 H 2 ε H (1 C 1 ′ −1 2 1 C 2 ′ −1 2 1 H 2 ′ ) ⊗ 1 C 2 ′ 0 ⊗ 1 H 3 (13) = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (1 C 1 ′ −1 1 1 C 2 ′ −1 1 )1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 C 2 ′ −1 3 1 H 2 ε H (1 C 1 ′ −1 2 1 C 2 ′ −1 2 ε t (1 C 2 −1 3 )) ⊗ 1 C 2 ′ 0 ⊗ 1 H 3 (6) = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (1 C 1 ′ −1 1 1 C 2 ′ −1 1 )1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 0 ⊗ 1 C 2 ′ −1 3 1 H 2 ε H (1 C 1 ′ −1 2 1 C 2 ′ −1 2 1 C 2 −1 3 ) ⊗ 1 C 2 ′ 0 ⊗ 1 H 3 (CC3) = 1 C 1 
0 ⊗ 1 C 2 −1 2 ε s (1 C 1 ′ −1 1 C 2 ′ −1 )1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 1 ′ 00 ⊗ 1 C 2 ′ 0−1 2 1 H 2 ε H (1 C 2 −1 3 1 C 1 ′ 0−1 1 C 2 ′ 0−1 1 ) ⊗ 1 C 2 ′ 00 ⊗ 1 H 3 (CC2) = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (1 C −1 )1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 0 1 0 ⊗ 1 C 0 2 −1 2 1 H 2 ε H (1 C 2 −1 3 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 0 ⊗ 1 H 3 ( * ) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 C −1 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 0 1 0 ⊗ 1 C 0 2 −1 2 1 H 2 ε H (1 C 2 −1 3 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 0 ⊗ 1 H 3 = 1 C 1 0 ⊗ 1 C 2 −1 2 1 C −1 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 0 1 C 0 1 0 ⊗ 1 C 0 2 −1 2 1 H 2 ε H (1 C 0 2 −1 3 1 H 3 )ε H (1 C 2 −1 3 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 0 ⊗ 1 H 4 (CC3) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 C −1 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 00 1 C 0 1 0 ⊗ 1 C 0 2 −1 2 1 H 2 ε H (1 C 2 0−1 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 00 ⊗ 1 H 4 ε H (1 C 0 2 0−1 1 H 3 ) (1) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 H 2 ′ 1 C −1 1 H 1 ε H (1 C 1 −1 1 C 2 −1 1 1 H 1 ′ ) ⊗ 1 C 2 00 1 C 0 1 0 ⊗ 1 C 0 2 −1 2 1 H 2 ε H (1 C 2 0−1 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 00 ⊗ 1 H 4 ε H (1 C 0 2 0−1 1 H 3 ) (14) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 C −1 2 1 H 1 ε H (ε s (1 C −1 1 )1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 00 1 C 0 1 0 ⊗ 1 C 0 2 −1 2 1 H 2 ε H (1 C 2 0−1 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 00 ⊗ 1 H 4 ε H (1 C 0 2 0−1 1 H 3 ) (7) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 C −1 2 1 H 1 ε H (1 C −1 1 1 C 1 −1 1 C 2 −1 1 ) ⊗ 1 C 2 00 1 C 0 1 0 ⊗ 1 C 0 2 −1 2 1 H 2 ε H (1 C 2 0−1 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 00 ⊗ 1 H 4 ε H (1 C 0 2 0−1 1 H 3 ) = 1 C 1 0 ⊗ 1 C 2 −1 2 1 C −1 3 1 H 2 ε H (1 C −1 2 1 H 1 )ε H (1 C 1 −1 1 C 2 −1 1 1 C −1 1 ) ⊗ 1 C 2 00 1 C 0 1 0 ⊗ 1 C 0 2 −1 3 1 H 4 ε H (1 C 0 2 −1 2 1 H 3 )ε H (1 C 2 0−1 1 C 0 1 −1 1 C 0 2 −1 1 ) ⊗ 1 C 0 2 00 ⊗ 1 H 6 ε H (1 C 0 2 0−1 1 H 5 ) = 1 C 1 × 1 C 2 −1 1 C −1 1 H 1 ⊗ 1 C 2 0 1 C 0 1 × 1 C 0 2 −1 1 H 2 ⊗ 1 C 0 2 0 × 1 H 3 (CC2) = 1 C 1 × 1 C 2 −1 1 C 1 ′ −1 1 C 2 ′ −1 1 H 1 ⊗ 1 C 2 0 1 C 1 ′ 0 × 1 C 2 ′ 0−1 1 H 2 ⊗ 1 C 2 ′ 00 
× 1 H 3 = 1 C 1 × 1 C 2 −1 1 C 3 −1 1 H 1 ⊗ 1 C 2 0 × 1 C 3 0−1 1 H 2 ⊗ 1 C 3 00 × 1 H 3 (CC2) = 1 C 1 × 1 C 2 −1 1 H 1 ⊗ 1 C 2 0 1 × 1 C 2 0 2 −1 1 H 2 ⊗ 1 C 2 0 2 0 × 1 H 3 = (I ⊗ ∆)∆(1 C×H ). In ( * ) the property ρ(1) ∈ H_s ⊗ C of Proposition 4.11 of [5] was used. Similarly, it is possible to show that (1 C×H ⊗ ∆(1 C×H ))(∆(1 C×H ) ⊗ 1 C×H ) = (I ⊗ ∆)∆(1 C×H ). Therefore, we obtain the following result.

Proposition 4.8. Let C be a weak bialgebra and H a commutative weak Hopf algebra such that C is an H-comodule bialgebra. Then, C × H is a weak bialgebra.

The next step is to determine whether C × H can be given a weak Hopf algebra structure. For this, it is necessary to impose a natural condition on C, as can be seen in Theorem 4.9.

Theorem 4.9. Let C and H be two weak Hopf algebras such that C is an H-comodule bialgebra and H is commutative. Then, C × H is a weak Hopf algebra.

Proof. By Proposition 4.8 we know that C × H is a weak bialgebra, so it is enough to define a map S_{C×H} on C × H and show that S_{C×H} satisfies the properties of the antipode of a weak Hopf algebra. Define S_{C×H} by S_{C×H}(c × h) = S_C(c_0) × S_H(c_{−1} h), for all c ∈ C and h ∈ H.
(I) (c × h) 1 S((c × h) 2 ) = ε t (c × h), indeed for all c ∈ C and h ∈ H (c × h) 1 S((c × h) 2 ) (CC3) = c 1 S C (c 2 0 ) × c 2 −1 1 h 1 S H (c 2 −1 2 h 2 ) (11) = c 1 0 S C (c 2 0 ) 0 ⊗ 1 H 2 ε H (c 1 −1 S H (c 2 0 ) −1 1 H 1 ε t (c 2 −1 h)) (6) = c 1 0 S C (c 2 0 ) 0 ⊗ ε t (c 1 −1 S H (c 2 0 ) −1 c 2 −1 h) (2) = c 1 0 S C (c 2 0 ) 0 ⊗ ε t (ε t (c 1 −1 1 )c 1 −1 2 )ε t (c 2 −1 S H (c 2 0 ) −1 h) (9) = c 1 0 S C (c 2 0 ) 0 ⊗ ε t (c 1 −1 1 )ε t (c 1 −1 2 )ε t (c 2 −1 S H (c 2 0 ) −1 h) (CC3) = c 1 00 S C (c 2 0 ) 0 ⊗ ε t (c 1 0−1 S H (c 2 0 ) −1 c 1 −1 c 2 −1 h) (CC2) = c 0 1 0 S C (c 0 2 ) 0 ⊗ ε t (c 0 1 −1 S H (c 0 2 ) −1 c −1 h) (6) = c 0 1 0 S C (c 0 2 ) 0 ⊗ 1 H 2 ε H (1 H 1 c 0 1 −1 S H (c 0 2 ) −1 ε t (c −1 h)) (11) = c 0 1 0 S C (c 0 2 ) 0 ⊗ ε t (c −1 h) 2 ε H (c 0 1 −1 S H (c 0 2 ) −1 ε t (c −1 h) 1 ) (CC2) = c 1 0 S C (c 2 0 ) × ε t (c 1 −1 c 2 −1 h) (CC2) = 1 C 0 c 0 1 S C (c 0 2 ) × ε t (1 C −1 c −1 h) (14) = ε C (ε s (1 C 0 1 )c 0 )(1 C 0 2 × ε t (1 C −1 c −1 h)) (7) = ε C (1 C 0 1 c 0 )(1 C 0 2 × ε t (1 C −1 c −1 h)) (CC2) = ε C (1 C 1 0 c 0 )(1 C 2 0 × ε t (1 C 1 −1 1 C 2 −1 c −1 h)) = ε t (c × h). 
(II) S((c × h) 1 )(c × h) 2 = ε s (c × h), indeed for all c ∈ C and h ∈ H S((c × h) 1 )(c × h) 2 (CC2) = S C (c 0 1 )c 0 2 × S H (c −1 h 1 )h 2 = 1 C 1 0 ε C (c 0 1 C 2 ) ⊗ S H (c −1 ) 2 ε s (h) 2 ε H (1 C 1 −1 S H (c −1 ) 1 ε s (h) 1 )(12)= 1 C 1 0 ε C (c 0 1 C 2 ) ⊗ S H (c −1 ) 2 ε s (h)1 H 2 ε H (1 C 1 −1 S H (c −1 ) 1 1 H 1 ) = 1 C 1 0 ε C (c 0 1 C 2 ) ⊗ ε t (S H (c −1 ) 1 )S H (c −1 ) 2 ε s (h)ε t (1 C 1 −1 ) (2) = 1 C 1 0 ε C (c 0 1 C 2 ) ⊗ S H (c −1 )ε s (h)ε t (1 C 1 −1 ) ( * ) = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ S H (c −1 ε t (1 C 2 −1 ))ε s (h)ε t (1 C 1 −1 ) εt=εs = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ S H (c −1 ε s (1 C 2 −1 ))ε s (h)ε t (1 C 1 −1 ) (22) = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ ε t (ε s (1 C 2 −1 ))S H (c −1 )ε s (h)ε t (1 C 1 −1 ) (28) = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ ε t (1 C 2 −1 )S H (c −1 )ε s (h)ε t (1 C 1 −1 ) (CC2) = 1 C 0 1 ε C (c 0 1 C 0 2 ) ⊗ S H (c −1 )ε s (h)ε t (1 C −1 ) (28) = 1 C 0 1 ε C (c 0 1 C 0 2 ) ⊗ S H (c −1 )ε s (h)ε t (ε s (1 C −1 )) (22) = 1 C 0 1 ε C (c 0 1 C 0 2 ) ⊗ S H (c −1 )ε s (h)S H (ε s (1 C −1 )) ( * * ) = 1 C 0 1 ε C (c 0 1 C 0 2 ) ⊗ S H (c −1 1 C −1 )ε s (h) (CC2) = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ S H (c −1 1 C 1 −1 1 C 2 −1 )ε s (h) (23) = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ S H (1 C 1 −1 )ε s (ε t (1 C 2 −1 ))ε s (ε t (c −1 ))ε s (h) (29) = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ S H (1 C 1 −1 )ε s (1 C 2 −1 )ε s (c −1 )ε s (h) (CC3) = 1 C 1 0 ε C (c 0 1 C 2 00 ) ⊗ S H (1 C 1 −1 )S H (1 C 2 −1 )1 C 2 0−1 ε s (c −1 h) (CC2) = 1 C 0 1 ε C (c 0 1 C 0 2 0 ) ⊗ S H (1 C −1 )1 C 0 2 −1 ε s (c −1 h) ( * * ) = 1 C 0 1 ε C (c 0 1 C 0 2 0 ) ⊗ S H (ε s (1 C −1 ))1 C 0 2 −1 ε s (c −1 h) (22) = 1 C 0 1 ε C (c 0 1 C 0 2 0 ) ⊗ ε t (ε s (1 C −1 ))1 C 0 2 −1 ε s (c −1 h) (28) = 1 C 0 1 ε C (c 0 1 C 0 2 0 ) ⊗ ε t (1 C −1 )1 C 0 2 −1 ε s (c −1 h) (CC2) = 1 C 1 0 ε C (c 0 1 C 2 00 ) ⊗ ε t (1 C 1 −1 1 C 2 −1 )1 C 2 0−1 ε s (c −1 h) (CC3) = 1 C 1 0 ε C (c 0 1 C 2 0 ) ⊗ ε t (1 C 1 −1 1 C 2 −1 1 )1 C 2 −1 2 ε s (c −1 h) (4) = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (c −1 h)ε t (1 C 2 
−1 1 )ε t (ε t (1 C 1 −1 ))ε C (c 0 1 C 2 0 ) (9) = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (c −1 h)ε t (1 C 2 −1 1 1 C 1 −1 )ε C (c 0 1 C 2 0 ) = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (c −1 h)1 H 2 ε H (1 C 1 −1 1 C 2 −1 1 1 H 1 )ε C (c 0 1 C 2 0 ) (12) = 1 C 1 0 ⊗ 1 C 2 −1 2 ε s (c −1 h) 2 ε H (1 C 1 −1 1 C 2 −1 1 ε s (c −1 h) 1 )ε C (c 0 1 C 2 0 ) = 1 C 1 × 1 C 2 −1 ε s (c −1 h)ε C (c 0 1 C 2 0 ) (3) = 1 C 1 × 1 C 2 −1 1 ε s (1 C 2 −1 2 )ε s (c −1 h)ε C (c 0 1 C 2 0 ) (CC3) = (1 C 1 × 1 C 2 −1 1 H 1 )ε C (c 0 1 C 2 00 )ε H (c −1 1 C 2 0−1 h1 H 2 ) = ε s (c × h). In ( * ) it was used Proposition 4.27 of [5]. In ( * * ) it was used the property ρ(1) ∈ H s ⊗ C of Proposition 4.11 of [5]. Therefore C × H is a weak Hopf algebra. Dualization In order to show that a partial coaction on a coalgebra can generate a partial action on a coalgebra, and vice versa, we need to assume the additional hypothesis that the weak Hopf algebra H is finite dimensional. Thus, we know that the dual H * of H is also a weak Hopf algebra. Teorema 5.1. Let C be a coalgebra and H be a weak Hopf algebra finite dimensional. Then, the following affirmations are equivalent: (i) C is a left partial H-comodule coalgebra; (ii) C is a right partial H * -module coalgebra. Moreover, to say that C is a left symmetric partial H-comodule coalgebra is equivalent to say that C is a right symmetric partial H * -module coalgebra. Proof. Suppose that C is a left symmetric partial H-comodule coalgebra via ρ : C → H ⊗ C c → c −1 ⊗ c 0 . Then, C is a right symmetric partial H * -module coalgebra via ↼: C ⊗ H * → C c ⊗ f → c ↼ f = f (c −1 )c 0 . Conversely, suppose that C is a right symmetric partial H * -module coalgebra via ↼: C ⊗ H * → C c ⊗ f → c ↼ f. Then, C is a left symmetric partial H-comodule coalgebra via ρ : C → H ⊗ C c → n i=1 (h i ⊗ c ↼ h i * ) where, {h i } n i=1 is a basis of H and {h i * } n i=1 is the dual basis of H * . Let λ be an element in H * . 
We say that C is a (right) partial H-module coalgebra via λ if c ↼ h = cλ(h) defines a partial action of H on the coalgebra C for all h ∈ H and c ∈ C. Moreover, in [7] it was proved that C is a partial H-module coalgebra via λ if and only if, for all h, k ∈ H, (i) λ(1_H) = 1_k; (ii) λ(h)λ(k) = λ(h_1)λ(h_2 k). Another dualization result states that the element λ defining the partial action via λ coincides with the element defining the partial coaction ρ_λ presented in Proposition 3.10, as the following shows.

Proof. It is enough to show that:
(i) ε_{H*}(λ) = 1_k, since ε_{H*}(λ) = λ(1_H) = 1_k;
(ii) (λ ⊗ 1_{H*})∆(λ) = λ ⊗ λ, since for all h, k ∈ H,
(λ ⊗ 1_{H*})∆(λ)(h ⊗ k) = (λλ_1)(h)(λ_2)(k) = λ(h_1)λ_1(h_2)λ_2(k) = λ(h_1)λ(h_2 k) = λ(h)λ(k) = (λ ⊗ λ)(h ⊗ k).
The converse is immediate.
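The two conditions on λ can be checked concretely for a groupoid algebra. The sketch below is our own illustration (the small groupoid Z/2 ⊔ Z/3 and the names are assumptions, not from the original text): it takes λ to be the dual-basis sum over one of the groups, λ(δ_g) = [g ∈ N], the functional f = Σ_{g∈N} p_g of Example 3.14 viewed in (kG)*, and verifies (i) λ(1_H) = 1_k and (ii) λ(h)λ(k) = λ(h_1)λ(h_2 k) on all basis elements.

```python
from itertools import product

# Hypothetical small groupoid G = Z/2 (component 'a') ⊔ Z/3 (component 'b'),
# with N = Z/2 one of its groups.
sizes = {'a': 2, 'b': 3}
G = [(c, i) for c in sizes for i in range(sizes[c])]
G0 = [(c, 0) for c in sizes]          # units; 1_kG = sum of delta_e, e in G0
N = [g for g in G if g[0] == 'a']

def compose(g, k):
    """Partial groupoid product; None when d(g) != r(k)."""
    if g[0] != k[0]:
        return None
    return (g[0], (g[1] + k[1]) % sizes[g[0]])

def lam(g):
    """lambda on the basis of kG (indicator of N), extended by lam(0) = 0."""
    return 1 if (g is not None and g in N) else 0

# (i) lambda(1_H) = 1_k: exactly one unit of G lies in N.
cond_i = sum(lam(e) for e in G0) == 1

# (ii) lambda(h)lambda(k) = lambda(h_1)lambda(h_2 k).  On basis elements
# Delta(delta_g) = delta_g (x) delta_g, so this reads lam(g)lam(k) = lam(g)lam(gk).
cond_ii = all(lam(g) * lam(k) == lam(g) * lam(compose(g, k))
              for g, k in product(G, G))

print(cond_i, cond_ii)
```

Both conditions hold, so λ defines a partial action c ↼ h = cλ(h) of kG on any coalgebra, matching the dual picture of Corollary 5.2.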
We say that C is a (left) H-comodule bialgebra if there exists a linear map ρ : C → H ⊗ C such that ρ defines simultaneously a structure of (left) H-comodule coalgebra and (left) H-comodule algebra in C. Lemma 4 . 4 . 44The product in C ⊗ H induces a product in C × H given by (c × h)(b × k) = (cb × hk), for all c ∈ C and h ∈ H. (III) S((c × h) 1 )(c × h) 2 S((c × h) 3 ) = S(c × h), indeed for all c ∈ C and h ∈ H S((c × h) 1 )(c × h) 2 S((c × h) 3 ) (c × h). Corollary 5.2. C is a right partial H-module coalgebra via c ↼ h = cλ(h) if and only if C is a left partial H * -comodule coalgebra via ρ λ (c) = λ ⊗ c. Therefore, H t and H s are subalgebras of H such that contain 1 H and hk = kh for all h ∈ H t and k ∈ H s .(17)Finally, it is still possible to show that Double categories and quantum groupoids. N Andruskiewitsch, S Natale, math.QA/0308228N. Andruskiewitsch and S. Natale, Double categories and quantum groupoids, math.QA/0308228 (2003). Doi-Hopf modules over weak Hopf algebras. G Böhm, Communications in Algebra. 28G. Böhm, Doi-Hopf modules over weak Hopf algebras. Communications in Algebra (28), 4687 -4698 (2000). On the Double Crossed Product of Weak Hopf Algebras. G Böhm, J Gómes-Torrecillas, Contemporary Mathematics. 585G. Böhm, J. Gómes-Torrecillas, On the Double Crossed Product of Weak Hopf Algebras. Contemporary Mathematics 585, 153-174 (1999). Weak Hopf Algebras I: Integral Theory and C * -Structure. G Böhm, F Nill, K Szlachányi, Journal of Algebra. 2212G. Böhm, F. Nill, K. Szlachányi, Weak Hopf Algebras I: Integral Theory and C * -Structure. Journal of Algebra 221 (2), 385-438 (1999). Modules Over Weak Entwining Structures. S Caenepeel, E De Groot, New trends Hopf Algebra Theory. 267S. Caenepeel, E. De Groot, Modules Over Weak Entwining Structures in "New trends Hopf Algebra Theory", Contemporary Mathematics 267, 31-54 (2000). S Caenepeel, K Janssen, Partial (Co)Actions of hopf algebras and Partial Hopf-Galois Theory. S. Caenepeel, K. 
Janssen, Partial (Co)Actions of hopf algebras and Partial Hopf-Galois Theory, Communications in Algebra (36), 2923-2946 (2008). E Campos, G Fonseca, G Martini, Partial Actions of Weak Hopf Algebras on Coalgebras. E. Campos, G. Fonseca, G. Martini, Partial Actions of Weak Hopf Algebras on Coalgebras, http://arxiv.org/abs/1810.02872. Sant'Ana, Partial actions of weak Hopf algebras: smash products, globalization and Morita theory. F Castro, A Paques, G Quadros, A , Journal of Pure and Applied Algebra. 29F. Castro, A. Paques, G. Quadros, A. Sant'Ana, Partial actions of weak Hopf algebras: smash products, globalization and Morita theory, Journal of Pure and Applied Algebra 29, 5511 -5538 (2015). F Castro, G Quadros, Globalizations for partial (co)actions on coalgebras. F. Castro, G. Quadros, Globalizations for partial (co)actions on coalgebras, http://arxiv.org/abs/1510.01388v1. Associativity of Crossed Products by Partial Actions, Enveloping Actions and Partial Representations. M Dokuchaev, R Exel, Trans. Amer. Math. Soc. 3575M. Dokuchaev, R. Exel, Associativity of Crossed Products by Partial Actions, Enveloping Actions and Partial Representations, Trans. Amer. Math. Soc. 357 (5), 1931-1952 (2005). Circle Actions on C * −Algebras, Partial Automorphisms and Generalized PimsnerVoiculescu Exect Sequences. R Exel, J. Funct. Anal. 1223R. Exel, Circle Actions on C * −Algebras, Partial Automorphisms and Generalized PimsnerVoiculescu Exect Sequences, J. Funct. Anal.122 (3), 361-401 (1994). Partial (co)actions of weak Hopf algebras: globalizations, Galois theory and Morita theory. G Quadros, UFRGS. Tese de doutoradoG. Quadros, Partial (co)actions of weak Hopf algebras: globalizations, Galois theory and Morita theory. Tese de doutorado, UFRGS (2016). S Majid, Foundations of quantum group theory. Cambridge University PressS. Majid, Foundations of quantum group theory, Cambridge University Press (1995). Matched pairs of groups and bismash products of Hopf algebras. 
M Takeuchi, Communications in Algebra. 9M. Takeuchi, Matched pairs of groups and bismash products of Hopf algebras, Communications in Algebra (9), 841-882 (1981). (Fonseca) Instituto Federal Sul-Riograndense, Brazil E-mail address: [email protected] (Fontes) Universidade Federal do Rio Grande, Brazil E-mail address: [email protected] (Martini) Universidade Federal do Rio Grande. Yu, L Wang, Yu, Zhang, Mathematical Notes. 881The Structure Theorem for Weak Module Coalgebras. Brazil E-mail address: [email protected]. Wang, L. Yu. Zhang, The Structure Theorem for Weak Module Coalgebras, Mathematical Notes 88 (1), 3 -17 (2010). (Fonseca) Instituto Federal Sul-Riograndense, Brazil E-mail address: [email protected] (Fontes) Universidade Federal do Rio Grande, Brazil E-mail address: [email protected] (Martini) Universidade Federal do Rio Grande, Brazil E-mail address: [email protected]
[]
[ "Quickest Transient-Change Detection under a Sampling Constraint", "Quickest Transient-Change Detection under a Sampling Constraint" ]
[ "Venkat Chandar ", "Aslan Tchamkerten " ]
[]
[]
Consider a sequence of independent random variables that undergo a transient change in distribution from P0 to P1 at an unknown instant. The sequence is sequentially observed under a sampling constraint. This paper addresses the question of finding the minimum sampling rate needed in order to detect the change "as efficiently" as under full sampling. The problem is cast into a Bayesian setup where the change, assumed to be of fixed known duration n, occurs randomly and uniformly within a time frame of size A_n = 2^{αn} for some known uncertainty parameter α > 0. It is shown that, for any fixed α ∈ (0, D(P1||P0)), as long as the sampling rate is of order ω(1/n) the change can be detected as quickly as under full sampling, in the limit of vanishing false-alarm probability. The delay, in this case, is some linear function of n. Conversely, if α > D(P1||P0) or if the sampling rate is o(1/n), reliable detection is impossible: the false-alarm probability is bounded away from zero or the delay is Θ(2^{αn}). This paper illustrates this result through a recently proposed asynchronous communication framework. Here, the receiver observes mostly pure background noise except for a brief period of time, starting at an unknown instant, when data is transmitted, thereby inducing a local change in distribution at the receiver. For this model, capacity per unit cost (minimum energy to transmit one bit of information) and communication delay were characterized and shown to be unaffected by a sparse sampling at the receiver as long as the sampling rate is a non-zero constant. This paper strengthens this result and shows that it continues to hold even if the sampling rate tends to zero at a rate no faster than ω(1/B), where B denotes the number of transmitted message bits. Conversely, if the sampling rate decreases as o(1/B), reliable communication is impossible.
null
[ "https://arxiv.org/pdf/1501.05930v2.pdf" ]
120,865,204
1501.05930
eaae8e3df1416a1bfe323f547b221fff4d7665d8
Quickest Transient-Change Detection under a Sampling Constraint

27 Jan 2015

Venkat Chandar, Aslan Tchamkerten

Index Terms: Asynchronous communication, bursty communication, capacity per unit cost, energy, change detection, hypothesis testing, sequential analysis, sensor networks, sparse communication, sampling, synchronization, transient change.

Consider a sequence of independent random variables that undergo a transient change in distribution from P0 to P1 at an unknown instant. The sequence is sequentially observed under a sampling constraint. This paper addresses the question of finding the minimum sampling rate needed in order to detect the change "as efficiently" as under full sampling. The problem is cast into a Bayesian setup where the change, assumed to be of fixed known duration n, occurs randomly and uniformly within a time frame of size A_n = 2^{αn} for some known uncertainty parameter α > 0. It is shown that, for any fixed α ∈ (0, D(P1||P0)), as long as the sampling rate is of order ω(1/n) the change can be detected as quickly as under full sampling, in the limit of vanishing false-alarm probability. The delay, in this case, is some linear function of n. Conversely, if α > D(P1||P0) or if the sampling rate is o(1/n), reliable detection is impossible: the false-alarm probability is bounded away from zero or the delay is Θ(2^{αn}). This paper illustrates this result through a recently proposed asynchronous communication framework. Here, the receiver observes mostly pure background noise except for a brief period of time, starting at an unknown instant, when data is transmitted, thereby inducing a local change in distribution at the receiver. For this model, capacity per unit cost (minimum energy to transmit one bit of information) and communication delay were characterized and shown to be unaffected by a sparse sampling at the receiver as long as the sampling rate is a non-zero constant.
This paper strengthens this result and shows that it continues to hold even if the sampling rate tends to zero at a rate no faster than ω(1/B), where B denotes the number of transmitted message bits. Conversely, if the sampling rate decreases as o(1/B), reliable communication is impossible.

I. INTRODUCTION

Bursty and asynchronous communication is investigated when data transmission occurs very infrequently, at random moments. For such a setting [1] characterized information-theoretic limits in terms of transmitter/receiver energy consumption and communication delay for discrete-time systems. Energy consumption at the transmitter is modeled in the usual way by assigning a cost to each channel input. Energy consumption at the receiver is captured by the channel output sampling rate. This is motivated by the fact that in practice one of the receiver's most power-consuming functions is the sampling rate of the analog-to-digital converter.

(This work was supported in part by a grant and in part by a Chair of Excellence, both from the French National Research Agency (ANR, projects BSC and ACE). V. Chandar is with D. E. Shaw and Co., New York, NY 10036, USA. Email: [email protected]. A. Tchamkerten is with the Department of Communications and Electronics, Telecom ParisTech, 75634 Paris Cedex 13, France. Email: [email protected].)

Asynchronism in [1] is caused by the transmitter's source of information, which is assumed to be bursty. Decoding happens on a sequential basis and communication (detection) delay is naturally defined as the elapsed time between the instant when data starts being sent and the instant when it is decoded. In turn, sampling rate is defined as the typical relative number of channel outputs observed until decoding happens. Specifically, sampling rate ρ is said to be achievable if

#{output samples taken until time τ} / τ ≤ ρ

with high probability, where τ denotes the random decoding time.
The main result in [1] states that a sparse output sampling impacts neither capacity per unit cost nor communication delay asymptotically: for any ρ > 0, capacity per unit cost and communication delay under sampling constraint ρ are (asymptotically) the same as for ρ = 1. In fact, a stronger result holds, as we show in this paper. Capacity per unit cost and communication delay are unaffected even if the sampling rate tends to zero as ρ = ω(1/B), where B denotes the number of information bits to be transmitted. Conversely, if the sampling rate is o(1/B), the error probability of any coding scheme is bounded away from zero. At the heart of the achievability result is a generic sequential procedure for detecting a transient change in distribution of fixed known duration over a time series. Before and after the change, observations are supposed to be drawn from a common nominal product distribution, whereas during the change observations are supposed to be distributed according to some "change" product distribution. The detection procedure has the following properties in the limit of vanishing probability of false alarm: 1. it detects changes of minimal duration, 2. it detects changes with minimal delay, 3. it minimizes sampling rate. Property 1 means that no procedure can detect changes of shorter duration, irrespective of its delay and sampling rate. Properties 2 and 3 mean that among all procedures that detect transient changes, the proposed procedure simultaneously minimizes delay and sampling rate.

Related works

This work addresses the problem of communicating asynchronously at minimum energy using a sequential analysis framework. Accordingly, we review related works in communication and sequential analysis.

COMMUNICATION: The basic asynchronous communication setup investigated in this paper was originally proposed in [3] and developed in a series of works [4]-[10].
These works investigate capacity (or capacity per unit cost) under various notions of delay: codeword length, typical delay, expected delay. In all these works, the communication regime of interest, dubbed "strong asynchronism," is such that data transmission is very bursty. Data transmission typically happens on a time horizon which is exponential in the size of the message; the subexponential regime is of limited theoretical interest as it reduces to the synchronous setting [5]. In the exponential asynchronism regime, channel outputs mostly correspond to pure noise. Hence a natural question is whether the receiver needs to constantly be in the "listening" mode and observe (sample) all channel outputs. Surprisingly, full output sampling is not necessary. In [1] it is shown that capacity per unit cost and minimum typical delay, established in [8] under full sampling, are (asymptotically) unaffected by a sparse output sampling as long as the sampling rate ρ is nonzero. For this result to hold, it is important to allow adaptive sampling. In fact, under non-adaptive sampling capacity per unit cost is achievable, but delay is impacted by a multiplicative factor of 1/ρ.

SEQUENTIAL ANALYSIS: Decoding in the above communication setup can be cast into a change-detection sequential analysis framework. Specifically, decoding amounts to detecting and isolating the cause of a transient change in distribution under a sampling constraint. The correspondences between the two problems are the following:

• nominal distribution ↔ pure noise
• change duration ↔ codeword length
• posterior distributions ↔ set of output distributions induced by the codewords

Before data transmission, the receiver observes pure noise generated by some known nominal distribution. Once data transmission starts, the distribution changes according to the sent codeword.
Decoding happens based on a sampling strategy, a stopping rule defined on the sampled process, and an isolation rule which maps the stopped sampled process into one of the possible messages. Unlike classical sequential analysis frameworks, the model we consider includes coding, which translates into the ability to optimize posterior distributions, say, to maximize rate. Rate is specific to communication and ties the number of posterior hypotheses to the detection delay or the change duration. By contrast, in sequential analysis the number of posterior hypotheses is typically fixed, which, in the vanishing error probability regime, would correspond to a zero-rate regime. Detection and isolation for non-transient changes and without sampling constraint was investigated in [12]. The pure detection problem (single posterior distribution, hence no isolation rule) of reacting to a transient change in distribution was investigated in a variety of works. In [13], [14, Chap. 3], for instance, the CUSUM detection procedure, originally proposed to detect non-transient changes, is investigated for detecting transient changes of given length. In [15, Section II.c], a variation of the CUSUM procedure is shown to achieve minimal detection delay in a certain asymptotic regime where the duration of the change is tied to a (vanishing) false-alarm probability constraint. In [16] the authors investigate a setup where the duration of the change is random and where the evolution of the pre-change, in-change, and post-change states is Markovian. Finally, [17] proposed another interesting variation of the CUSUM procedure that operates under a sampling constraint and that is tailored for detecting non-transient changes. This procedure has the salient feature of skipping samples in the event that a change is unlikely to have occurred. Optimality of this procedure in the minimax (non-Bayesian) setting for both transient and non-transient changes was recently established in [18].
Paper organization

Section II is devoted to the transient change-detection problem, and Section III is devoted to the asynchronous communication model and considers the performance metrics proposed in [1]. Note that the main result in Section III relies crucially on the detection procedure proposed in Section II. Section IV is devoted to the proofs.

II. TRANSIENT CHANGE-DETECTION

A. Model

Let P_0 and P_1 be distributions defined over some finite alphabet Y and with finite divergence

D(P_1||P_0) := Σ_y P_1(y) log[P_1(y)/P_0(y)].

Let ν be uniformly distributed over {1, 2, . . . , A_n}, where the integer A_n = 2^{αn} denotes the uncertainty level and α denotes the corresponding uncertainty exponent. Given P_0 and P_1, process {Y_t} is defined as follows. Conditioned on the value of ν, the Y_t's are i.i.d. according to P_0 for 1 ≤ t < ν or ν + n ≤ t ≤ A_n + n − 1, and i.i.d. according to P_1 for ν ≤ t ≤ ν + n − 1. Process {Y_t} is thus i.i.d. P_0 except for a brief period of duration n where it is i.i.d. P_1. A statistician, with the knowledge of n, α, P_0, and P_1, observes {Y_t} sequentially according to a sampling strategy:

Definition 1 (Sampling strategy). A sampling strategy with respect to a stochastic process {Y_t} consists of a collection of random time indices S = {S_1, . . . , S_ℓ} ⊆ {1, . . . , A_n + n − 1}, where S_i < S_j for i < j. Time index S_i is interpreted as the i-th sampling time. Sampling may be adaptive, which means that each sampling time can be a function of past observations. This means that S_1 is an arbitrary value in {1, . . . , A_n + n − 1}, possibly random but independent of Y_1^{A_n+n−1}, and for j ≥ 2

S_j = g_j({Y_{S_i}}_{i<j})

for some (possibly randomized) function g_j : Y^{j−1} → {S_{j−1} + 1, . . . , A_n + n − 1}. The statistician wishes to detect the change in distribution by means of a sampling strategy S and a stopping rule τ relative to the (natural filtration induced by the) sampled process Y_{S_1}, Y_{S_2}, . . . .
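As a concrete illustration of the model above (a P_1-burst of length n inside an i.i.d. P_0 stream of length A_n + n − 1), the sketch below generates the process {Y_t} and computes the divergence D(P_1||P_0) that governs the detection limits. The distributions and parameters are illustrative choices, not values from the paper:

```python
import math
import random

def kl_divergence(p1, p0):
    """D(P1||P0) in bits for finite-alphabet distributions given as dicts."""
    d = 0.0
    for y, q in p1.items():
        if q > 0:
            if p0.get(y, 0.0) == 0.0:
                return float("inf")  # P1 not absolutely continuous w.r.t. P0
            d += q * math.log2(q / p0[y])
    return d

def draw(dist, rng):
    """Draw one symbol from a dict-valued distribution."""
    r, acc = rng.random(), 0.0
    for y, q in dist.items():
        acc += q
        if r <= acc:
            return y
    return y  # guard against floating-point round-off

def transient_change_process(p0, p1, n, alpha, rng):
    """Y_1..Y_{A_n+n-1}: i.i.d. P0 except a P1-burst of length n starting at
    nu, uniform over {1,...,A_n} with A_n = 2^{alpha*n}."""
    a_n = int(2 ** (alpha * n))
    nu = rng.randint(1, a_n)
    ys = [draw(p1 if nu <= t < nu + n else p0, rng)
          for t in range(1, a_n + n)]
    return ys, nu

p0 = {0: 0.9, 1: 0.1}
p1 = {0: 0.4, 1: 0.6}
ys, nu = transient_change_process(p0, p1, n=8, alpha=0.5, rng=random.Random(0))
```

The larger D(P_1||P_0) is, the easier the burst is to tell apart from the nominal stream, which is exactly how the divergence enters the results below.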
Since there are at most A_n + n − 1 sampling times, τ is bounded by A_n + n − 1. A given pair (S, τ), from now on referred to as a "detector," will be evaluated in terms of its probability of false-alarm, its detection delay, and its sampling rate.

(Recall that a (deterministic or randomized) stopping time τ with respect to a sequence of random variables Y_1, Y_2, . . . is a positive, integer-valued random variable such that the event {τ = t}, conditioned on the realization of Y_1, Y_2, . . . , Y_t, is independent of the realization of Y_{t+1}, Y_{t+2}, . . ., for all t ≥ 1.)

Definition 2 (False-alarm probability). For a given detector (S, τ), the probability of false-alarm is defined as

P(τ < ν) = P_0(τ < ν),

where P_0 denotes the P_0-product distribution.

Definition 3 (Detection delay). For a given detector (S, τ) and ε > 0, the delay, denoted by d((S, τ), ε), is defined as the minimum l ≥ 0 such that

P(τ − ν ≤ l − 1) ≥ 1 − ε.

Definition 4 (Sampling rate). For a given detector (S, τ) and ε > 0, the sampling rate, denoted by ρ((S, τ), ε), is defined as the "typical" relative number of samples taken until time τ. Specifically, it is defined as the minimum r ≥ 0 such that

P(|S_τ|/τ ≤ r) ≥ 1 − ε,

where

S_t := {S ∈ S : S ≤ t}   (1)

denotes the set of sampling times up to time t.

Definition 5 (Achievable sampling rate). Fix α ≥ 0 and fix a sequence of non-increasing values {ρ_n} with 0 ≤ ρ_n ≤ 1. Sampling rates {ρ_n} are said to be achievable at uncertainty exponent α if there exists a sequence of detectors {(S_n, τ_n)} such that for all n large enough

1) (S_n, τ_n) operates under uncertainty level A_n = 2^{αn},
2) the false-alarm probability is at most ε_n,
3) the sampling rate satisfies ρ((S_n, τ_n), ε_n) ≤ ρ_n,
4) the delay satisfies (1/n) log(d((S_n, τ_n), ε_n)) ≤ ε_n

for some sequence of nonnegative numbers {ε_n} such that ε_n → 0 as n → ∞. Two comments are in order.
First note that samples taken after time τ play no role in our performance metrics. Hence, from now on and without loss of generality, we assume that the last sample is taken at time τ, i.e., that the sampled process is truncated at time τ. The truncated sampled process is thus given by the collection of sampling times S_τ (see (1)). In particular, we have

|S_τ| ≥ |S_t| for all 1 ≤ t ≤ A_n + n − 1.   (2)

The second comment concerns the delay constraint 4). Without a delay constraint the problem is obviously trivial; by considering the trivial stopping time τ = A_n we achieve zero probability of false-alarm and sampling rate 1/A_n. The delay constraint intends to capture the fact that the detector locates the change very accurately at the scale of the uncertainty level. However, we will see that this constraint is in fact not very restrictive; whenever detection is possible with subexponential delay, detection is also possible with a linear delay.

Notational convention

We shall use d_n and ρ_n instead of d((S_n, τ_n), ε_n) and ρ((S_n, τ_n), ε_n), respectively, leaving out any explicit reference to (S_n, τ_n) and to the sequence of nonnegative numbers {ε_n}, which we assume satisfies ε_n → 0.

B. Results

Define

n*(α) := nα / D(P_1||P_0) = Θ(n).   (3)

Theorem 1 (Detection, full sampling). Under full sampling (ρ_n = 1):

1) the supremum of the set of achievable uncertainty exponents is D(P_1||P_0);
2) any detector that achieves uncertainty exponent α ∈ (0, D(P_1||P_0)) has a delay that satisfies lim inf_{n→∞} d_n/n*(α) ≥ 1;
3) any uncertainty exponent α ∈ (0, D(P_1||P_0)) is achievable with delay satisfying lim sup_{n→∞} d_n/n*(α) ≤ 1.

Hence, by Claim 1) of Theorem 1, assuming A_n ≫ 1, the shortest detectable change is of size

n_min(A_n) = (log A_n / D(P_1||P_0)) (1 ± o(1)).   (4)
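To give (4) numerical content, here is a small sketch evaluating the shortest detectable change log A_n / D(P_1||P_0), dropping the (1 ± o(1)) factor. The Bernoulli choices for P_0 and P_1 are illustrative, not from the paper:

```python
import math

def kl_bernoulli_bits(p1, p0):
    """D(Bern(p1)||Bern(p0)) in bits."""
    return (p1 * math.log2(p1 / p0)
            + (1 - p1) * math.log2((1 - p1) / (1 - p0)))

def n_min(a_n, p1, p0):
    """Shortest detectable change, Eq. (4), dropping the (1 +/- o(1)) factor."""
    return math.log2(a_n) / kl_bernoulli_bits(p1, p0)

# Uncertainty level A_n = 2^30, noise Bern(0.1), change Bern(0.5):
print(n_min(2 ** 30, 0.5, 0.1))   # roughly 41 observations suffice
```

Doubling the uncertainty exponent doubles n_min, while a more distinguishable change distribution (larger divergence) shortens it proportionally.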
In this regime, change duration and minimum detection delay are essentially the same by Claims 2)-3) and (3), i.e.,

n*(α = (log A_n)/n_min(A_n)) = n_min(A_n)(1 ± o(1)),

whereas in general the minimum detection delay could be smaller than the change duration. The next theorem says that the minimum sampling rate needed to achieve the same detection delay as under full sampling decreases essentially as 1/n:

Theorem 2 (Detection, sparse sampling). Fix α ∈ (0, D(P_1||P_0)). Any sampling rate ρ_n = ω(1/n) is achievable with delay satisfying lim sup_{n→∞} d_n/n*(α) ≤ 1. Conversely, if ρ_n = o(1/n), the detector samples only from distribution P_0 (i.e., it completely misses the change) with probability bounded away from zero, regardless of the delay. Moreover, d_n = Θ(A_n) = Θ(2^{αn}) for any detector that achieves vanishing false-alarm probability.

III. ASYNCHRONOUS COMMUNICATION

A. Model

We review here the asynchronous communication setup of [1]. The mathematical differences between this model and the previously developed pure detection model are highlighted at the end of this section. Consider discrete-time communication over a discrete memoryless channel characterized by its finite input and output alphabets X ∪ {⋆} and Y, respectively, and transition probability matrix Q(y|x), for all y ∈ Y and x ∈ X ∪ {⋆}. Here ⋆ denotes the "idle" symbol and X represents the set of input symbols that can be used for codebook design. Without loss of generality, we assume that for all y ∈ Y there is some x ∈ X ∪ {⋆} for which Q(y|x) > 0. Given B ≥ 0 information bits to be transmitted, a codebook C consists of M = 2^B codewords of length n ≥ 1 composed of symbols from X. A randomly and uniformly chosen message m is available at the transmitter at a random time ν, independent of m, and uniformly distributed over {1, . . .
, A_B}, where the integer A_B = 2^{βB} characterizes the asynchronism level between the transmitter and the receiver, and where the constant β ≥ 0 denotes the timing uncertainty per information bit (see Fig. 1). While ν is unknown to the receiver, A_B is known by both the transmitter and the receiver. We consider one-shot communication, i.e., only one message arrives over the period {1, 2, . . . , A_B}. If A_B = 1, the channel is said to be synchronous. Given ν and m, the transmitter chooses a time σ(ν, m) to start sending codeword c^n(m) ∈ C assigned to message m. Transmission cannot start before the message arrives or after the end of the uncertainty window, hence σ(ν, m) must satisfy ν ≤ σ(ν, m) ≤ A_B almost surely. In the rest of the paper, we suppress the arguments ν and m of σ when these arguments are clear from context. Before and after the codeword transmission, i.e., before time σ and after time σ + n − 1, the receiver observes "pure noise." Specifically, conditioned on the event {ν = t}, t ∈ {1, . . . , A_B}, and on the message to be conveyed m, the receiver observes independent channel outputs Y_1, Y_2, . . . , Y_{A_B+n−1} distributed as follows. For 1 ≤ i ≤ σ(t, m) − 1 or σ(t, m) + n ≤ i ≤ A_B + n − 1, the Y_i's are "pure noise" symbols, i.e.,

Y_i ∼ Q(·|⋆).

For σ ≤ i ≤ σ + n − 1,

Y_i ∼ Q(·|c_{i−σ+1}(m)),

where c_i(m) denotes the i-th symbol of the codeword c^n(m). The receiver in a synchronous setting operates according to a decoding function only, which is a map from the channel outputs to the message set. In the present context of asynchronous communication with a sampling constraint, decoding involves a three-level procedure:

• a sampling strategy (in the sense of Definition 1) defined on the channel output process,
• a stopping (decoding) time defined on the sampled process,

Fig. 1.
Time representation of what is sent (upper arrow) and what is received (lower arrow). The "⋆" represents the "idle" symbol. Message m arrives at time ν and starts being sent at time σ. The receiver samples at the (random) times S_1, S_2, . . . and decodes at time τ based on past samples.

• a decoding function defined on the stopped sampled process.

Once the sampling strategy is fixed, the receiver decodes by means of a sequential test (τ, φ_τ), where τ denotes a stopping time with respect to the sampled sequence Y_{S_1}, Y_{S_2}, . . . and where φ_τ denotes the decoding function

φ_τ : Y^{|O_τ|} → {1, 2, . . . , M}, O_τ ↦ φ_τ(O_τ),

with O_t := {Y_{S_i} : S_i ≤ t} denoting the set of observations until time t. A code (C, S, (τ, φ_τ)) is defined as a codebook, a receiver sampling strategy, and a decoder (decision time and decoding function). Whenever clear from context, we often refer to a code using the codebook symbol C only, leaving out an explicit reference to the sampling strategy and to the decoder.

Definition 6 (Error probability). The maximum (over messages) decoding error probability of a code C is defined as

max_m P_m(E_m),   (5)

where

P_m(E_m) := (1/A_B) Σ_{t=1}^{A_B} P_{m,t}(E_m),

where the subscripts "m, t" denote conditioning on the event that message m arrives at time ν = t, and where E_m denotes the error event that the decoded message does not correspond to m, i.e.,

E_m := {φ_τ(O_τ) ≠ m}.   (6)

Definition 7 (Cost of a code). The (maximum) cost of a code C with respect to a cost function k : X → [0, ∞) is defined as

K(C) := max_m Σ_{i=1}^{n} k(c_i(m)).

Assumption: we make the assumption that ⋆ has zero cost and that all other symbols have strictly positive costs. This means that the transmitter can stay idle at no cost only if ⋆ ∈ X. When ⋆ ∉ X, then k(x) > 0 for any x ∈ X.
The other cases, investigated in [8] under full sampling, are either trivial (when X contains two or more zero-cost symbols) or arguably unnatural (when X contains a zero-cost symbol that differs from ⋆, or when ⋆ ∈ X and X contains only nonzero-cost symbols) and are omitted in this paper.

Definition 8 (Decoding delay). Given ε > 0, the (maximum) delay of a code C, denoted by d(C, ε), is defined as the minimum integer l such that

min_m P_m(τ − ν ≤ l − 1) ≥ 1 − ε.

Definition 9 (Sampling rate of a code). Given ε > 0, the sampling rate of a code C, denoted by ρ(C, ε), is defined as the minimum r ≥ 0 such that

min_m P_m(|S_τ|/τ ≤ r) ≥ 1 − ε,

where S_t is defined in (1). We now define capacity per unit cost under the constraint that the receiver has access to a limited number of channel outputs:

Definition 10 (Asynchronous capacity per unit cost under sampling constraint). Fix β ≥ 0 and fix a sequence of non-increasing values {ρ_B} with 0 ≤ ρ_B ≤ 1. R is an achievable rate per unit cost at timing uncertainty per information bit β and sampling rates {ρ_B} if there exists a sequence of codes {C_B} such that for all B large enough

1) C_B operates at timing uncertainty per information bit β;
2) the maximum error probability is at most ε_B;
3) the rate per unit cost B/K(C_B) is at least R − ε_B;
4) the sampling rate satisfies ρ(C_B, ε_B) ≤ ρ_B;
5) the delay satisfies (1/B) log(d(C_B, ε_B)) ≤ ε_B

for some sequence of nonnegative numbers {ε_B} such that ε_B → 0 as B → ∞. Capacity per unit cost, denoted by C(β, {ρ_B}), is defined as the supremum of achievable rates per unit cost. Capacity per unit cost under full sampling, i.e., C(β, {ρ_B = 1}), is simply denoted by C(β). Let us emphasize the mathematical differences between the model considered in this section and the pure detection problem developed in Section II.
While the nominal distribution P_0 corresponds to the pure noise distribution Q_⋆, the posterior distribution here is no longer unique, as it depends on the sent message. Furthermore, the goal of the receiver is to produce a reliable message estimate with a short detection delay. Because of this, the receiver uses an isolation rule in addition to a stopping rule. Finally, notice that here the instant of the change may be tied to the change distribution, since the instant of the change σ = σ(ν, m) may be chosen as a function of the message to be conveyed.

Notational convention

Similarly to the convention for d_n and ρ_n, we use d_B and ρ_B instead of d(C_B, ε_B) and ρ(C_B, ε_B), respectively, leaving out any explicit reference to C_B and to the sequence of nonnegative numbers {ε_B}, which we assume satisfies ε_B → 0. The results in the next section adopt a less neutral and more "communication" type of notation and express key quantities such as entropy, mutual information, and divergence using the standard random variable convention. For instance, D(Y_1||Y_2) shall refer to the divergence between the distributions of random variables Y_1 and Y_2, respectively.

B. Results

Capacity per unit cost under full sampling is given by the following theorem:

Theorem 3 (Capacity, full sampling, Theorem 1 [8]). For any β ≥ 0,

C(β) = max_X min{ I(X;Y)/E[k(X)] , (I(X;Y) + D(Y||Y_⋆)) / (E[k(X)](1 + β)) },   (7)

where max_X denotes maximization with respect to the channel input distribution P_X, where (X, Y) ∼ P_X(·)Q(·|·), where Y_⋆ denotes the random output of the channel when the idle symbol ⋆ is transmitted (i.e., Y_⋆ ∼ Q(·|⋆)), where I(X;Y) denotes the mutual information between X and Y, and where D(Y||Y_⋆) denotes the divergence between the distributions of Y and Y_⋆. Theorem 3 characterizes capacity per unit cost under full output sampling and over codes whose delay grows subexponentially with B.
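For intuition, (7) can be evaluated numerically. Everything below (the hypothetical two-input binary-output channel, the costs, and the brute-force grid over P_X) is an illustrative assumption, not an example from the paper:

```python
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def kl(p, q):
    return sum(a * math.log2(a / b) for a, b in zip(p, q) if a > 0)

def capacity_per_unit_cost(qa, qb, qstar, ka, kb, beta, grid=2000):
    """Brute-force evaluation of Eq. (7) for X = {a, b}: maximize over
    P_X(a) = p the min of I(X;Y)/E[k(X)] and
    (I(X;Y) + D(Y||Y_star)) / (E[k(X)] (1 + beta))."""
    best = 0.0
    for i in range(1, grid):
        p = i / grid
        py = [p * ya + (1 - p) * yb for ya, yb in zip(qa, qb)]
        mi = entropy(py) - p * entropy(qa) - (1 - p) * entropy(qb)  # I(X;Y)
        cost = p * ka + (1 - p) * kb                                # E[k(X)]
        best = max(best, min(mi / cost,
                             (mi + kl(py, qstar)) / (cost * (1 + beta))))
    return best

# Hypothetical channel: Q(.|a) = [0.1, 0.9], Q(.|b) = [0.7, 0.3],
# idle output Q(.|*) = [0.9, 0.1]; costs k(a) = 1, k(b) = 0.5.
qa, qb, qstar = [0.1, 0.9], [0.7, 0.3], [0.9, 0.1]
c_sync_like = capacity_per_unit_cost(qa, qb, qstar, 1.0, 0.5, beta=0.0)
c_async = capacity_per_unit_cost(qa, qb, qstar, 1.0, 0.5, beta=2.0)
```

As β grows, the second term in the min shrinks and eventually dominates, so the value decreases, reflecting the extra cost of resolving timing uncertainty.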
As it turns out, the same capacity per unit cost can be achieved with linear delay and sparse output sampling. Define

n*_B(β, R) := B / (R · max{E[k(X)] : X ∈ P(R)}) = Θ(B),   (8)

where

P(R) := { X : min{ I(X;Y)/E[k(X)] , (I(X;Y) + D(Y||Y_⋆)) / (E[k(X)](1 + β)) } ≥ R }.   (9)

The quantity n*_B(β, R) plays the same role as n*(α) in Section II and quantifies the minimum detection delay as a function of the asynchronism level and rate per unit cost, under full sampling:

Theorem 4 (Capacity, minimum delay, constant sampling rate, Theorem 3 [1]). Fix β ≥ 0 and R ∈ (0, C(β)]. For any codes {C_B} that achieve rate per unit cost R at timing uncertainty β and sampling rate ρ_B = 1, we have

lim inf_{B→∞} d_B / n*_B(β, R) ≥ 1.

Furthermore, for any ρ ∈ (0, 1], there exist codes {C_B} that achieve rate R at timing uncertainty β and sampling rate ρ_B = ρ and such that

lim sup_{B→∞} d_B / n*_B(β, R) ≤ 1.

The first part of Theorem 4 says that under full sampling the minimum delay achieved by rate-R codes, R ∈ (0, C(β)], is n*_B(β, R). The second part of the theorem says that this minimum delay can be achieved even if the receiver samples only a constant fraction ρ > 0 of the channel outputs. The natural question is then: "What is the minimum sampling rate of codes that achieve rate R and minimum delay n*_B(β, R)?" Our main result says that this minimum sampling rate essentially decreases as 1/B:

Theorem 5 (Capacity, minimum delay, minimum sampling rate). Consider a sequence of codes {C_B} that operate under timing uncertainty per information bit β > 0. If

ρ_B d_B = o(1),   (10)

then the error probability is bounded away from zero. Conversely, any rate R ∈ (0, C(β)] is achievable with delay

d_B ≤ n*_B(β, R)(1 + o(1))

as long as the sampling rate satisfies ρ_B = ω(1/B). If R > 0, the minimum delay is of order O(B) by Theorem 4 and (8), and to achieve this minimum delay it is thus necessary that ρ_B = Ω(1/B).

IV.
PROOFS

Typicality convention

A length-q ≥ 1 sequence v^q over V^q is said to be typical with respect to some distribution P over V if

||P̂_{v^q} − P|| ≤ q^{−1/3},

where P̂_{v^q} denotes the empirical distribution (or type) of v^q. Typical sets have large probability in the sense that

P^q(||P̂_{V^q} − P|| ≤ q^{−1/3}) → 1 (q → ∞),   (11)

as can be verified from Chebyshev's inequality; in the above expression, P^q denotes the q-fold product distribution of P. Moreover, for any distribution P̃ over V we have

P^q(||P̂_{V^q} − P̃|| ≤ q^{−1/3}) ≤ 2^{−q(D(P̃||P)−o(1))}.   (12)

About rounding

Throughout computations we ignore issues related to the rounding of non-integer quantities, as they play no role asymptotically.

A. Proof of Theorem 1

The proof of Theorem 1 is essentially a corollary of [4, Theorem]. We sketch the main arguments.

Fig. 2. Parsing of the entire sequence of size A_n + n − 1 into r_n blocks of length d_n, one of which is generated by P_1, while all the others are generated by P_0.

1): To establish achievability of D(P_1||P_0), one uses the same sequential typicality detection procedure as in the achievability part of [4, Theorem]. For the converse argument, we use essentially the same arguments as for the converse of [4, Theorem]. For this latter setting, achieving α means that we can drive the probability of the event {τ ≠ ν + n − 1} to zero. Although this performance metric differs from ours (vanishing probability of false-alarm and subexponential delay), a closer look at the converse argument of [4, Theorem] reveals that if α > D(P_1||P_0) there are exponentially many sequences of length n that are "typical" with respect to the posterior distribution. This, in turn, implies that either the probability of false-alarm is bounded away from zero or the delay is exponential.

2): Consider stopping times {τ_n} that achieve delay {d_n} and vanishing false-alarm probability (recall the notational conventions for d_n at the end of Section II-A).
We define the "effective process" {Ỹ_i} as the process whose change has duration min{d_n, n} (instead of n).

Effective output process: The effective process is defined as

Ỹ_i = Y_i for 1 ≤ i ≤ ν + min{d_n, n} − 1,

and {Ỹ_i : ν + min{d_n, n} ≤ i ≤ A_n + n − 1} is an i.i.d. P_0 process. Hence, the effective process differs from the true process over the period {1, 2, . . . , τ} only when {τ ≥ ν + d_n} with d_n < n.

Genie-aided statistician: A genie-aided statistician observes the entire effective process (of duration A_n + n − 1) and is informed that the change occurred over one of

r_n := (A_n + n − 1 − (ν mod d_n)) / d_n   (13)

consecutive (disjoint) blocks of duration d_n, as shown in Fig. 2. The genie-aided statistician produces a time interval of size d_n which corresponds to an estimate of the change in distribution and is declared to be correct only if this interval corresponds to the change in distribution. Observe that since τ achieves false-alarm probability ε_n and delay d_n on the true process {Y_i}, the genie-aided statistician achieves error probability at most 2ε_n. The extra ε_n comes from the fact that τ stops after time ν + d_n − 1 (on {Y_i}) with probability at most ε_n. Therefore, with probability at most ε_n the genie-aided statistician observes a process that may differ from the true process. By using the same arguments as for the converse of [4, Theorem], but on process {Ỹ_i} parsed into consecutive slots of size d_n, one shows that if

lim inf_{n→∞} d_n / n*(α) < 1,

then the error probability of the genie-aided decoder tends to one.

3): To establish achievability, apply the same sequential typicality test as in the achievability part of [4, Theorem].

B. Proof of Theorem 2: Converse

Consider a sequence of detectors {(S_n, τ_n)} that achieves, for some ε_n → 0, sampling rate {ρ_n}, communication delay d_n, and error probability ε_n (recall the notational conventions for d_n and ρ_n at the end of Section II-A).
We now show that if

ρ_n = o(1/n),   (14)

the detectors do not see even one P_1-generated sample with probability asymptotically bounded away from zero. This, in turn, implies that the delay is exponential (whenever the false-alarm probability vanishes). We lower bound the probability of sampling only from the nominal distribution P_0 as

P({ν, ν + 1, . . . , ν + n − 1} ∩ S_τ = ∅)
  (a) ≥ P({ν, ν + 1, . . . , (ν + n − 1) ∧ k} ∩ S_τ = ∅)
  (b) = Σ_t Σ_{s_t} Σ_{j ∈ J_{k,s_t}} P(τ = t, S_τ = s_t, ν = j)
  (c) = Σ_t P_0(τ = t) Σ_{s_t} P_0(S_t = s_t) Σ_{j ∈ J_{k,s_t}} P(ν = j)
  ≥ Σ_t P(τ = t) Σ_{s_t : |s_t| ≤ k/2n} P_0(S_τ = s_t) Σ_{j ∈ J_{k,s_t}} P(ν = j)
  ≥ Σ_t P(τ = t) Σ_{s_t : |s_t| ≤ k/2n} P_0(S_τ = s_t) · (k − |s_t|n)/A_n
  ≥ Σ_t P(τ = t) Σ_{s_t : |s_t| ≤ k/2n} P_0(S_τ = s_t) · k/(2A_n)
  = (k/(2A_n)) P_0(|S_k| ≤ k/2n),   (15)

where we defined the set of indices J_{k,s_t} as

J_{k,s_t} := {j : {j, j + 1, . . . , (j + n − 1) ∧ k} ∩ s_t = ∅}.

Inequality (a) holds for any k ≥ 1. For (b), s_t ⊆ {1, 2, . . . , A_n + n − 1} ranges over all sampling strategies that have non-zero probability when conditioned on τ = t. Equality (c) holds by noting that the event {τ = t} is a function of {Y_i, i ∈ s_t} and that, on {ν = j} with j ∈ J_{k,s_t}, no sampling time in s_t falls within the (truncated) change window. Hence samples in s_t are all distributed according to the nominal distribution P_0 (P_0-product distribution). By assumption on the sampling rate we have

1 − ε_n ≤ P(|S_τ| ≤ τρ_n),   (16)

which implies

1 − ε_n ≤ P(|S_τ| ≤ (1 + o(1))A_n ρ_n)   (17)

for any ε > 0 and n large enough, since τ ≤ A_n + n − 1. Now, for any fixed 1 ≤ k ≤ A_n,

P(|S_τ| ≤ (1 + o(1))A_n ρ_n)
  ≤ P(|S_τ| ≤ (1 + o(1))A_n ρ_n | ν > k) P(ν > k) + P(ν ≤ k)
  ≤ P(|S_k| ≤ (1 + o(1))A_n ρ_n | ν > k) P(ν > k) + P(ν ≤ k)
  = P_0(|S_k| ≤ (1 + o(1))A_n ρ_n)(A_n − k)/A_n + k/A_n,   (18)

where for the second inequality we used the fact that |S_τ| ≥ |S_t| for any 1 ≤ t ≤ A_n + n − 1 (see (2)). Now let k = k_n = εA_n, ε ∈ (0, 1/2).
From (17) and (18) we get

P_0(|S_k| ≤ (1 + o(1))A_n ρ_n) ≥ (1 − 2ε)/(1 − ε).   (19)

From (14) and the definition of k_n we deduce that k_n/2n ≥ A_n ρ_n for n large enough. Therefore, from (19),

P_0(|S_k| ≤ k_n/2n) ≥ (1 − 2ε)/(1 − ε)

for n large enough. Hence, from (15),

P({ν, ν + 1, . . . , ν + n − 1} ∩ S_τ = ∅) ≥ ((1 − 2ε)/(1 − ε)) · (ε/2) := δ(ε),   (20)

which is strictly positive for any ε ∈ (0, 1/2). Therefore, with probability bounded away from zero the detector will sample only from the nominal distribution. This, as we now show, implies that the delay is exponential. On the one hand, assuming a vanishing false-alarm probability, we have for any constant η ∈ [0, 1)

o(1) ≥ P(τ < ν) ≥ P(τ < ηA_n | ν ≥ ηA_n)(1 − η) = P_0(τ < ηA_n)(1 − η).

This implies P_0(τ < ηA_n) ≤ o(1), and, therefore,

P(τ ≥ ηA_n) ≥ P(τ ≥ ηA_n | ν > ηA_n)(1 − η) = P_0(τ ≥ ηA_n)(1 − η) = 1 − η − o(1).   (21)

Now, define the events

A_1: the detector stops at a time ≥ ηA_n,
A_2: {|S_τ| ≤ τρ_n},
A_3: all samples taken up to time τ are distributed according to P_0,

and let A := A_1 ∩ A_2 ∩ A_3. From (20), (21), and (16), for any ε ∈ (0, 1/2) one can pick η ∈ [0, 1) small enough such that

lim inf_{n→∞} P(A) > 0.

We now argue that when event A happens, the detector misses the change, which might have occurred, say, before time ηA_n/2, thereby implying a delay Θ(A_n), since τ ≥ ηA_n on A. When event A happens, the detector takes o(A_n/n) samples (this follows from event A_2, since by assumption ρ_n = o(1/n)). Therefore, within {1, 2, . . . , ηA_n/2} there are at least ηA_n/2 − o(A_n) time intervals of length n that are unsampled. Each of these corresponds to a possible change. Therefore, when event A happens, with probability at least η/2 − o(1) the change happens before time ηA_n/2 whereas τ ≥ ηA_n. This implies that the delay is exponential in n, since the probability of A is asymptotically bounded away from zero.

C.
Proof of Theorem 2: Achievability

We describe a detection procedure that asymptotically achieves minimum delay n*(α) and any sampling rate that is ω(1/n) whenever α ∈ (0, D(P_1||P_0)). Fix α ∈ (0, D(P_1||P_0)) and pick ε > 0 small enough so that

n*(α)(1 + 2ε) ≤ n.   (22)

Suppose we want to achieve some sampling rate ρ_n = f(n)/n, where f(n) = ω(1) is some arbitrary increasing function (upper bounded by n without loss of generality). Define

∆̄(n) := n/f(n)^{1/3},
s-instants := {t = j∆̄(n), j ∈ ℕ*},

and recursively define

∆_0(n) := f(n)^{1/3},
∆_i(n) := min{2^{c∆_{i−1}(n)}, n*(α)(1 + ε)} for i ∈ {1, 2, . . . , ℓ},

where ℓ denotes the smallest integer i such that ∆_i(n) = n*(α)(1 + ε). The constant c in the definition of ∆_i(n) is any fixed value such that 0 < c < D(P_1||P_0). The detector starts sampling in phases at the first s-instant (i.e., at time t = ∆̄(n)) as follows:

1. Preamble detection (phase zero): Take ∆_0(n) consecutive samples and check if they are typical with respect to P_1. If the test turns negative, meaning that the ∆_0(n) samples are not typical, skip samples until the next s-instant and repeat the procedure, i.e., sample and test ∆_0(n) observations. If the test turns positive, move to the confirmation phases.

2. Preamble confirmations (variable duration, ℓ − 1 phases at most): Take another ∆_1(n) consecutive samples and check if they are typical with respect to P_1. If the test turns negative, skip samples until the next s-instant and repeat phase zero (and test ∆_0(n) samples). If the test turns positive, perform a second confirmation phase with ∆_1(n) replaced by ∆_2(n), and so forth. (Each confirmation phase is performed on a new set of samples.) If ℓ − 1 consecutive confirmation phases (with respect to the same s-instant) turn positive, the receiver moves to the full block sampling phase.

3. Full block sampling (ℓ-th phase): Take another ∆_ℓ(n) = n*(α)(1 + ε) samples and check if they are typical with respect to P_1.
If they are typical, stop; otherwise skip samples until the next s-instant and repeat phase zero. If by time A_n + n − 1 no sequence is found to be typical, stop. For the false-alarm probability we have

P(τ < ν) ≤ 2^{αn} · 2^{−n*(α)(1+ε)(D(P_1||P_0)−o(1))} = 2^{−nαΘ(ε)},   (23)

because whenever the detector stops, the last n*(α)(1 + ε) samples are necessarily typical with respect to P_1. Therefore the inequality in (23) follows from (12) and a union bound over time indices. The equality in (23) holds by definition of n*(α) (see (3)). For the delay we get

P(τ ≤ ν + (1 + 2ε)n*(α)) = 1 − o(1).   (24)

To see this, note that

∆̄(n) + Σ_{i=0}^{ℓ} ∆_i(n) ≤ (1 + 2ε)n*(α)

for n large enough and that, by (12), the series of ℓ + 1 hypothesis tests will turn positive with probability 1 − o(1) when samples are distributed according to P_1. Since ε can be made arbitrarily small, from (23) and (24) we deduce that the detector achieves minimum delay (see Theorem 1, Claim 2)). To show that the above detection procedure achieves sampling rate ρ_n = f(n)/n, we need to establish that

P(|S_τ|/τ ≥ ρ_n) → 0 as n → ∞.   (25)

To prove this, we first compute the sampling rate of the detector when entirely run over an i.i.d. P_0 sequence. As should be intuitively clear, this will essentially give us the desired result, since the duration n of the change is negligible with respect to A_n and since ν is uniformly distributed over {1, 2, . . . , A_n}. We start by computing the expected number of samples N taken by the detector at any given s-instant when the detector is started at that specific s-instant and when the observations are all i.i.d. P_0. Obviously, by stationarity this expectation does not depend on the s-instant. We have

E_0 N = ∆_0(n) + Σ_{i=0}^{ℓ−1} p_i · ∆_{i+1}(n),   (26)

where p_i denotes the probability that the i-th confirmation phase turns positive given that the detector is in the i-th confirmation phase.
From (12), p_i ≤ 2^{−∆_i(n)(D(P_1||P_0)−o(1))}; hence, using the definition of ∆_i(n), we deduce that

E_0 N = ∆_0(n)(1 + o(1)). (27)

Therefore, the expected number of samples taken by the detector up to any given time t under P_0 can be upper bounded as

E_0 |S_t| ≤ (t/∆̄(n)) · ∆_0(n)(1 + o(1)) = t · (f(n)^{2/3}/n)(1 + o(1)). (28)

This, as we now show, implies that the detector has the desired sampling rate. We have

P(|S_τ|/τ ≥ ρ_n) ≤ P(|S_τ|/τ ≥ ρ_n, ν ≤ τ ≤ ν + (1+2ε)n*(α)) + 1 − P(ν ≤ τ ≤ ν + (1+2ε)n*(α))
≤ P(|S_τ|/τ ≥ ρ_n, ν ≤ τ ≤ ν + 2n) + 1 − P(ν ≤ τ ≤ ν + (1+2ε)n*(α)), (29)

where the second inequality holds for ε small enough by the definition of n*(α). The fact that

1 − P(ν ≤ τ ≤ ν + (1+2ε)n*(α)) = o(1) (30)

follows from (23) and (24). For the first term on the right-hand side of the second inequality in (29) we have

P(|S_τ|/τ ≥ ρ_n, ν ≤ τ ≤ ν + 2n) ≤ P(|S_{ν+2n}| ≥ νρ_n) ≤ P(|S_{ν−1}| ≥ νρ_n − 2n − 1). (31)

Since S_{ν−1} represents sampling times before the change (the underlying process is thus i.i.d. P_0), assuming

ν ≥ √A_n = 2^{αn/2}, (32)

we have

P(|S_{ν−1}| ≥ νρ_n − 2n − 1 | ν) ≤ E_0|S_ν| / (νρ_n − 2n − 1)
≤ f(n)^{2/3}(1+o(1)) / (n(ρ_n − (2n+1)/ν))
≤ f(n)^{2/3}(1+o(1)) / (nρ_n(1−o(1)))
= (1+o(1)) / (f(n)^{1/3}(1−o(1))) = o(1), (33)

where the second inequality holds by (28), the third inequality holds by (32) and because ρ_n = ω(1/n), and the last two equalities hold by the definitions of ρ_n and f(n). Removing the conditioning on ν,

P(|S_{ν−1}| ≥ νρ_n − 2n − 1) ≤ P(|S_{ν−1}| ≥ νρ_n − 2n − 1, ν ≥ √A_n) + P(ν < √A_n) = o(1), (34)

by (33) and since ν is uniformly distributed over {1, 2, . . . , A_n}. Hence, from (31), the first term on the right-hand side of the second inequality in (29) vanishes. This yields (25).

Footnote 9: Boundary effects due to the fact that A_n need not be a multiple of ∆̄(n) play no role asymptotically and are thus ignored.

Discussion

There is obviously a lot of flexibility around the quickest detection procedure described in Section IV-C.
Its main feature is the multiphase binary hypothesis test which, under pure noise, rejects the hypothesis that a change occurred as soon as possible while controlling false alarms. It may be tempting to simplify the detection procedure by considering, say, only two phases: the preamble phase and the full block phase. Such a scheme, which is similar in spirit to the one proposed in [1], would not work, as it would produce a much higher level of false alarms. We provide an intuitive justification for this, thereby highlighting the role of the multiphase procedure.

Consider a two-phase procedure, a preamble phase followed by a full block phase. Each time we switch to the second phase we take Θ(n) samples. Therefore, if we want to achieve a vanishing sampling rate, the probability of changing mode under pure noise must necessarily be o(1/n). By Sanov's theorem, such a probability can be achieved only if the decision to change mode is based on ω(log n) samples taken over time windows of size Θ(n) (see Footnote 10). This translates into a sampling rate of ω((log n)/n) at best, and we know that this is suboptimal, since any sampling rate ω(1/n) is achievable. The reason a two-phase scheme does not yield a sampling rate lower than ω((log n)/n) is that it is not progressive enough. To guarantee a vanishing sampling rate, the decision to switch to the full block phase must be based on at least log(n) samples, which in turn yields a suboptimal sampling rate. The important observation here is that the (average) sampling rate of the two-phase procedure essentially corresponds to the sampling rate of the first phase, and it is in this regime that it is decided when to switch to the full block phase and sample continuously for a long period of order n.

Footnote 10: Here we assume, without loss of generality, that the hypothesis test is nontrivial, i.e., that the probability of switching to the second phase is non-zero when the observations are pure noise.
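To make the contrast concrete, the following is a schematic simulation of the multiphase procedure. It is our own illustrative stand-in, not the exact construction of the proof: the streams are 0/1 sequences, "typical with respect to P_1" is crudely approximated by thresholding the empirical mean, and the phase lengths grow as in the definition of ∆_i(n). All names, thresholds, and numerical choices are hypothetical.

```python
def typical_for_p1(samples, p1=0.7, tol=0.15):
    # Crude stand-in for typicality w.r.t. a Bernoulli(p1) source:
    # accept iff the empirical mean is within tol of p1.
    return abs(sum(samples) / len(samples) - p1) < tol

def multiphase_detector(stream, n_star, delta0, s_spacing, c=1, p1=0.7):
    """Scan `stream` (a list of 0/1 samples) with the multiphase test.

    Returns (stop_time, samples_used); stop_time is None if no change is
    declared.  Phase lengths follow delta_i = min(2**(c*delta_{i-1}),
    n_star), mirroring the recursion in the text."""
    deltas = [delta0]
    while deltas[-1] < n_star:
        deltas.append(min(int(2 ** (c * deltas[-1])), n_star))
    used = 0
    t = s_spacing                      # first s-instant
    while t + sum(deltas) <= len(stream):
        pos, all_positive = t, True
        for d in deltas:               # phase zero, confirmations, full block
            chunk = stream[pos:pos + d]
            used += d
            pos += d
            if not typical_for_p1(chunk, p1):
                all_positive = False   # sleep until the next s-instant
                break
        if all_positive:
            return pos, used           # change declared
        t += s_spacing
    return None, used
```

On a stream that is pure noise for a while and then switches to a P_1-like pattern, the detector stops shortly after the change while having sampled only a small fraction of the stream: each failed preamble test costs only ∆_0(n) samples, and the long phases are entered only when earlier, cheap phases turn positive.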
In the multiphase procedure, however, this is no longer the case: the sampling rate of the first phase no longer corresponds to the sampling rate of the phase that can trigger the densest sampling mode. Note also that the decision to switch to a higher sampling rate can happen more frequently than with a two-phase scheme, because the multiple phases give us the important ability to take far fewer than n samples when we make the decision to switch to a higher sampling rate. If the intermediate phase lengths are chosen appropriately, then we enter high sampling rates with low probability, and this does not affect the average sampling rate. In general, the lengths and probability thresholds need to be chosen so that the sampling rate is dominated by the first phase. This translates into the condition (see (26) and (27))

Σ_{i=0}^{ℓ−1} p_i · ∆_{i+1}(n) = o(∆_0(n)).

D. Converse of Theorem 5

By using the same arguments as for the converse of Theorem 2 (until equation (20)), with n replaced by d_B, one shows that if

ρ_B d_B = o(1), (35)

the decoder does not sample even one component of the sent codeword with probability asymptotically bounded away from zero.

E. Achievability of Theorem 5

Fix β > 0. We show that any R ∈ (0, C(β)] is achievable with codes {C_B} whose delays satisfy d(C_B, ε_B) ≤ n*_B(β, R)(1 + o(1)) whenever the sampling rate ρ_B is such that ρ_B = f(B)/B for some f(B) = ω(1). Let X ∼ P be some channel input and let Y denote the corresponding output, i.e., (X, Y) ∼ P(·)Q(·|·). For the moment we only assume that X is such that I(X; Y) > 0. Further, we suppose that the codeword length n is linearly related to B, i.e., B/n = q for some fixed constant q > 0. We shall specify this linear dependency later to accommodate the desired rate R. Further, let f̃(n) := f(q·n)/q and ρ̃_n := f̃(n)/n; hence, by definition, we have ρ̃_n = ρ_B. Let a be some arbitrary fixed input symbol such that Q(·|a) ≠ Q(·|⋆).
Below we introduce the quantities ∆̄(n) and ∆_i(n), 1 ≤ i ≤ ℓ, which are defined as in Section IV-C but with P_0 replaced by Q(·|⋆), P_1 replaced by Q(·|a), f(n) replaced by f̃(n), and n*(α) replaced by n.

1) Codewords: preamble followed by constant-composition information symbols: Each codeword c^n(m) starts with a common preamble that consists of ∆̄(n) repetitions of the symbol a. The remaining n − ∆̄(n) components c^n_{∆̄(n)+1}(m) of c^n(m) of each message m carry information and are generated as follows. For message 1, randomly generate length-(n − ∆̄(n)) sequences x^{n−∆̄(n)} i.i.d. according to P until x^{n−∆̄(n)} is typical with respect to P. In this case we let c^n_{∆̄(n)+1}(1) := x^{n−∆̄(n)}, move to message 2, and repeat the procedure until a codeword has been assigned to each message. From (11), for any fixed m no repetition will be required to generate c^n_{∆̄(n)+1}(m) with probability tending to one as n → ∞. Moreover, by construction the codewords are essentially of constant composition (i.e., each symbol appears roughly the same number of times) and have cost nE[k(X)](1 + o(1)) as n → ∞.

2) Codeword transmission time: Define the set of start instants

s-instants := {t = j∆̄(n), j ∈ N*}.

The codeword transmission start time σ(m, ν) corresponds to the first s-instant ≥ ν (regardless of m).

3) Sampling and decoding procedures: The decoder first tries to detect the preamble by using a detection procedure similar to that in the achievability of Theorem 2, and then applies a standard message decoding isolation map. Starting at the first s-instant (i.e., at time t = ∆̄(n)), the decoder samples in phases as follows.

1) Preamble test (phase zero): Take ∆_0(n) consecutive samples and check whether they are typical with respect to Q(·|a). If the test turns negative, the decoder skips samples until the next s-instant, when it repeats the procedure. If the test turns positive, the decoder moves to the confirmation phases.
2) Preamble confirmations (variable duration, at most ℓ − 1 phases): The decoder takes another ∆_1(n) consecutive samples and checks whether they are typical with respect to Q(·|a). If the test turns negative, the decoder skips samples until the next s-instant, when it repeats phase zero (and tests ∆_0(n) samples). If the test turns positive, the decoder performs a second confirmation phase based on ∆_2(n) new samples, and so forth. If ℓ − 1 consecutive confirmation phases (with respect to the same s-instant) turn positive, the decoder moves to the message sampling phase.

3) Message sampling and isolation (ℓ-th phase): Take another n samples and check whether among these samples there are n − ∆̄(n) consecutive samples that are jointly typical with the n − ∆̄(n) information symbols of one of the codewords. If one codeword is typical, stop and declare the corresponding message. If more than one codeword is typical, declare one message at random. If no codeword is typical, the decoder stops sampling until the next s-instant and repeats phase zero. If by time A_B + n − 1 no codeword is found to be typical, the decoder declares a random message.

4) Error probability: Error probability and delay are evaluated in the limit B → ∞ with A_B = 2^{βB} and with

q = B/n < min{ I(X;Y), (I(X;Y) + D(Y||Y_⋆))/(1 + β) }. (36)

We first compute the error probability averaged over codebooks and messages. Suppose message m is transmitted. The decoding error event E_m (see (6)) can be decomposed as

E_m ⊆ E_{0,m} ∪ ∪_{m′≠m} (E_{1,m′} ∪ E_{2,m′}), (37)

where the events E_{0,m}, E_{1,m′}, and E_{2,m′} are defined as follows:
• E_{0,m}: at the s-instant corresponding to σ, the preamble test phase or one of the preamble confirmation phases turns negative, or c^n_{∆̄(n)+1}(m) is not found to be typical by time σ + n − 1;
• E_{1,m′}: the decoder stops at a time t < σ and declares m′;
• E_{2,m′}: the decoder stops at a time t between σ and σ + n − 1 (including σ and σ + n − 1) and declares m′.

From Sanov's theorem,

P_m(E_{0,m}) = ε_1(B), (38)

where ε_1(B) = o(1).
Note that this equality holds pointwise (and not only on average over codebooks) for any specific (non-random) codeword c^n(m), since by construction they all satisfy the constant composition property

||P_{c^n_{∆̄+1}(m)} − P|| ≤ (n − ∆̄)^{−1/3} = o(1) (39)

as n → ∞. Using analogous arguments as in the achievability of [8, Proof of Theorem 1], we obtain the upper bounds

P_m(E_{1,m′}) ≤ 2^{βB} · 2^{−n(I(X;Y)+D(Y||Y_⋆)−o(1))} and P_m(E_{2,m′}) ≤ 2^{−n(I(X;Y)−o(1))},

which are both valid for any fixed ε > 0 provided that B is large enough. Hence from the union bound

P_m(E_{1,m′} ∪ E_{2,m′}) ≤ 2^{−n(I(X;Y)−o(1))} + 2^{βB} · 2^{−n(I(X;Y)+D(Y||Y_⋆)−o(1))}.

Taking a second union bound over all possible wrong messages, we get

P_m(∪_{m′≠m} (E_{1,m′} ∪ E_{2,m′})) ≤ 2^B (2^{−n(I(X;Y)−o(1))} + 2^{βB} · 2^{−n(I(X;Y)+D(Y||Y_⋆)−o(1))}) =: ε_2(B), (40)

where ε_2(B) = o(1) because of (36). Combining (37), (38), and (40), we get from the union bound

P_m(E_m) ≤ ε_1(B) + ε_2(B) = o(1) (41)

for any m.

We now show that the delay of our coding scheme is at most n(1 + o(1)). Suppose codeword c^n(m) is sent. If τ > σ + n, then necessarily c^n_{∆̄+1}(m) is not typical with the corresponding channel outputs. Hence

P_m(τ − σ ≤ n) ≥ 1 − P_m(E_{0,m}) = 1 − ε_1(B) (42)

by (38). Since σ ≤ ν + ∆̄(n) and ∆̄(n) = o(n), we get (see Footnote 11)

P_m(τ − ν ≤ n(1 + o(1))) ≥ 1 − ε_1(B).

Since this inequality holds for any codeword c^n(m) that satisfies (39), the delay is no more than n(1 + o(1)) (see Definition 8). Furthermore, from (41) there exists a specific non-random code C whose error probability, averaged over messages, is less than ε_1(n) + ε_2(n) = o(1) whenever condition (36) is satisfied. Removing the half of the codewords with the highest error probability, we end up with a set C′ of 2^{B−1} codewords whose maximum error probability satisfies

max_m P_m(E_m) ≤ o(1) (43)

whenever condition (36) is satisfied. Since any codeword has cost nE[k(X)](1 + o(1)), condition (36) is equivalent to

R < min{ I(X;Y)/(E[k(X)](1 + o(1))), (I(X;Y) + D(Y||Y_⋆))/(E[k(X)](1 + o(1))(1 + β)) }, (44)

where R := B/K(C′) denotes the rate per unit cost of C′. Thus, to achieve a given R ∈ (0, C(β)) it suffices to choose the input distribution and the codeword length as

X = arg max{E[k(X′)] : X′ ∈ P(R)} and n = n*_B(β, R)

(see (8) and (9)).

Footnote 11: Recall that B/n is kept fixed and B → ∞.
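The expurgation step ("removing the half of the codewords with the highest error probability") rests on a Markov-inequality argument: the median per-message error probability is at most twice the codebook average, so the surviving half has maximum error probability at most twice ε_1(n) + ε_2(n), which is still o(1). A minimal sketch (the error values below are hypothetical):

```python
def expurgate(per_message_errors):
    """Drop the worse half of the messages.  By Markov's inequality, the
    maximum error probability among the survivors is at most twice the
    average over the original codebook."""
    kept = sorted(per_message_errors)[:len(per_message_errors) // 2]
    return kept
```

For instance, for error probabilities [0.01, 0.50, 0.02, 0.03] the average is 0.14, and after expurgation the worst surviving message has error 0.02 ≤ 2 · 0.14.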
By a previous argument, the corresponding delay is no larger than n*_B(β, R)(1 + o(1)).

5) Sampling rate: For the sampling rate, a very similar analysis as in the achievability of Theorem 2 (from equation (25) onwards, with f(n), ρ_n, n*(α), and A_n replaced by f̃(n), ρ̃_n, n*_B(β, R), and A_B, respectively) shows that

P_m(|S_τ|/τ ≥ ρ_B) = P_m(|S_τ|/τ ≥ ρ̃_n) → 0 as B, n → ∞. (45)

Note that the arguments that establish (45) rely only on the preamble detection procedure. In particular, they do not use (44) and hold for any codeword length n_B as long as n_B = Θ(B). Recall from the converse that under (35) the receiver does not sample even a single component of the sent codeword with probability bounded away from zero; hence the average error probability is bounded away from zero whenever d_B = O(B) and ρ_B = o(1/B). Moreover, for any R ∈ (0, C(β)], there exist codes {C_B} that achieve rate R at timing uncertainty β and delay n*_B(β, R)(1 + o(1)).

Footnotes: Throughout the paper we use the standard "big-O" Landau notation to characterize growth rates (see, e.g., [2, Chapter 3]); these growth rates, e.g., o(1), are intended in the limit B → ∞, unless stated otherwise. Classically, e.g., [11], change-point detection problems are formulated assuming that, from the time the change occurs, the observed process remains forever distributed according to the post-change distribution; when the process returns to its nominal distribution after a change of limited duration, the change is said to be transient. Throughout the paper logarithms are always intended to be to the base 2.
Notice that ℓ, the total number of samples, may be random under adaptive sampling but also under non-adaptive sampling, since the strategy may be randomized (but still independent of Y^{A_n+n−1}). By detectable we mean with vanishing false-alarm probability and subexponential delay. ‖·‖ refers to the L_1-norm, and a ∧ b = min{a, b}.

References

[1] A. Tchamkerten, V. Chandar, and G. Caire, "Energy and sampling constrained asynchronous communication," IEEE Transactions on Information Theory, vol. 60, no. 12, pp. 7686–7697, Dec. 2014.
[2] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed. MIT Press / McGraw-Hill, 2000.
[3] A. Tchamkerten, A. Khisti, and G. Wornell, "Information theoretic perspectives on synchronization," in Proc. IEEE International Symposium on Information Theory (ISIT), 2006, pp. 371–375.
[4] V. Chandar, A. Tchamkerten, and G. Wornell, "Optimal sequential frame synchronization," IEEE Transactions on Information Theory, vol. 54, no. 8, pp. 3725–3728, 2008.
[5] A. Tchamkerten, V. Chandar, and G. Wornell, "Communication under strong asynchronism," IEEE Transactions on Information Theory, vol. 55, no. 10, pp. 4508–4528, 2009.
[6] D. Wang, "Distinguishing codes from noise: fundamental limits and applications to sparse communication," Master's thesis, Massachusetts Institute of Technology, 2010.
[7] A. Tchamkerten, V. Chandar, and G. W. Wornell, "Asynchronous communication: Capacity bounds and suboptimality of training," IEEE Transactions on Information Theory, vol. 59, no. 3, pp. 1227–1255, Mar. 2013.
[8] V. Chandar, A. Tchamkerten, and D. Tse, "Asynchronous capacity per unit cost," IEEE Transactions on Information Theory, vol. 59, no. 3, pp. 1213–1226, Mar. 2013.
[9] Y. Polyanskiy, "Asynchronous communication: Exact synchronization, universality, and dispersion," IEEE Transactions on Information Theory, vol. 59, no. 3, pp. 1256–1270, 2013.
[10] N. Weinberger and N. Merhav, "Codeword or noise? Exact random coding exponents for joint detection and decoding," IEEE Transactions on Information Theory, vol. 60, no. 9, pp. 5077–5094, Sep. 2014.
[11] G. Lorden, "Procedures for reacting to a change in distribution," The Annals of Mathematical Statistics, pp. 1897–1908, 1971.
[12] I. V. Nikiforov, "A generalized change detection problem," IEEE Transactions on Information Theory, vol. 41, pp. 171–187, Jan. 1995.
[13] B. Broder and S. Schwartz, "Quickest detection procedures and transient signal detection," DTIC Document, Tech. Rep., 1990.
[14] C. Han, P. K. Willett, and D. A. Abraham, "Some methods to evaluate the performance of Page's test as used to detect transient signals," IEEE Transactions on Signal Processing, vol. 47, no. 8, pp. 2112–2127, 1999.
[15] T. L. Lai, "Information bounds and quick detection of parameter changes in stochastic systems," IEEE Transactions on Information Theory, vol. 44, no. 7, pp. 2917–2929, 1998.
[16] K. Premkumar, A. Kumar, and V. V. Veeravalli, "Bayesian quickest transient change detection," in Proc. Fifth International Workshop on Applied Probability (IWAP), 2010.
[17] T. Banerjee and V. Veeravalli, "Data-efficient quickest change detection in minimax settings," IEEE Transactions on Information Theory, vol. 59, no. 10, pp. 6917–6931, Oct. 2013.
[18] E. Ebrahimzadeh and A. Tchamkerten, "Sequential detection of transient changes in stochastic systems under a sampling constraint," submitted for publication.
[]
[ "Reproducing kernel Hilbert space compactification of unitary evolution groups", "Reproducing kernel Hilbert space compactification of unitary evolution groups" ]
[ "Dimitrios Giannakis \nCourant Institute of Mathematical Sciences\nNew York University\n10012New YorkNYUSA\n", "Suddhasattwa Das \nCourant Institute of Mathematical Sciences\nNew York University\n10012New YorkNYUSA\n", "Joanna Slawinska \nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\n53211MilwaukeeWIUSA\n" ]
[ "Courant Institute of Mathematical Sciences\nNew York University\n10012New YorkNYUSA", "Courant Institute of Mathematical Sciences\nNew York University\n10012New YorkNYUSA", "Department of Physics\nUniversity of Wisconsin-Milwaukee\n53211MilwaukeeWIUSA" ]
[]
A framework for coherent pattern extraction and prediction of observables of measure-preserving, ergodic dynamical systems with both atomic and continuous spectral components is developed. This framework is based on an approximation of the unbounded generator of the system by a compact operator W_τ on a reproducing kernel Hilbert space (RKHS). A key element of this approach is that W_τ is skew-adjoint (unlike regularization approaches based on the addition of diffusion), and thus can be characterized by a unique projection-valued measure, discrete by compactness, and an associated orthonormal basis of eigenfunctions. These eigenfunctions can be ordered in terms of a measure of roughness (Dirichlet energy) on the RKHS, and provide a notion of coherent observables under the dynamics akin to the Koopman eigenfunctions associated with the atomic part of the spectrum. In addition, the regularized generator has a well-defined Borel functional calculus, allowing the construction of an associated unitary evolution group {e^{tW_τ}}_{t∈R} on the RKHS by exponentiation, approximating the unitary Koopman evolution group of the original system. We establish convergence results for the spectrum and Borel functional calculus of the regularized generator to those of the original system in the limit τ → 0^+. Convergence results are also established for a data-driven formulation, where these operators are approximated using finite-rank operators obtained from observed time series. An advantage of working in spaces of observables with an RKHS structure is that one can perform pointwise evaluation and interpolation through bounded linear operators, which is not possible in L^p spaces. This enables the evaluation of data-approximated eigenfunctions on previously unseen states, as well as data-driven forecasts initialized with pointwise initial data (as opposed to probability densities in L^p).
The pattern extraction and prediction framework developed here is numerically applied to a number of ergodic dynamical systems with atomic and continuous spectra, with promising results.
10.1016/j.acha.2021.02.004
[ "https://arxiv.org/pdf/1808.01515v6.pdf" ]
52,544,490
1808.01515
11187d26d97136ed0a21e930f3a96a3b22f4dfe9
Reproducing kernel Hilbert space compactification of unitary evolution groups

Dimitrios Giannakis, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
Suddhasattwa Das, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
Joanna Slawinska, Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA

Keywords: Koopman operators; Perron-Frobenius operators; reproducing kernel Hilbert spaces; spectral estimation

A framework for coherent pattern extraction and prediction of observables of measure-preserving, ergodic dynamical systems with both atomic and continuous spectral components is developed. This framework is based on an approximation of the unbounded generator of the system by a compact operator W_τ on a reproducing kernel Hilbert space (RKHS). A key element of this approach is that W_τ is skew-adjoint (unlike regularization approaches based on the addition of diffusion), and thus can be characterized by a unique projection-valued measure, discrete by compactness, and an associated orthonormal basis of eigenfunctions. These eigenfunctions can be ordered in terms of a measure of roughness (Dirichlet energy) on the RKHS, and provide a notion of coherent observables under the dynamics akin to the Koopman eigenfunctions associated with the atomic part of the spectrum. In addition, the regularized generator has a well-defined Borel functional calculus, allowing the construction of an associated unitary evolution group {e^{tW_τ}}_{t∈R} on the RKHS by exponentiation, approximating the unitary Koopman evolution group of the original system. We establish convergence results for the spectrum and Borel functional calculus of the regularized generator to those of the original system in the limit τ → 0^+.
Convergence results are also established for a data-driven formulation, where these operators are approximated using finite-rank operators obtained from observed time series. An advantage of working in spaces of observables with an RKHS structure is that one can perform pointwise evaluation and interpolation through bounded linear operators, which is not possible in L^p spaces. This enables the evaluation of data-approximated eigenfunctions on previously unseen states, as well as data-driven forecasts initialized with pointwise initial data (as opposed to probability densities in L^p). The pattern extraction and prediction framework developed here is numerically applied to a number of ergodic dynamical systems with atomic and continuous spectra, with promising results.

Introduction

Characterizing and predicting the evolution of observables of dynamical systems is an important problem in the mathematical, physical, and engineering sciences, both theoretically and from an applications standpoint. A framework that has been gaining popularity [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22] is the operator-theoretic formulation of ergodic theory [23,24,25], where instead of directly studying the properties of the dynamical flow on the state space, one characterizes the dynamics through its action on linear spaces of observables. The two classes of operators that have been predominantly employed in these approaches are the Koopman and Perron-Frobenius (transfer) operators, which are duals to one another when defined on appropriate spaces of functions and measures, respectively. It is a remarkable fact, realized in the work of Koopman in the 1930s [26], that the action of a general nonlinear system on such spaces can be characterized through linear evolution operators, acting on observables by composition with the flow.
Thus, despite the potentially nonlinear nature of the underlying dynamical flow, many relevant problems, such as coherent pattern detection, statistical prediction, and control, can be formulated as intrinsically linear problems, making the full machinery of functional analysis available to construct stable and convergent approximation techniques. The Koopman operator U^t associated with a continuous-time, continuous flow Φ^t : M → M on a manifold M acts on functions by composition, U^t f = f ∘ Φ^t. It is a norm-preserving operator on the Banach space C^0(M) of bounded continuous functions on M, and a unitary operator on the Hilbert space L^2(µ) associated with any Borel invariant measure µ. Our main focus will be the latter Hilbert space setting, in which U = {U^t}_{t∈R} becomes a unitary evolution group. It is merely a matter of convention to consider Koopman operators instead of transfer operators, for the action of the transfer operator at time t on densities of measures in L^2(µ) is given by the adjoint U^{t*} = U^{−t} of U^t. We seek to address the following two broad classes of problems:

1. Coherent pattern extraction; that is, identification of a collection of observables in L^2(µ) having high regularity and a natural temporal evolution under U^t.
2. Prediction; that is, approximation of U^t f at arbitrary t ∈ R for a fixed observable f ∈ L^2(µ).

Throughout, we require that the methods to address these problems are data-driven; i.e., they only utilize information from the values of a function F : M → Y taking values in a data space Y, sampled finitely many times along an orbit of the dynamics.

Spectral characterization of unitary evolution groups. By Stone's theorem on one-parameter unitary groups [27,28], the Koopman group U is completely characterized by its generator: a densely defined, skew-adjoint, unbounded operator V : D(V) → L^2(µ) with D(V) ⊆ L^2(µ) and

V f = lim_{t→0} (U^t f − f)/t, f ∈ D(V).
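A finite-dimensional caricature of Stone's correspondence may help: the skew-symmetric matrix V = [[0, −ω], [ω, 0]] generates, by exponentiation, the one-parameter orthogonal (unitary) group e^{tV} of plane rotations, which satisfies the group law e^{(s+t)V} = e^{sV} e^{tV} and preserves norms. The helper names below are our own:

```python
import math

def exp_skew(omega, t):
    """e^{tV} for V = [[0, -omega], [omega, 0]]: the rotation matrix
    [[cos(wt), -sin(wt)], [sin(wt), cos(wt)]], a 2x2 analog of the
    unitary group generated by a skew-adjoint operator."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    return [[c, -s], [s, c]]

def matmul(A, B):
    # 2x2 matrix product, used to check the group law and orthogonality.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

One can verify numerically that matmul(exp_skew(ω, s), exp_skew(ω, t)) agrees with exp_skew(ω, s + t), and that each e^{tV} satisfies U U^T = I, i.e., it is norm-preserving.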
In particular, associated with V is a unique projection-valued measure (PVM) E : B(R) → L(L^2(µ)), acting on the Borel σ-algebra B(R) on the real line and taking values in the space L(L^2(µ)) of bounded operators on L^2(µ), such that

V = ∫_R iω dE(ω), U^t = ∫_R e^{iωt} dE(ω). (1)

The latter relationship expresses the Koopman operator at time t as an exponentiation of the generator, U^t = e^{tV}, which can be thought of as an operator-theoretic analog of the exponentiation of a skew-symmetric matrix yielding a unitary matrix. In fact, the construction of the map V ↦ e^{tV} is an example of the Borel functional calculus, whereby one lifts a Borel-measurable function Z : iR → C on the imaginary line iR ⊂ C to an operator-valued function

Z(V) = ∫_R Z(iω) dE(ω), (2)

acting on the skew-adjoint operator V via an integral against its corresponding PVM E. The spectral representation of the unitary Koopman group can be further refined by virtue of the fact that L^2(µ) admits the U^t-invariant orthogonal splitting

L^2(µ) = H_p ⊕ H_c, H_c = H_p^⊥,

where H_p and H_c are closed orthogonal subspaces of L^2(µ) associated with the atomic (point) and continuous components of E, respectively. In particular, on these subspaces there exist unique PVMs E_p : B(R) → L(H_p) and E_c : B(R) → L(H_c), respectively, where E_p is atomic and E_c is continuous, yielding the spectral decomposition

E = E_p ⊕ E_c. (3)

We will refer to E_p and E_c as the point and continuous spectral components of E, respectively. The subspace H_p is the closed linear span of the eigenspaces of V (and thus of U^t). Correspondingly, the atoms of E_p, i.e., the singleton sets {ω_j} ⊂ R for which E_p({ω_j}) ≠ 0, contain the eigenfrequencies of the generator. In particular, for every such ω_j, E_p({ω_j}) is equal to the orthogonal projector onto the eigenspace of V at eigenvalue iω_j, and all such eigenvalues are simple by ergodicity of the flow Φ^t. As a result, H_p admits an orthonormal basis {z_j} satisfying

V z_j = iω_j z_j, U^t z_j = e^{iω_j t} z_j, U^t f = Σ_j e^{iω_j t} ⟨z_j, f⟩_µ z_j, ∀f ∈ H_p, (4)

where ⟨·,·⟩_µ is the inner product on L^2(µ). It follows from the above that the Koopman eigenfunctions form a distinguished orthonormal basis of H_p, whose elements evolve under the dynamics by multiplication by a periodic phase factor at a distinct frequency ω_j, even if the underlying dynamical flow is nonlinear and aperiodic. In contrast, observables f ∈ H_c do not exhibit an analogous quasiperiodic evolution, and are instead characterized by a weak-mixing property (decay of correlations),

(1/t) ∫_0^t |⟨g, U^s f⟩_µ| ds → 0 as t → ∞, ∀g ∈ L^2(µ).

This is characteristic of chaotic dynamics.
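As a concrete instance of (4), consider the circle rotation Φ^t(θ) = θ + αt (mod 2π) with the Lebesgue invariant measure: z_j(θ) = e^{ijθ} satisfies U^t z_j = e^{ijαt} z_j, so the eigenfrequencies are ω_j = jα. A quick numerical check (our own illustration, with hypothetical parameter values):

```python
import cmath
import math

def eigenfunction_residual(j, alpha, t, n_grid=256):
    """Max over a theta-grid of |(U^t z_j)(theta) - e^{i j alpha t} z_j(theta)|
    for the circle rotation Phi^t(theta) = theta + alpha*t and the
    candidate Koopman eigenfunction z_j(theta) = exp(i j theta)."""
    err = 0.0
    for k in range(n_grid):
        theta = 2 * math.pi * k / n_grid
        lhs = cmath.exp(1j * j * (theta + alpha * t))       # z_j(Phi^t(theta))
        rhs = cmath.exp(1j * j * alpha * t) * cmath.exp(1j * j * theta)
        err = max(err, abs(lhs - rhs))
    return err
```

The residual is zero up to floating-point rounding for any j, α, and t, confirming that each z_j evolves by a pure phase factor under the (nonlinear-looking but measure-preserving) rotation.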
As a result, H p admits an orthonormal basis {z j } satisfying V z j = iω j z j , U t z j = e iωj t z j , U t f = j e iωj t z j , f µ z j , ∀f ∈ H p ,(4) where ·, · µ is the inner product on L 2 (µ). It follows from the above that the Koopman eigenfunctions form a distinguished orthonormal basis of H p , whose elements evolve under the dynamics by multiplication by a periodic phase factor at a distinct frequency ω j , even if the underlying dynamical flow is nonlinear and aperiodic. In contrast, observables f ∈ H c do not exhibit an analogous quasiperiodic evolution, and are characterized instead by a weak-mixing property (decay of correlations), 1 t t 0 | g, U s f µ | ds −−−→ t→∞ 0, ∀g ∈ L 2 (µ). This is characteristic of chaotic dynamics. Pointwise and spectral approximation techniques. While the two classes of pattern extraction and prediction problems listed above are obviously related by the fact that they involve the same evolution operators, in some aspects they are fairly distinct, as for the former it is sufficient to perform pointwise (or even weak) approximations of the operators, whereas the latter are fundamentally of a spectral nature. In particular, observe that a convergent approximation technique for the prediction problem can be constructed by taking advantage of the fact that U t is a bounded (and therefore continuous) linear operator, without explict consideration of its spectral properties. That is, given an arbitrary orthonormal basis {φ 0 , φ 1 , . . .} of L 2 (µ) with associated orthogonal projection operators Π l : L 2 (µ) → span{φ 0 , . . . , φ l−1 }, the finite-rank operator U t l = Π l U t Π l is fully characterized by the matrix elements U t ij = φ i , U t φ j µ with 0 ≤ i, j ≤ l − 1, and by continuity of U t , the sequence of operators U t l converges pointwise to U t . 
Thus, if one has access to data-driven approximations U^t_{ij,N} of the matrix elements of U^t_l determined from N measurements of F taken along an orbit of the dynamics, and these approximations converge as N → ∞, then, as l → ∞ and N ≫ l, the corresponding finite-rank operators U^t_{l,N} converge pointwise to U^t. This property was employed in [12] in a technique called diffusion forecasting, whereby the approximate matrix elements U^t_{ij,N} are computed in a data-driven basis constructed from samples of F using the diffusion maps algorithm (a kernel algorithm for manifold learning) [29]. By spectral convergence results for kernel integral operators established in [30] and ergodicity, as N → ∞, the data-driven basis functions converge to an orthonormal basis of L^2(µ) in an appropriate sense, and thus the corresponding approximate Koopman operators U^t_{l,N} converge pointwise to U^t as described above. It was demonstrated that diffusion forecasts of observables of the Lorenz 63 (L63) system have skill approaching that of ensemble forecasts with the true model, despite the fact that the Koopman group in this case has a purely continuous spectrum (apart from the trivial eigenfrequency at 0). Pointwise-convergent approximation techniques for Koopman operators were also studied in [31,32], in the context of extended dynamic mode decomposition (EDMD) algorithms [14]. However, these methods require the availability of an orthonormal basis of L^2(µ) of sufficient regularity, which, apart from special cases, is difficult to have in practice (particularly in cases where the support of µ is an unknown, measure-zero subset of the state space manifold M).
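To illustrate the pointwise (strong) convergence of Π_l U^t Π_l, here is a sketch for the circle rotation in the Fourier basis φ_k(θ) = e^{ikθ}, with the matrix elements ⟨φ_i, U^t φ_k⟩_µ computed by quadrature rather than analytically, mimicking how they would be estimated from samples. This is our own toy example, not the diffusion-forecast construction of [12]; all names and parameter values are illustrative.

```python
import cmath
import math

def galerkin_evolution(f, alpha, t, l, n_grid=64):
    """Approximate U^t f = f o Phi^t for the circle rotation
    Phi^t(theta) = theta + alpha*t, using the finite-rank operator
    Pi_l U^t Pi_l in the Fourier basis phi_k(theta) = e^{i k theta},
    k = -l..l.  Inner products w.r.t. the invariant (Lebesgue) measure
    are computed with the trapezoidal rule on n_grid equispaced nodes."""
    nodes = [2 * math.pi * k / n_grid for k in range(n_grid)]
    ks = list(range(-l, l + 1))
    # Expansion coefficients c_k = <phi_k, f>_mu.
    c = {k: sum(cmath.exp(-1j * k * th) * f(th) for th in nodes) / n_grid
         for k in ks}
    # Matrix elements U_ik = <phi_i, U^t phi_k>_mu.
    U = {(i, k): sum(cmath.exp(-1j * i * th)
                     * cmath.exp(1j * k * (th + alpha * t))
                     for th in nodes) / n_grid
         for i in ks for k in ks}
    def approx(theta):
        return sum(U[(i, k)] * c[k] * cmath.exp(1j * i * theta)
                   for i in ks for k in ks)
    return approx
```

For f = cos and l = 1 the approximation already reproduces U^t f = cos(· + αt) to machine precision, since cos is a trigonometric polynomial of degree 1; for generic smooth observables the error decays as l grows, reflecting the strong convergence U^t_l → U^t.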
Of course, this is not to say that the spectral decomposition in (3) is irrelevant in a prediction setting, for it reveals that an orthonormal basis of L 2 (µ) that splits between the invariant subspaces H p and H c would yield a more efficient representation of U t than an arbitrary basis, which could be made even more efficient by choosing the basis of H p to be a Koopman eigenfunction basis (e.g., [17]). Still, so long as a method for approximating a basis of L 2 (µ) is available, arranging for compatibility of the basis with the spectral decomposition of U t is a matter of optimizing performance rather than ensuring convergence. In contrast, as has been recognized since the earliest techniques in this area [1,2,3,4], in coherent pattern extraction problems the spectral properties of the evolution operators play a crucial role from the outset. In the case of measure-preserving ergodic dynamics studied here, the Koopman eigenfunctions in (4) provide a natural notion of temporally coherent observables that capture intrinsic frequencies of the dynamics. Unlike the eigenfunctions of other operators commonly used in data analysis (e.g., the covariance operators employed in the proper orthogonal decomposition [33]), Koopman eigenfunctions have the property of being independent of the observation map F , thus leading to a definition of coherence that is independent of the observation modality used to probe the system. In applications in fluid dynamics [6,34], climate dynamics [35], and many other domains, it has been found that the patterns recovered by Koopman eigenfunction analysis have high physical interpretability and ability to recover dynamically significant timescales from multiscale input data. 
Yet, despite these and other attractive theoretical properties of evolution operators, the design of data-driven approximation methods with rigorous convergence guarantees that can naturally handle both the point and continuous spectra of the operators is challenging, and several open problems remain. As an illustration of these challenges, and to place our work in context, it is worthwhile noting that besides approximating the continuous spectrum (which is obviously challenging), rigorous approximation of the atomic spectral component E p is also non-trivial, as, apart from the case of circle rotations, it is concentrated on a dense, countable subset of the real line. In applications, the density of the atomic part of the spectrum and the possibility of the presence of a continuous spectral component necessitate the use of some form of regularization to ensure well-posedness of spectral approximation schemes. In the transfer operator literature, the use of regularization techniques such as domain restriction to function spaces where the operators are quasicompact [2], or compactification by smoothing by kernel integral operators [8], has been prevalent, though these methods generally require more information than the single observable time series assumed to be available here. On the other hand, many of the popular techniques in the Koopman operator literature, including the dynamic mode decomposition (DMD) [6,7] and EDMD [14], do not explicitly consider regularization, and instead implicitly regularize the operators by projection onto finite-dimensional subspaces (e.g., Krylov subspaces and subspaces spanned by general dictionaries of observables), with difficult-to-control asymptotic behavior. To our knowledge, the first spectral convergence results for EDMD [16] were obtained for a variant of the framework called Hankel-matrix DMD [15], which employs dictionaries constructed by application of delay-coordinate maps [36] to the data observation function.
However, these results are based on an assumption that the observation map lies in a finite-dimensional Koopman invariant subspace (which must necessarily be a subspace of H p ); an assumption unlikely to hold in practice. This assumption is relaxed in [32], who establish weak spectral convergence results implied by strongly convergent approximations of the Koopman operator derived through EDMD. This approach, however, makes use of an a priori known orthonormal basis of L 2 (µ), the availability of which is not required in Hankel-matrix DMD. A fairly distinct class of approaches to (E)DMD performs spectral estimation for Koopman operators using harmonic analysis techniques [3,4,21,22]. Among these, [3,4] consider a spectral decomposition of the Koopman operator closely related to (3), though expressed in terms of spectral measures on S 1 as appropriate for unitary operators, and utilize harmonic averaging (discrete Fourier transform) techniques to estimate eigenfrequencies and the projections of the data onto Koopman eigenfunctions. While this approach can theoretically recover the correct eigenfrequencies corresponding to eigenfunctions with nonzero projections onto the observation map, its asymptotic behavior in the limit of large data exhibits a highly singular dependence on the frequency employed for harmonic averaging; this hinders the construction of practical algorithms that converge to the true eigenfrequencies by examining candidate eigenfrequencies in finite sets. The method also does not address the problem of approximating the continuous spectrum, or the computation of Koopman eigenfunctions on the whole state space (as opposed to eigenfunctions computed on orbits). The latter problem was addressed in [22], who employed the theory of reproducing kernel Hilbert spaces (RKHSs) [37] to identify conditions for a candidate frequency ω ∈ R to be a Koopman eigenfrequency based on the RKHS norm of the corresponding Fourier function e iωt sampled on an orbit.
For the frequencies meeting these criteria, they constructed pointwise-defined Koopman eigenfunctions in RKHS using out-of-sample extension techniques [38]. While this method also suffers from a singular behavior in ω, it was found to significantly outperform conventional harmonic averaging techniques, particularly in mixed-spectrum systems with non-trivial atomic and continuous spectral components simultaneously present. However, the question of approximating the continuous spectrum remains open. In [21], a promising approach for estimating both the atomic and continuous parts of the spectrum was introduced, based on spectral moment estimation techniques. This approach consistently approximates the spectral measure of the Koopman operator on the cyclic subspace associated with a given scalar-valued observable, and is also capable of identifying its atomic, absolutely continuous, and singular continuous components. However, since it operates on cyclic subspaces associated with individual observables, it appears difficult to extend to applications involving a high-dimensional data space Y , including spatiotemporal systems where the dimension of Y is formally infinite. In [13,17,18,19] a different approach was taken, focusing on approximations of the eigenvalue problem for the skew-adjoint generator V , as opposed to the unitary Koopman operators U t , in an orthonormal basis of an invariant subspace of H p (of possibly infinite dimension) learned from observed data via kernel algorithms [29,30,39,40,41], as in diffusion forecasting. A key ingredient of these techniques is a family K 1 , K 2 , . . . of kernel integral operators on L 2 (µ) constructed from delay-coordinate-mapped data with Q delays, such that, in the infinite-delay limit, K Q converges in norm to a compact integral operator K ∞ : L 2 (µ) → L 2 (µ) commuting with U t for all t ∈ R.
Because commuting operators have common eigenspaces, and the eigenspaces of compact operators at nonzero eigenvalues are finite-dimensional, the eigenfunctions of K ∞ (approximated by eigenfunctions of K Q at large Q) provide a highly efficient basis to perform Galerkin approximation of the Koopman eigenvalue problem. In [13,17,18,19], a well-posed Galerkin method was formulated by regularizing the raw generator V by the addition of a small amount of diffusion, represented by a positive-semidefinite self-adjoint operator ∆ : D(∆) → L 2 (µ) on a suitable domain D(∆) ⊂ D(V ). This leads to an advection-diffusion operator L = V − θ∆, θ > 0,(5) whose eigenvalues and eigenfunctions can be computed through provably convergent Galerkin schemes based on classical approximation theory for variational eigenvalue problems [42]. The diffusion operator in (5) is constructed so as to commute with V , so that every eigenfunction of L is a Koopman eigenfunction, with eigenfrequency equal to the imaginary part of the corresponding eigenvalue. Moreover, it was shown that the variational eigenvalue problem for L can be consistently approximated from data under realistic assumptions on the dynamical system and observation map. Advection-diffusion operators as in (5) can, in some cases, also provide a notion of coherent observables in the continuous spectrum subspace H c , although from this standpoint the results are arguably not very satisfactory. In particular, it follows from results obtained in [43] that if the support X ⊆ M of the invariant measure µ has manifold structure, and ∆ is chosen to be a Laplacian or weighted Laplacian for a suitable Riemannian metric, then the spectrum of L contains only isolated eigenvalues, irrespective of the presence of continuous spectrum [17]. However, if V has a non-empty continuous spectrum, then there exists no smooth Riemannian metric whose corresponding Laplacian ∆ commutes with V , meaning that L is necessarily non-normal.
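A toy illustration of the advection-diffusion regularization (5) (our own example, not the paper's construction; the circle rotation, grid size, and parameter values are all assumptions): take V = α d/dx and ∆ = −d²/dx² on the circle, discretized by periodic finite differences. On the Fourier modes e^{ijx} one has L e^{ijx} = (ijα − θj²) e^{ijx}, so the imaginary parts of the eigenvalues recover the Koopman eigenfrequencies jα while the diffusion damps high modes.

```python
import numpy as np

# Finite-difference discretization of L = V - theta*Delta on the circle,
# with V = alpha*d/dx and Delta = -d^2/dx^2 (both periodic).
n, alpha, theta = 256, 1.0, 0.1
h = 2 * np.pi / n

D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
D[0, -1], D[-1, 0] = -1 / (2 * h), 1 / (2 * h)      # periodic central difference
Lap = (2 * np.eye(n) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2
Lap[0, -1] = Lap[-1, 0] = -1 / h**2                 # periodic -d^2/dx^2

L = alpha * D - theta * Lap
evals = np.linalg.eigvals(L)

# Eigenvalue nearest to the j = 1 prediction i*alpha - theta
target = 1j * alpha - theta
print(np.min(np.abs(evals - target)))
```

All computed eigenvalues have nonpositive real parts −θj² (up to discretization error), consistent with L generating a dissipative semigroup.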
As is well known, the spectra of non-normal operators can have several undesirable, or difficult to control, properties, including extreme sensitivity to perturbations and failure to have a complete basis of eigenvectors. The behavior of L is even more difficult to understand if X is not a smooth manifold, and V possesses continuous spectrum. In [13,17,18], these difficulties are avoided by effectively restricting V to an invariant subspace of H p through a careful choice of data-driven basis, but this restriction precludes the method from identifying coherent observables in H c . Put together, these facts call for a different regularization approach to (5) that can seamlessly handle both the point and continuous spectra of V . Contributions of this work. In this paper, we propose a data-driven framework for pattern extraction and prediction in ergodic dynamical systems, which retains the advantageous aspects of [12,13,17,18] through the use of kernel integral operators to provide orthonormal bases of appropriate regularity, while being naturally adapted to dynamical systems with arbitrary (pure point, mixed, or continuous) spectral characteristics. The key element of our approach is to replace the diffusion regularization in (5) by a compactification of the skew-adjoint generator V of such systems (which is unbounded, and has complicated spectral behavior), mapping it to a family of compact, skew-adjoint operators W τ : H τ → H τ , τ > 0, each acting on an RKHS H τ of functions on the state space manifold M . In fact, the operators W τ are not only compact, they are Hilbert-Schmidt integral operators with continuous kernels. Moreover, the spaces H τ employed in this framework are universal, i.e., lie dense in the space of continuous functions on M [44], and have Markovian reproducing kernels. 
We use the unitary operator group {e tWτ } t∈R generated by W τ as an approximation of the Koopman group U , and establish spectral and pointwise convergence as τ → 0 in an appropriate sense. This RKHS approach has the following advantages. (i) The fact that W τ is skew-adjoint avoids non-normality issues, and allows decomposition of these operators in terms of unique spectral measures Ẽ τ : B(R) → L(H τ ). The existence of Ẽ τ allows in turn the construction of a Borel functional calculus for W τ , meaning in particular that operator exponentiation, e tWτ , is well defined. Moreover, by compactness of W τ , the measures Ẽ τ are purely atomic, have bounded support, and are thus characterized by a countable set of bounded, real-valued eigenfrequencies with a corresponding orthonormal eigenbasis of H τ . (ii) For systems that do possess nontrivial Koopman eigenfunctions, there exists a subset of the eigenfunctions of W τ converging to them as τ → 0. Crucially, however, the eigenfunctions of W τ provide a basis for the whole of L 2 (µ), including the continuous spectrum subspace H c , that evolves coherently under the dynamics as an approximate Koopman eigenfunction basis. (iii) The evaluation of e tWτ in the eigenbasis of W τ leads to a stable and efficient scheme for predicting observables, which can be initialized with pointwise initial data in M . This improves upon diffusion forecasting [12], as well as comparable prediction techniques operating directly on L 2 (µ), which produce "weak" forecasts (i.e., expectation values of observables with respect to probability densities in L 2 (µ)). (iv) Our framework is well-suited for data-driven approximation using techniques from statistics and machine learning [30,38,45].
In particular, the theory of interpolation and out-of-sample extension in RKHS allows for consistent and stable approximation of quantities of interest (e.g., the eigenfunctions of W τ and the action of e tWτ on a prediction observable), based on data acquired on a finite trajectory in the state space M . In our main results, Theorems 1, 2 and Corollary 3, we prove the spectral convergence of W τ to V in an appropriate sense by defining auxiliary compact operators acting on L 2 (µ). In Theorem 16, we give a data-driven analog of our main results, indicating how to construct finite-rank operators from finite datasets without prior knowledge of the underlying system and/or state space, and how spectral convergence still holds in an appropriate sense. Outline of the paper. In Section 2, we make our assumptions on the underlying system precise, and also state our main results, Theorems 1, 2 and Corollary 3. This is followed by results on compactification of operators in RKHS, Theorems 4-9 in Section 3, which will be useful for the proofs of the main results. Before proving the main results, we also review some concepts from ergodic theory and functional analysis in Section 4. Then, in Sections 5 and 6, we prove Theorems 4-7 and 8, 9, respectively, while Section 7 contains the proof of our main results. In Section 8, we describe a data-driven method to approximate the compactified generator W τ , and establish its convergence (Theorem 16). In Section 9, we present illustrative numerical examples of our framework applied to dynamical systems with both purely atomic and continuous Koopman spectra, namely a quasiperiodic rotation on a 2-torus, and the Rössler and Lorenz 63 (L63) systems. The paper ends with concluding remarks. Main results All of our main results will use the following standing assumptions and notations.
This assumption is met by many dynamical systems encountered in applications, including ergodic flows on compact manifolds with regular invariant measures (in which case M = M = X), certain dissipative ordinary differential equations on noncompact manifolds (e.g., the Lorenz 63 (L63) system [46], where M = R 3 , M is an appropriate absorbing ball [47], and X a fractal attractor [48]), and certain dissipative partial differential equations with inertial manifolds [49] (where M is an infinite-dimensional function space). In what follows, we seek to compactify the generator V , whose action is similar to that of a differentiation operator along the trajectories of the flow. Intuitively, one way of achieving this is to compose V with appropriate smoothing operators. To that end, we will employ kernel integral operators associated with reproducing kernel Hilbert spaces (RKHSs). Kernels and their associated integral operators. In the context of interest here, a kernel will be a continuous function k : M × M → C, which can be thought of as a measure of similarity or correlation between pairs of points in M . Associated with every kernel k and every finite, compactly supported Borel measure ν (e.g., the invariant measure µ) is an integral operator K : L 2 (ν) → C 0 (M ), acting on f ∈ L 2 (ν) as Kf := ∫ M k(·, y)f (y) dν(y).(6) If, in addition, k lies in C r (M × M ), then K imparts this smoothness to Kf , i.e., Kf ∈ C r (M ). Note that the compactness of supp(ν) is important for this conclusion to hold. The kernel k is said to be Hermitian if k(x, y) = k * (y, x) for all x, y ∈ M . It will be called positive-definite if for every x 1 , . . . , x n ∈ M and a 1 , . . . , a n ∈ C, the sum Σ n i,j=1 a * i a j k(x i , x j ) is non-negative, and strictly positive-definite if the sum is zero iff each of the a 1 , . . . , a n equals zero. Clearly, every real, Hermitian kernel is symmetric, i.e., k(x, y) = k(y, x) for all x, y ∈ M .
Aside from inducing an operator mapping into C r (M ), a kernel k also induces an operator G = ιK on L 2 (ν), where ι : C 0 (M ) → L 2 (ν) is the canonical L 2 inclusion map on continuous functions. The operator G is a Hilbert-Schmidt integral operator, i.e., it is compact and has finite Hilbert-Schmidt norm, G HS := (tr(G * G)) 1/2 = k L 2 (ν×ν) . Moreover, if k is Hermitian, G is self-adjoint, and there exists an orthonormal basis of L 2 (ν) consisting of its eigenfunctions. A kernel k will be called a Markov kernel with respect to ν if the associated integral operator G : L 2 (ν) → L 2 (ν) is Markov, i.e., (i) Gf ≥ 0 if f ≥ 0; (ii) Gf = f if f is constant; and (iii) Gf L 2 (ν) ≤ f L 2 (ν) , for all f ∈ L 2 (ν). Reproducing kernel Hilbert spaces. An RKHS on M is a Hilbert space H of complex-valued functions on M with the special property that for every x ∈ M , the point-evaluation map δ x : H → C, δ x f = f (x), is a bounded, and thus continuous, linear functional. By the Riesz representation theorem, every RKHS has a unique reproducing kernel, i.e., a kernel k : M × M → C such that for every x ∈ M the kernel section k(x, ·) lies in H, and for every f ∈ H, f (x) = δ x f = k(x, ·), f H , where ·, · H is the inner product of H, assumed conjugate-linear in the first argument. It then follows that k is Hermitian. Moreover, according to the Moore-Aronszajn theorem [50], given a symmetric, positive-definite kernel k : M × M → C, there exists a unique RKHS H for which k is the reproducing kernel. If k is continuous and strictly positive-definite, H is a dense subset of C 0 (M ) [44]. In fact, for every r ≥ 0, if k ∈ C r (M × M ), then H is a dense subset of C r (M ) [37]. Moreover, the range of K from (6) lies in H, so we can view K as an operator K : L 2 (ν) → H between Hilbert spaces. With this definition, K is compact, and the adjoint operator K * : H → L 2 (ν) maps f ∈ H into its L 2 (ν) equivalence class, i.e., K * = ι| H and G = K * K.
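The Markov properties (i)-(iii) above can be seen at the sample level. The following sketch uses the simple row-stochastic normalization of a Gaussian kernel matrix (a standard device in kernel smoothing; the paper itself uses the bistochastic normalization of [51], and the sample points and bandwidth below are assumptions of the example):

```python
import numpy as np

# Row-normalizing a Gaussian kernel matrix on samples y_1,...,y_N gives a
# discrete Markov operator G: nonnegative, preserving constants, with
# spectrum contained in the closed unit disc.
rng = np.random.default_rng(3)
y = rng.uniform(0.0, 1.0, size=40)                  # samples of the measure nu
K = np.exp(-(y[:, None] - y[None, :])**2 / 0.05)    # symmetric Gaussian kernel
G = K / K.sum(axis=1, keepdims=True)                # row-stochastic normalization

ones = np.ones(40)
print(np.allclose(G @ ones, ones))                  # (ii) constants are preserved
print((G >= 0).all())                               # (i) positivity preservation
print(np.max(np.abs(np.linalg.eigvals(G))) <= 1 + 1e-10)  # spectral radius <= 1
```

Row normalization destroys the symmetry of K; the bistochastic normalization of [51] is designed precisely to recover a symmetric (self-adjoint) Markov kernel.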
For any compact subset S ⊆ M , one can similarly define H(S) to be the RKHS induced on S by the kernel k| S×S . In fact, upon restriction to the support of ν, the range of K is a dense subspace of H(supp(ν)). In particular, if k is strictly positive-definite, H(supp(ν)), and thus ran K| supp(ν) , are dense subspaces of C 0 (supp(ν)). Nyström extension. Let H be an RKHS on M with reproducing kernel k. Then, the Nyström extension operator N : D(N ) → H acts on a subspace D(N ) of L 2 (ν), mapping each element f in its domain to a function N f ∈ H, such that N f lies in the same L 2 (ν) equivalence class as f . In other words, N f (x) = f (x) for ν-a.e. x ∈ M , and K * N is the identity on D(N ). It can also be shown that N K * is the identity on H(supp(ν)). If the kernel k is strictly positive-definite, then D(N ) is a dense subspace of L 2 (ν). We will give a precise constructive definition of N in Section 4. The following assumption specifies our nominal requirements on kernels pertaining to regularity and existence of an associated RKHS. Assumption 2. p : M × M → R is a C r , symmetric, strictly positive-definite Markov kernel with respect to the invariant measure µ, with p > 0. We will later describe how kernels satisfying Assumption 2 can easily be constructed from real, symmetric, strictly positive-definite, C r kernels using the bistochastic kernel normalization technique proposed in [51]. It should be noted that many of our results will require r = 2 differentiability class in Assumptions 1 and 2, but in some cases that requirement will be relaxed to r = 1 or 0. One-parameter kernel families. Let P : L 2 (µ) → H be the integral operator associated with a kernel p satisfying Assumption 2, taking values in the corresponding RKHS H. The associated operator G = P * P on L 2 (µ) has positive eigenvalues, which can be ordered as 1 = λ 0 > λ 1 ≥ . . .. Given a real, orthonormal basis {φ 0 , φ 1 , . . .
.} of L 2 (µ) consisting of corresponding eigenfunctions, it is known from RKHS theory [37] that {ψ 0 , ψ 1 , . . .} with ψ j = λ −1/2 j P φ j is an orthonormal basis of H(X). Defining λ τ,j := exp(τ (1 − λ −1 j )), ψ τ,j := (λ τ,j /λ j ) 1/2 ψ j , p τ (x, y) := Σ ∞ j=0 ψ τ,j (x)ψ τ,j (y),(7) where τ > 0, and x, y are arbitrary points in M , the following theorem establishes the existence of a one-parameter family of RKHSs, indexed by τ , and an associated Markov semigroup on L 2 (µ). Theorem 1 (Markov kernels). Let Assumptions 1 and 2 hold. Then, the series expansion for p τ (x, y) in (7) converges to the values of C r , symmetric, strictly positive-definite Markov kernels on M , and the convergence is uniform on X × X. Moreover, the following hold: (i) The RKHS H τ on M corresponding to p τ lies dense in C 0 (M ), and for every 0 < τ 1 < τ 2 , the inclusions H τ2 (X) ⊆ H τ1 (X) ⊂ H(X) hold. Moreover, {ψ τ,0 , ψ τ,1 , . . .} is an orthonormal basis of H τ (X). (ii) For every τ > 0, the operator G τ = P * τ P τ , where P τ : L 2 (µ) → H τ is the integral operator associated with p τ , is a positive-definite, self-adjoint, compact Markov operator on L 2 (µ). (iii) Define G 0 := I L 2 (µ) . Then, the family {G τ } τ ≥0 forms a strongly continuous Markov semigroup; i.e., G τ1+τ2 = G τ1 • G τ2 for every τ 1 , τ 2 ≥ 0, and G τ converges pointwise to the identity operator as τ → 0 + . We will now use the Markov operators G τ from Theorem 1 to compactify the generator V , and then establish various ways in which these compactifications converge to V . In what follows, N τ : D(N τ ) → H τ will be the Nyström operator associated with H τ . We also let H ∞ = ∩ τ >0 D(N τ ) be the dense subspace of L 2 (µ) whose elements have H τ representatives for every τ > 0. Note that H ∞ is dense since it contains all finite linear combinations of the φ j . Similarly, setting H ∞ = ∩ τ >0 H τ , it follows that H ∞ (X) is a dense subspace of H(X).
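At the matrix level, the semigroup structure in Theorem 1(iii) follows directly from the eigenvalue formula λ τ,j = exp(τ (1 − λ −1 j )) in (7). The sketch below is an assumption-laden toy (a random orthonormal eigenbasis and hand-picked spectrum, not the paper's kernel construction) illustrating the semigroup property and the convergence G τ → I as τ → 0 + :

```python
import numpy as np

# Toy model of the family (7): given a self-adjoint operator G with eigenvalues
# 1 = lam_0 > lam_1 >= ... > 0 and orthonormal eigenvectors phi_j, define
# G_tau = sum_j exp(tau*(1 - 1/lam_j)) phi_j phi_j^T.
rng = np.random.default_rng(0)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal eigenvectors
lam = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1])     # spectrum of G

def G_tau(tau):
    lam_tau = np.exp(tau * (1.0 - 1.0 / lam))      # eigenvalues of G_tau, as in (7)
    return (Q * lam_tau) @ Q.T

print(np.allclose(G_tau(0.3) @ G_tau(0.4), G_tau(0.7)))   # semigroup property
print(np.linalg.norm(G_tau(1e-8) - np.eye(n)))            # -> identity as tau -> 0+
```

Note that λ τ,0 = 1 for all τ (since λ 0 = 1), while every λ j < 1 is damped exponentially in τ , mirroring the heat-semigroup-like smoothing of G τ .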
Given a Borel-measurable function Z : iR → C and a densely-defined skew-adjoint operator A, Z(A) will denote the operator-valued function obtained through the Borel functional calculus as in Section 1. For every set Ω ⊂ C, ∂Ω denotes its boundary. Theorem 2 (Main theorem). Under Assumptions 1, 2 with r = 2, and the definitions in (7), the following hold for every τ > 0: (i) The operator W τ := P τ V P * τ : H τ → H τ is a well-defined, skew-adjoint, Hilbert-Schmidt operator. (ii) The operator G τ V : D(V ) → L 2 (µ) extends to a Hilbert-Schmidt integral operator B τ : L 2 (µ) → L 2 (µ). Moreover, the restrictions of G τ V and B τ on the dense subspace D(N τ ) ⊆ D(V ) coincide with the operator P * τ W τ N τ . (iii) The operators B τ and W τ have purely atomic spectral measures E τ : B(R) → L(L 2 (µ)) and Ẽ τ : B(R) → L(H τ ), respectively, with the same eigenvalues. (iv) For every bounded Borel measurable set Ω ⊂ R such that E(∂Ω) = 0, E τ (Ω) and P * τ Ẽ τ (Ω)N τ converge strongly to E(Ω), respectively on L 2 (µ) and H ∞ , as τ → 0 + . Moreover, for every τ > 0, there exists a unitary operator U τ : L 2 (µ) → H τ such that U * τ Ẽ τ (Ω)U τ converges to E(Ω), strongly on L 2 (µ). (v) For every bounded continuous function Z : iR → C, as τ → 0 + , P * τ Z(W τ )N τ and U * τ Z(W τ )U τ converge to Z(V ), in the strong operator topologies of H ∞ and L 2 (µ), respectively. (vi) For every element iω, ω ∈ R, of the spectrum of the generator V , there exists a continuous curve τ → ω τ such that iω τ is an eigenvalue of B τ and W τ , and lim τ →0 + ω τ = ω. Remark. In general, the operators B τ are non-normal, and have non-orthogonal eigenspaces. As a result, unlike PVMs associated with skew-adjoint operators (including the measures Ẽ τ associated with W τ ), the corresponding spectral measures E τ take values in the set of non-orthogonal projection operators on L 2 (µ). A complete definition of E τ will be given in Section 3.
The skew-adjoint operator W τ from Theorem 2 can be viewed as a compact approximation to the generator V , which is unbounded and exhibits complex spectral behavior (see Section 1). This approximation has a number of advantages for both coherent pattern extraction and prediction. In particular, regardless of the spectrum of V , W τ has a complete orthonormal basis of eigenfunctions, which are C 2 functions lying in H τ . These eigenfunctions recover Koopman eigenfunctions as special cases, in the sense that for every Koopman eigenfunction, there exists a sequence of eigenfunctions of W τ that converges to it as τ → 0 (in L 2 (µ) norm). This suggests that the eigenfunctions of W τ are good candidates for coherent observables of high regularity, which are well defined for systems with general spectral characteristics. Moreover, the discrete spectra of compact, skew-adjoint operators can be used to construct, and approximate to any degree of accuracy, the Borel functional calculi of these operators, and in particular to perform prediction through exponentiation of W τ . The eigenvalues and eigenfunctions of the smoothing operators P τ employed in the construction of W τ can also be easily derived from those of P with little computational overhead. With regards to prediction, let {iω τ,0 , iω τ,1 , . . .} be the set of eigenvalues of W τ (and B τ ), ordered so that |ω τ,0 | ≥ |ω τ,1 | ≥ · · · ≥ 0. Note that the iω τ,j come in complex-conjugate pairs, and 0 is the only accumulation point of the sequence ω τ,0 , ω τ,1 , . . . by skew-adjointness and compactness of W τ , respectively. Let also {ζ τ,0 , ζ τ,1 , . . .} be an orthonormal basis of H τ consisting of corresponding eigenfunctions. The following is a corollary of Theorem 2, which shows that the evolution of an observable in L 2 (µ) under U t can be evaluated to any degree of accuracy by evolution of an approximating observable in H ∞ under e tWτ . Corollary 3 (Prediction).
For every τ > 0, W τ generates a norm-continuous group of unitary operators e tWτ : H τ → H τ , t ∈ R. Moreover, for any observable f ∈ L 2 (µ) and error bound ε > 0, there exists f̃ ∈ H ∞ such that for all t ∈ R, U t f − U t f̃ µ < ε, and lim τ →0 + U t f̃ − e tWτ f̃ µ = 0, e tWτ f̃ = Σ ∞ j=0 e itωτ,j ζ τ,j , f̃ Hτ ζ τ,j . Remark. The function e tWτ f̃ lies in H τ , and is therefore a continuous function which we employ as a predictor for the evolution of the observable f under U t . Corollary 3 suggests that to obtain this predictor, we first regularize f by approximating it by a function f̃ ∈ H ∞ , and then invoke the functional calculus for the compact operator W τ to evolve f̃ as an approximation of U t f . Note that error bounds analogous to that in Corollary 3 can be obtained for operator-valued functions of the generator other than the exponential functions U t = e tV . A constructive procedure for obtaining the predictor in a data-driven setting will be described in Section 8. Compactification schemes for the generator In this section, we lay out various schemes for obtaining compact operators by composing the generator V with operators derived from kernels. They are of independent interest, as, with appropriate modifications, they apply to more general classes of unbounded, skew- or self-adjoint operators obtained by extension of differentiation operators. In some cases, the following weaker analog of Assumption 2 will be sufficient. Assumption 3. k : M × M → R is a C 1 , symmetric, positive-definite kernel. Given the RKHS H ⊂ C 1 (M ) associated with k, and the corresponding integral operators K : L 2 (µ) → H and G = K * K : L 2 (µ) → L 2 (µ), we begin by formally introducing the operators A : L 2 (µ) → L 2 (µ) and W : H → H, where A = V G, W = KV K * .(8) Note that it is not necessarily the case that these operators are well defined, for the ranges of G and K * may lie outside of the domain of V .
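The eigen-expansion form of the predictor in Corollary 3 can be illustrated at the matrix level. The sketch below (a finite-dimensional stand-in for W τ ; the random matrix, time, and observable are illustrative assumptions, not the paper's operator) evolves an observable through the eigendecomposition of a skew-adjoint operator and checks the result against a direct series evaluation of the matrix exponential:

```python
import numpy as np

# For a skew-adjoint W with eigenpairs (i*omega_j, zeta_j),
# e^{tW} f = sum_j e^{i*t*omega_j} <zeta_j, f> zeta_j, and the evolution is unitary.
rng = np.random.default_rng(1)
n, t = 8, 0.7
A = rng.standard_normal((n, n))
W = A - A.T                        # real skew-symmetric => skew-adjoint, normal
f = rng.standard_normal(n)

evals, Z = np.linalg.eig(W)        # purely imaginary i*omega_j, orthonormal zeta_j
pred = (Z * np.exp(t * evals)) @ (Z.conj().T @ f)   # eigen-expansion of e^{tW} f

# Reference: e^{tW} f via a plain Taylor series for the matrix exponential
E, term = np.eye(n), np.eye(n)
for k in range(1, 40):
    term = term @ (t * W) / k
    E = E + term

print(np.allclose(pred, E @ f))                               # expansions agree
print(np.isclose(np.linalg.norm(pred), np.linalg.norm(f)))    # unitary evolution
```

In the data-driven setting of Section 8, the role of Z is played by the eigenfunctions ζ τ,j sampled on the trajectory, and the inner products ⟨ζ τ,j , f̃⟩ are evaluated in H τ .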
Nevertheless, as the following two theorems establish, under sufficient regularity conditions on the kernel, A and W are well-defined, and in fact compact, operators. Theorem 4 (Pre-smoothing). Let Assumptions 1 and 3 hold, and define k′ : M × M → R as the C 0 kernel with k′(x, y) := V k(·, y)(x). Then: (i) The range of G lies in the domain of V . (ii) The operator A from (8) is a well-defined, Hilbert-Schmidt integral operator on L 2 (µ) with kernel k′, and thus has operator norm bounded by A ≤ A HS = k′ L 2 (µ×µ) ≤ k′ C 0 (X×X) . (iii) A is equal to the negative adjoint, −(GV ) * , of the densely defined operator GV : D(V ) → L 2 (µ). Remark. As stated in Section 1, V is an unbounded operator, whose domain is a strict subspace of L 2 (µ). Theorem 4 thus shows that if we regularize this operator by first applying the smoothing operator G, then the resulting operator A = V G is not only bounded, but Hilbert-Schmidt, and thus compact. In essence, this property follows from the C 1 regularity of the kernel. Arguably, the regularization scheme leading to A, which involves first smoothing by application of G, followed by application of V , is among the simplest and most intuitive ways of regularizing V . However, the resulting operator A will generally not be skew-symmetric; in fact, apart from special cases, A will be non-normal. Theorem 5 below provides an alternative regularization approach for V , leading to a Hilbert-Schmidt operator on H which is additionally skew-adjoint. Working with this operator also takes advantage of the RKHS structure, allowing pointwise function evaluation by bounded linear functionals. Theorem 5 (Compactification in RKHS). Let Assumptions 1 and 3 hold, and define k̃′ : M × M → R as the C 0 kernel with k̃′(x, y) = −k′(y, x). Then: (i) The range of K * lies in the domain of V , and V K * : H → L 2 (µ) is a bounded operator. (ii) The operator W from (8) is a well-defined, Hilbert-Schmidt, skew-adjoint, real operator on H, satisfying W f = ∫ M k̃′(·, y)f (y) dµ(y). Remark.
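The algebraic identities in Theorems 4(iii) and 5(ii) have transparent finite-dimensional analogs. In the purely illustrative sketch below (matrices standing in for the operators; not the paper's setting), a symmetric matrix G = K^T K plays the role of the smoothing operator, and a skew-symmetric matrix V plays the role of the generator:

```python
import numpy as np

# Matrix analog: with G = K^T K symmetric ("G = K* K") and V skew-symmetric
# ("the generator"), check A = V G = -(G V)^* and W = K V K^T = -W^T.
rng = np.random.default_rng(2)
n = 7
K = rng.standard_normal((n, n))
G = K.T @ K                       # symmetric positive semidefinite
C = rng.standard_normal((n, n))
V = C - C.T                       # skew-symmetric

A = V @ G
W = K @ V @ K.T

print(np.allclose(A, -(G @ V).T))   # analog of Theorem 4(iii): A = -(GV)^*
print(np.allclose(W, -W.T))         # analog of Theorem 5(ii): W is skew-adjoint
```

Both identities are one-line consequences of transposition: (GV)^T = V^T G = −V G = −A, and (K V K^T)^T = K V^T K^T = −W; the content of the theorems is that the same algebra survives in infinite dimensions, where boundedness and compactness must be established.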
Because W is skew-adjoint, real, and compact, it has the following properties, which we will later use. In the next theorem, we connect the operators A and W through the adjoint of A. Theorem 6 (Post-smoothing). Let Assumptions 1 and 3 hold. Then, the adjoint of −A from (8) is a Hilbert-Schmidt integral operator B : L 2 (µ) → L 2 (µ) with kernel k̃′. In addition: (i) The densely-defined operator GV : D(V ) → L 2 (µ) is bounded, and B is equal to its closure, GV := (GV ) * * . Moreover, B is a closed extension of KW N : D(N ) → L 2 (µ), and if the kernel k is strictly positive-definite, i.e., D(N ) is a dense subspace of L 2 (µ), that extension is unique. (ii) B generates a norm-continuous, 1-parameter group of bounded operators e tB : L 2 (µ) → L 2 (µ), t ∈ R, satisfying K * e tW = e tB K * , K * e tW N = e tB | D(N ) , ∀t ∈ R. Remark. Because V is an unbounded operator, defined on a dense subset D(V ) ⊂ L 2 (µ), the domain of GV is also restricted to D(V ). It is therefore a non-intuitive result that smoothing by G after an application of V could still result in a bounded operator that can be extended to the entire space L 2 (µ). Theorem 6(i) shows that, on the subspace D(N ) ⊂ L 2 (µ), B acts by first performing Nyström extension, then acting by W , then mapping back to L 2 (µ) by inclusion via K * . In other words, B is a natural analog of W acting on L 2 (µ), though note that, unlike W , B is generally not skew-adjoint. To summarize, on the basis of Theorems 4-6, we have obtained the following sequence of operator extensions: KW N ⊆ GV ⊂ B = GV = (GV ) * * . As our final compactification of V , we will construct a skew-adjoint operator Ṽ on L 2 (µ) by conjugation by a compact operator. In particular, since G is positive-semidefinite, it has a square root G 1/2 : L 2 (µ) → L 2 (µ), which is the unique positive-semidefinite operator satisfying G 1/2 G 1/2 = G.
Note that by compactness of G, G^{1/2} is compact, and its action on functions can be conveniently evaluated in an eigenbasis of G. Using this operator, we will show in Theorem 7 below that the operator G^{1/2}V G^{1/2}, defined on the subspace {f ∈ L²(µ) : G^{1/2}f ∈ D(V)}, actually extends to a well-defined compact operator. Theorem 7 (Skew-adjoint compactification). Let Assumptions 1 and 3 hold with r = 1. Then, G^{1/2}V G^{1/2} is a densely defined, bounded operator with a unique extension to a Hilbert-Schmidt, skew-adjoint operator Ṽ : L²(µ) → L²(µ). Moreover, Ṽ is unitarily equivalent to the operator W from Theorem 5; that is, there exists a unitary operator U : L²(µ) → H such that Ṽ = U*W U. This completes the statement of our compactification schemes for V. Since these schemes are all carried out using the same kernel k, one might expect that the spectral properties of the compact operators A, B, Ṽ, and W exhibit non-trivial relationships. These relationships will be made precise in Theorems 8 and 9 below. Before stating these theorems, we introduce the spectral measures appropriate for the spectral decomposition of the non-normal, compact operators A and B. Non-orthogonal projection-valued measures. In what follows, a non-orthogonal projection-valued measure (NPVM) on a Hilbert space H will be a mapping E from the Borel sets of C into the space of linear, bounded maps L(H) with the following properties. (i) E(C) = I_H and E(∅) = 0. (ii) For each S ∈ B(C), E(S) is a projection map, i.e., E(S)E(S) = E(S). (iii) For any disjoint collection S₁, S₂, ... ∈ B(C), E(∪_{j∈N} S_j) = E(S₁) + E(S₂) + ···, where this countable sum of operators is meant to converge in the strong operator topology. (iv) For any S₁, S₂, ... ∈ B(C), E(∩_{j∈N} S_j) = E(S₁)E(S₂) ···. An NPVM E will be called a spectral measure for a (possibly unbounded) operator T : D(T) → H, D(T) ⊆ H, if ⟨f, T g⟩_H = ∫_C z dE_{fg}(z) for every f ∈ H and g ∈ D(T),
where E_{fg} : B(C) → C is the complex-valued Borel measure given by E_{fg}(S) = ⟨f, E(S)g⟩_H. Note that E(S) is not necessarily an orthogonal projection. Spectral measures are known to exist uniquely for normal operators. In Theorem 8, we construct spectral measures for families of skew-adjoint, as well as non-normal, compact operators. It is well known [52] that if T is a normal operator, then the support of E equals σ(T). The spectral measure provides a decomposition of the operator T, and also provides a way to define its functional calculus, as shown in (1) and (2). Theorem 8 (Spectra of the compactified generators). Let Assumptions 1 and 3 hold with r = 1, and assume further that the kernel k is strictly positive-definite. Let also {z̃₀, z̃₁, ...} be an orthonormal basis of L²(µ), consisting of eigenfunctions z̃_j of Ṽ corresponding to purely imaginary eigenvalues iω_j. Then: (i) A, B, and W have the same spectra as Ṽ, including multiplicities of eigenvalues. In addition, if the kernel k is Markov: (ii) 0 is a simple eigenvalue of A, B, Ṽ, and W, and the corresponding eigenfunctions are constant. (iii) Every eigenfunction z̃_j lies in the range of G^{1/2}, and the functions z_j := G^{−1/2}z̃_j are eigenfunctions of A at eigenvalue iω_j, forming an unconditional Schauder basis of L²(µ). (iv) The functions z′_j := G^{1/2}z̃_j are eigenfunctions of B at eigenvalue iω_j, forming an unconditional Schauder basis of L²(µ) dual to {z_j}, i.e., ⟨z_j, z′_k⟩_{L²(µ)} = δ_{jk}. (v) The functions ζ_j := Kz_j form an orthonormal set in H consisting of eigenfunctions of W at eigenvalue iω_j. (vi) A, B, Ṽ, and W admit the purely atomic spectral measures E, E′, Ẽ : B(C) → L(L²(µ)) and E_H : B(C) → L(H), respectively, given by E(U) = Σ_{j:ω_j∈U} ⟨z′_j, ·⟩_{L²(µ)} z_j, E′(U) = Σ_{j:ω_j∈U} ⟨z_j, ·⟩_{L²(µ)} z′_j, Ẽ(U) = Σ_{j:ω_j∈U} ⟨z̃_j, ·⟩_{L²(µ)} z̃_j, E_H(U) = Σ_{j:ω_j∈U} ⟨ζ_j, ·⟩_H ζ_j. Remark. (i) The compactness of Ṽ and W allows for simple expressions for the functional calculi of these operators. For instance, for every Borel-measurable function Z : iR → C, we have Z(W) = ∫_R Z(iω) dE_H(ω) = Σ_{j=0}^∞ Z(iω_j) ⟨ζ_j, ·⟩_H ζ_j, and analogous relationships hold for Z(A), Z(B), and Z(Ṽ). (ii) Because the operators A and B are generally non-normal and have non-orthogonal eigenspaces, the corresponding projection-valued measures, respectively E and E′, may take values in the set of non-orthogonal projections on L²(µ). While a non-orthogonal projection may be an unbounded operator, in this case E and E′ always result in bounded projection operators.
(iii) The Markovianity assumption on the kernel was important to conclude that A, B,Ṽ , and W have finite-dimensional nullspaces (which may not be the case for a general compact operator), allowing us to establish a one-to-one correspondence of the spectra of these operators, including eigenvalue multiplicities. The results in Theorems 4-8 are for compactifications based on general kernels satisfying Assumptions 1 and 3 and their associated integral operators. Next, we establish spectral convergence results for oneparameter families of kernels that include the kernels p τ associated with the Markov semigroups in our main result, Theorem 2. Specifically, we assume: Assumption 4. {k τ : M × M → R} with τ > 0 is a one-parameter family of C 1 , symmetric, strictly positive-definite kernels, such that, as τ → 0 + , the sequence of the corresponding compact operators G τ = K * τ K τ on L 2 (µ) converges strongly to the identity, and the sequence of skew-adjoint compactified generators V τ ⊇ G 1/2 τ V G 1/2 τ converges strongly to V on the subspace D(V 2 ) ⊂ D(V ). Under this assumption, we establish the following notion of spectral convergence for approximations of the generator V by compact operators. Theorem 9 (Spectral convergence). Suppose that Assumptions 1 and 4 hold with r = 1, and let W τ : H τ → H τ and B τ : L 2 (µ) → L 2 (µ), with τ > 0, be the Hilbert-Schmidt operators constructed via (8) and Theorem 6, respectively, applied for the kernels k τ from Assumption 4. Let also E τ and E τ be the (purely atomic) spectral measures of B τ and W τ , respectively, constructed as in Theorem 8. Then: (i) As τ → 0 + , the operators A τ and B τ converges strongly to V on D(V ). (ii) For every bounded continuous function Z : iR → C and as τ → 0 + , Z(A τ ), Z(B τ ) and K * τ Z(W τ )N τ converge to Z(V ), in the strong operator topology of D(Z(V )). 
(iii) For every bounded Borel measurable set U ⊂ R such that E(∂U ) = 0, and as τ → 0 + , E τ (U ), E τ (U ) and K * τ E τ (U )N τ converge strongly to E(U ). (iv) For every element of the spectrum iω of the generator V , there exists a sequence of eigenvalues iω τ of B τ (and A τ , W τ ) depending continuously on τ , and converging to iω as τ → 0 + . Note that Theorem 9 makes several of the statements of our main result, Theorem 2. In Section 5, we will prove that theorem by invoking Theorems 4-9 for the family of Markov kernels p τ . Results from functional analysis and analysis on manifolds In this section, we review some basic concepts from RKHS theory, spectral approximation of operators, and analysis on manifolds that will be useful in our proofs of the theorems stated in Sections 2 and 3. ψ j = λ −1/2 j Kφ j , j ∈ J,(9) where {φ 0 , φ 1 , . . .} is an orthonormal set in L 2 (ν) consisting of eigenfunctions of G = K * K, corresponding to strictly positive eigenvalues λ 0 ≥ λ 1 ≥ · · · , and J = {j ∈ N 0 : λ j > 0}, we define D(N ) =    j∈J a j φ j : j∈J |a j | 2 /λ j < ∞    , N   j∈J a j φ j   := j∈J a j λ −1/2 j ψ j .(10) It follows directly from these definitions that {ψ j } j∈J is an orthormal set in H satisfying K * ψ j = λ 1/2 j φ j , and N is a closed-range, closed operator with ran N = ran K = span{ψ j } j∈J . Moreover, K * N and N K * reduce to the identity operators on D(N ) and ran N , respectively. In fact, upon restriction to S, ran N coincides with the RKHS H(S), and {ψ j | S } j∈J forms an orthonormal basis of the latter space. If, in addition, the kernel k is strictly positive definite, as we frequently require in this paper, then D(N ) is a dense subspace of L 2 (ν), and K * coincides with the pseudoinverse of N , defined as the unique bounded operator N † : H → L 2 (µ) satisfying (i) ker N † = ran N ⊥ ; (ii) ran N † = ker N ⊥ ; and (iii) N N † f = f , for all f ∈ ran N . 
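The Nyström extension just described can be sketched numerically. The code below is a hedged sketch with our own naming (kernel, bandwidth, and sample layout are illustrative assumptions): it eigendecomposes the matrix of G = K*K with respect to a discrete sampling measure ν = (1/n) Σ_i δ_{x_i}, extends an eigenfunction φ_j off the samples via ψ_j = λ_j^{−1/2} Kφ_j, and verifies that K*N acts as the identity on D(N), i.e., re-evaluating the extension on the samples recovers φ_j.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
x = np.sort(rng.uniform(0, 1, n))
k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / 0.1)  # assumed Gaussian kernel

Gmat = k(x, x) / n                    # matrix of G = K*K on L^2(nu)
lam, Phi = np.linalg.eigh(Gmat)
lam, Phi = lam[::-1], Phi[:, ::-1]    # decreasing eigenvalues
Phi = Phi * np.sqrt(n)                # normalize so (1/n) sum_i phi_j(x_i)^2 = 1

def nystrom(j, y):
    # N phi_j = lambda_j^{-1/2} psi_j evaluates pointwise as
    # lambda_j^{-1} (1/n) sum_i k(y, x_i) phi_j(x_i).
    return (k(np.atleast_1d(y), x) @ Phi[:, j]) / (n * lam[j])

# K* N is the identity on D(N): extending phi_j and re-sampling recovers phi_j.
j = 2
recovered = nystrom(j, x)
assert np.allclose(recovered, Phi[:, j], atol=1e-6)
```

The same routine evaluates the extension at arbitrary out-of-sample points `y`, which is how pointwise evaluation enters the data-driven schemes discussed later.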
Note that we have described the Nyström extension for the L² space associated with an arbitrary compactly supported Borel measure ν, since later on we will be interested in applying this procedure not only to the invariant measure µ of the system, but also to discrete sampling measures encountered in data-driven approximation schemes. Strong resolvent convergence. In order to prove the various spectral convergence claims made in Sections 2 and 3, we need appropriate notions of convergence for operators approximating the generator V that imply spectral convergence. Clearly, because V is unbounded, it is not possible to employ convergence in operator norm for that purpose. In fact, for the approximations studied here, even strong convergence on the domain of V appears difficult to verify. For example, in an approximation of V by Ṽ_τ = G_τ^{1/2}V G_τ^{1/2}, even though G_τ^{1/2}f converges to f as τ → 0⁺ for every f ∈ D(V), V G_τ^{1/2}f may not converge to V f, as V is unbounded. On the other hand, for every f ∈ D(V) and g ∈ L²(µ), we have ⟨g, (B_τ − V)f⟩_µ = ⟨g, (G_τ − I)V f⟩_µ → 0, which shows that B_τ converges to V weakly, but this type of convergence turns out to be too weak for the types of spectral convergence we seek to establish. Instead, for our purposes, it will be sufficient to establish convergence in the strong resolvent sense (e.g., [53]), which sits between weak and strong convergence, and is sufficient to establish our spectral convergence claims. To wit, let T : D(T) → H be a closed operator on a Hilbert space H, and consider a sequence of operators T_τ : D(T_τ) → H indexed by a parameter τ > 0. The sequence T_τ is said to converge to T as τ → 0⁺ in the strong resolvent sense if, for every complex number ρ in the resolvent set of T not lying on the imaginary line, the resolvents (ρ − T_τ)^{−1} converge to (ρ − T)^{−1} strongly.
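Strong resolvent convergence can be illustrated in a diagonal toy model (our own construction, not an operator from the paper): take T = diag(iω_j) with ω_j = j, unbounded as j grows, and the mollified family T_τ = diag(iω_j e^{−τω_j²}). For ρ = 1, which lies off the imaginary axis, and a fixed vector φ with decaying coefficients, the resolvents applied to φ converge as τ → 0⁺.

```python
import numpy as np

j = np.arange(0, 200)
omega = j.astype(float)
phi = 1.0 / (1.0 + j)**2          # fixed vector with summable-square coefficients
rho = 1.0

# (rho - T)^{-1} phi in coordinates, for a diagonal skew-adjoint T = diag(i w_j).
res = lambda w: phi / (rho - 1j * w)

err = lambda tau: np.linalg.norm(res(omega * np.exp(-tau * omega**2)) - res(omega))
assert err(1e-3) < err(1e-1)      # resolvent error shrinks along tau -> 0+
assert err(1e-4) < 1e-2
```

Note that T_τ does not converge to T in operator norm here (the high modes are always strongly distorted), yet the resolvents converge on each fixed vector, which is exactly the strong resolvent notion.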
Following [54], we will say that the sequence T_τ is p2-continuous if every T_τ is bounded, and the function τ → P₂(iT_τ) is continuous for all quadratic polynomials P₂ with real coefficients. In the case of skew-adjoint operators, which are necessarily densely defined and closed, strong resolvent convergence implies the following convergence results for spectra and Borel functional calculi. Proposition 10. Let T_τ : D(T_τ) → H, τ > 0, be a family of skew-adjoint operators converging in strong resolvent sense, as τ → 0⁺, to a skew-adjoint operator T : D(T) → H, and let Θ_τ and Θ denote the corresponding spectral measures. Then: (i) For every bounded, continuous function Z : iR → C, Z(T_τ) converges strongly to Z(T). (ii) For every bounded interval J ⊂ R and every φ ∈ L²(µ), limsup_{τ→0⁺} ‖1_J(T_τ)φ‖_µ ≤ ‖1_J(T)φ‖_µ. (iii) For every bounded, Borel-measurable set U such that Θ(∂U) = 0, Θ_τ(U) converges strongly to Θ(U). (iv) For every bounded, Borel-measurable function Z : iR → C of bounded support, Z(T_τ) converges strongly to Z(T), provided that Θ(S) = 0, where S ⊂ R is a closed set such that iS contains the discontinuities of Z. (v) If T is bounded, then (iii) holds for every Borel-measurable set U ⊆ R, and (iv) holds for every bounded, Borel-measurable function Z : iR → C. (vi) If the operators T_τ are compact, then for every element θ ∈ iR of the spectrum of T, there exists a one-parameter family θ_τ ∈ iR of eigenvalues of T_τ such that lim_{τ→0⁺} θ_τ = θ. Moreover, if the sequence T_τ is p2-continuous, the curve τ → θ_τ is continuous. Proof. Claims (i) and (ii) follow from standard arguments based on the functional calculus; for Claim (ii), for any continuous f : R → [0, 1] with 1_J ≤ f, one has ‖1_J(T_τ)φ‖_µ ≤ ‖f(T_τ)φ‖_µ, so that by Claim (i), limsup_{τ→0⁺} ‖1_J(T_τ)φ‖_µ ≤ limsup_{τ→0⁺} ‖f(T_τ)φ‖_µ = ‖f(T)φ‖_µ, and taking the infimum over such f yields the claim. We will now prove Claim (iii). For every subset A ⊆ R, define I_A : R → R, I_A(x) := x 1_A(x). Note that if A is a bounded, Borel-measurable set, then I_A is a bounded, Borel-measurable function. For any operator T, the operator I_A(T) is the spectral truncation of T to the set A. Moreover, if T is self-adjoint, then so is I_A(T). Thus, I_A(T_τ) and I_A(T) are bounded, skew-adjoint operators for every τ > 0 and bounded Borel set A ⊂ R. Next, assume for simplicity that U = [a, b], where a ≤ b are not eigenvalues of T. For every w > 0, one can construct a continuous function f_w : R → R such that f_w(x) = x for x ∈ [a, b], f_w equals 0 outside [a − w, b + w], and f_w is linear on the intervals [a − w, a] and [b, b + w].
By Claim (i), lim_{τ→0⁺} f_w(T_τ) = f_w(T). The operators f_w(T_τ) and f_w(T) are bounded and skew-adjoint; therefore, by Claim (v), for every bounded, measurable g : R → R, lim_{τ→0⁺} (g ∘ f_w)(T_τ) = lim_{τ→0⁺} g(f_w(T_τ)) = g(f_w(T)) = (g ∘ f_w)(T). Now take g = 1_U, the indicator function on U. Then, g = 1_U ⟹ g ∘ f_w = 1_U + 1_{J_w}, where J_w := [b, b + w] ∩ f_w^{−1}(U). Substituting this identity into the strong operator limit, and re-arranging, gives: for every w > 0, lim_{τ→0⁺} [Θ_τ(U) + Θ_τ(J_w) − Θ(J_w)] = Θ(U). (11) Here we have used the fact that for any Borel set A ⊂ R, Θ(A) = 1_A(T), and a similar fact for T_τ. The operator Θ(J_w) is the spectral projection onto a subspace H_w ⊆ H. Since Θ(∂U) = Θ({a, b}) = 0 and ∩_{w>0} J_w = {b}, we have ∩_{w>0} H_w = {0}. As a result, the spaces H_w^⊥ form an increasing family of subspaces with H = ∪_{w>0} H_w^⊥. Thus, to prove that Θ_τ(U) converges strongly to Θ(U), it is enough to prove it on H_{w₀}^⊥ for every fixed w₀ > 0. So let w₀ > 0 be fixed, and φ ∈ H_{w₀}^⊥. Then, for every 0 < w < w₀, by construction, Θ(J_w)φ = 0. Moreover, by Claim (ii), lim_{τ→0⁺} Θ_τ(J_w)φ = 0. Thus, substituting this into (11) gives Θ(U)φ = lim_{τ→0⁺} [Θ_τ(U) + Θ_τ(J_w) − Θ(J_w)]φ = lim_{τ→0⁺} Θ_τ(U)φ + lim_{τ→0⁺} Θ_τ(J_w)φ − Θ(J_w)φ = lim_{τ→0⁺} Θ_τ(U)φ. This completes the proof of Claim (iii). It is a standard result from analysis that the functional calculus for unbounded skew-adjoint operators is continuous as a map from L^∞(iR) to L(H) in the operator norm topology; see, for example, Corollary 4.43 of [52]. Thus, the convergence established in Claim (iii) can be extended to the L^∞ closure of the space of simple functions with bounded support. This is precisely the set of bounded measurable functions with bounded support, thus proving Claim (iv) and the proposition. Proposition 10 lays the foundation for many of the spectral convergence results in Theorem 9, and thus Theorem 2.
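As a toy illustration of the kind of eigenvalue convergence these results deliver, consider the circle with V = d/dθ, whose eigenvalues on the Fourier modes are iω_n = in, smoothed by a heat-semigroup family acting on the n-th mode by e^{−τn²} (an assumed, standard choice of kernel family, not the paper's p_τ). The compactified generator then has eigenvalues iω_n^{(τ)} = i n e^{−τn²}, which recover in as τ → 0⁺:

```python
import numpy as np

n = np.arange(-8, 9)                           # Fourier wavenumbers on the circle

def omega_tau(tau):
    # Eigenfrequencies of G_tau V on the n-th Fourier mode, for the assumed
    # heat-kernel smoothing with mode-wise factor exp(-tau n^2).
    return n * np.exp(-tau * n**2)

errs = [np.max(np.abs(omega_tau(t) - n)) for t in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]             # monotone improvement along tau -> 0+
assert errs[2] < 0.1 * np.max(np.abs(n))       # eigenfrequencies nearly recovered
```

Note the trade-off visible even in this sketch: for fixed τ the high wavenumbers are strongly damped, so recovering a given eigenfrequency iω requires taking τ small relative to ω².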
It also highlights, through Claims (iii) and (iv), the restrictions on the functional calculus and on spectral convergence imposed by the unboundedness of V. Yet, despite the usefulness of the results stated in Proposition 10, the basic assumption made, namely that T_τ converges to T in strong resolvent sense, is oftentimes difficult to verify explicitly. Fortunately, in the case of the skew-adjoint operators of interest here, there exist sufficient conditions for strong resolvent convergence which are easier to verify. Before stating these conditions, we recall that a core for a closed operator T : D(T) → H on a Hilbert space H is any subspace C ⊆ D(T) such that T is the closure of the restricted operator T|_C. In other words, the closure of the graph of T|_C, as a subset of H × H, is the graph of T. Note that T may not have a unique core. We also introduce the notion of convergence in the strong dynamical sense [53]. Specifically, a sequence T_τ : D(T_τ) → H, τ > 0, of skew-adjoint operators is said to converge to T : D(T) → H as τ → 0⁺ in the strong dynamical sense if e^{tT_τ} converges strongly to e^{tT} for every t ∈ R. Note that in the case of the operators Ṽ_τ from (4) approximating the generator V, strong dynamical convergence means that the unitary operators e^{tṼ_τ} converge strongly to the Koopman operator U^t = e^{tV} for every time t ∈ R. Lemma 11. Let T_τ : D(T_τ) → H and T : D(T) → H be the skew-adjoint operators from Proposition 10. Then, the following hold: (i) The domain D(T²) of the operator T² is a core for T. (ii) If T_τ converges pointwise to T on a core of T, then it also converges in strong resolvent sense. (iii) Strong resolvent convergence of T_τ to T is equivalent to strong dynamical convergence. Proof. Claim (i) follows from Theorem 5 of [56]. Claims (ii) and (iii) follow from Propositions 10.1.18 and 10.1.8, respectively, of [53]. There the statements are for self-adjoint operators, but they apply to skew-adjoint operators as well.
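Strong dynamical convergence (Lemma 11(iii)) can be seen in the same diagonal toy model used above for strong resolvent convergence (again our own construction): for T_τ = diag(iω_j e^{−τω_j²}) and a fixed vector φ, the evolved vectors e^{tT_τ}φ approach e^{tT}φ as τ → 0⁺, even though the operators do not converge in norm.

```python
import numpy as np

j = np.arange(0, 200)
omega = j.astype(float)
phi = 1.0 / (1.0 + j)**2
t = 2.0

# e^{tT} phi in coordinates, for diagonal skew-adjoint T = diag(i w_j):
evolve = lambda w: np.exp(1j * t * w) * phi

err = lambda tau: np.linalg.norm(evolve(omega * np.exp(-tau * omega**2)) - evolve(omega))
assert err(1e-4) < err(1e-2) < err(1.0)   # strong dynamical convergence along tau -> 0+
```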
Results from analysis on manifolds. For the remainder of this section, we state a number of standard results from analysis on manifolds that will be used in the proofs presented in Sections 5 and 7. In what follows, we consider that M is a C^r compact manifold, equipped with an arbitrary C^{r−1} Riemannian metric (e.g., a metric induced from the ambient space M, or the embedding F : M → Y into the data space Y from Section 8), and an associated covariant derivative operator ∇. Lemma 12. Let Ξ ∈ C⁰(M; TM) be a continuous vector field on M. Then, for every f ∈ C¹(M), ‖Ξf‖_{C⁰(M)} ≤ ‖Ξ‖_{C⁰(M;TM)} ‖f‖_{C¹(M)}. Proof. Denoting the gradient operator associated with the Riemannian metric on M by grad, the claim follows by an application of the Cauchy-Schwarz inequality for the Riemannian inner product, viz. ‖Ξf‖_{C⁰(M)} = ‖Ξ · grad f‖_{C⁰(M)} ≤ ‖Ξ‖_{C⁰(M;TM)} ‖grad f‖_{C⁰(M;TM)} = ‖Ξ‖_{C⁰(M;TM)} ‖∇f‖_{C⁰(M;T*M)} ≤ ‖Ξ‖_{C⁰(M;TM)} ‖f‖_{C¹(M)}. In particular, under Assumption 1, the dynamical flow Φᵗ on M is generated by a vector field V⃗ ∈ C⁰(M; TM), for which Lemma 12 applies. This vector field is related to the generator V by a conjugacy with the inclusion maps ι : C⁰(M) → L²(µ) and ι₁ : C¹(M) → L²(µ), namely, ι V⃗ = V ι₁. The following is a well-known result from analysis, and the proof is left to the reader. Lemma 13 (C¹ convergence theorem). Let M be a compact, connected, C¹ manifold equipped with a C⁰ Riemannian metric. Let also f_j be a sequence of tensor fields in C¹(M; T*ⁿM), such that the sequence {‖∇f_j‖_{C⁰(M;T*(n+1)M)}}_{j∈N} is summable. Then, if there exists x ∈ M such that the series F_x := Σ_{j∈N} f_j(x) converges in Riemannian norm, the series Σ_{j∈N} f_j converges uniformly to a tensor field F ∈ C¹(M; T*ⁿM) such that F(x) = F_x. This lemma leads to the following C^r convergence result for functions, which will be useful for establishing the smoothness of kernels constructed as infinite sums of C^r eigenfunctions. Lemma 14. Let M be a compact, connected, C^r manifold with r ≥ 1, equipped with a C^{r−1} Riemannian metric.
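The bound of Lemma 12 can be checked in the simplest setting, the flat circle, where Ξ = c·d/dθ, grad f = f′ ∂/∂θ, and we take the C¹ norm to be the maximum of the C⁰ norms of f and f′ (a convention we assume here; the constant is immaterial to the argument):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 2001)
c = 2.0                                             # constant vector field Xi = c d/dtheta
f = np.sin(3 * theta) + 0.5 * np.cos(theta)
fp = 3 * np.cos(3 * theta) - 0.5 * np.sin(theta)    # f' computed analytically

lhs = np.max(np.abs(c * fp))                          # ||Xi f||_{C^0}
rhs = c * max(np.max(np.abs(f)), np.max(np.abs(fp)))  # ||Xi||_{C^0} ||f||_{C^1}
assert lhs <= rhs + 1e-12
```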
Suppose that f j : M → R is a sequence of real-valued C r (M ) functions such that the sequence { f j C r (M ) } j∈N is summable, and there exists x ∈ M such that the series F x = ∞ j=0 f j (x) converges. Then, the series ∞ j=0 f j converges absolutely and in C r (M ) norm to a C r function F , such that F (x) = F x . Proof. We will prove this lemma by induction over q ∈ {1, . . . , r}, invoking Lemma 13 as needed. First, note that summability of { f j C r (M ) } j∈N implies summability of { ∇ q f j C 0 (M ;T * q M ) } j∈N } for all q ∈ {1, . . . , r}. Because of this, and the fact that j∈N f j (x) converges, it follows from Lemma 13 that j∈N f j converges in C 1 norm to some C 1 function F . This establishes the base case for the induction (q = 1). Now suppose that it has been shown that j∈N f j converges to F in C q (M ) norm for 1 < q < r. In that case, j∈N ∇ q f j (x) converges, and by summability of { ∇ q+1 f j C 0 (M ;T * (q+1) M ) } j∈N , it follows from Lemma 13 that ∇ q F = j∈N ∇ q f j converges in C 1 (M ; T * q M ) norm. Thus, ∇ q+1 F = j∈N ∇ q+1 f j converges in C 0 (M ; T * (q+1) M ) norm, which in turn implies that j∈N f j converges to F in C q+1 (M ) norm, and the lemma is proved by induction. Proof of Theorems 4-7 Proof of Theorem 4. By Assumption 3, H is a subspace of C 1 (M ), and therefore for every f ∈ H, K * f = ι 1 f , where ι 1 is the C 1 (M ) → L 2 (µ) inclusion map. Claim (i) then follows from the facts that ran ι 1 ⊂ D(V ), and K is bounded. To prove Claim (ii), let K : L 2 (µ) → C 0 (M ) be the kernel integral operator associated with the continuous kernel k , and ι the C 0 (M ) → L 2 (µ) inclusion map. Because ιK is a Hilbert-Schmidt integral operator on L 2 (µ), with operator norm bounded above by its Hilbert-Schmidt norm, ιK k L 2 (µ×µ) ≤ k C 0 (X×X) , the claim will follow if it can be shown that ιK = V G. To that end, note that for every f ∈ L 2 (µ) and x ∈ M we have K f (x) = k (x, ·), f µ . 
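The operator K′ with kernel k′(x, y) = V k(·, y)(x), just used to express K′f(x) = ⟨k′(x, ·), f⟩_µ, can be compared against a direct derivative of Kf along the flow in a simple example: the circle with the rotation flow Φᵗ(x) = x + t (so that V acts as d/dx) and an assumed smooth kernel k(x, y) = e^{cos(x−y)}. All names below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 200
y = rng.uniform(0, 2 * np.pi, m)      # samples of the invariant measure on the circle
f = rng.standard_normal(m)            # values of an L^2(mu) observable on the samples

k      = lambda x, yy: np.exp(np.cos(x - yy))                     # assumed C^1 kernel
kprime = lambda x, yy: -np.sin(x - yy) * np.exp(np.cos(x - yy))   # dk/dx = V k(., y)(x)

Kf      = lambda x: np.mean(k(x, y) * f)       # (Kf)(x) = int k(x, .) f dmu (Monte Carlo)
Kprimef = lambda x: np.mean(kprime(x, y) * f)  # (K'f)(x) = <k'(x, .), f>_mu

x0, t = 0.3, 1e-6
finite_diff = (Kf(x0 + t) - Kf(x0)) / t        # (V Kf)(x0) along the flow
assert abs(finite_diff - Kprimef(x0)) < 1e-4   # V K = K' at x0, to finite-difference error
```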
Now because k lies in C¹(M × M), for every x ∈ M the function k′(x, ·) is the C⁰(M) limit k′(x, ·) = lim_{t→0} g_t, where g_t = (k(Φᵗ(x), ·) − k(x, ·))/t, and by continuity of inner products, K′f(x) = ⟨k′(x, ·), f⟩_µ = ⟨lim_{t→0} g_t, f⟩_µ = lim_{t→0} ⟨g_t, f⟩_µ = lim_{t→0} (1/t)[⟨k(Φᵗ(x), ·), f⟩_µ − ⟨k(x, ·), f⟩_µ] = V Kf(x). Therefore, because ran K ⊂ C¹(M), for any f ∈ L²(µ) we have ιK′f = ι V⃗ Kf = V ι₁ Kf = V K*Kf = V Gf, proving Claim (ii). Finally, to prove Claim (iii), note that by definition, D((GV)*) := {f ∈ L²(µ) : ∃! h ∈ L²(µ) such that ∀g ∈ D(V), ⟨f, GV g⟩_µ = ⟨h, g⟩_µ}, (GV)*f := h. We will now use this definition to show that (GV)* = −V G = −A. Indeed, for every f ∈ D(A) = L²(µ) and every g ∈ D(V), setting h = −Af, we obtain ⟨h, g⟩_µ = −⟨Af, g⟩_µ = −⟨V Gf, g⟩_µ = ⟨Gf, V g⟩_µ = ⟨f, GV g⟩_µ. This satisfies the definition of (GV)*, proving the claim and the theorem. Proof of Theorem 5. We begin with the proof of Claim (i). The inclusion ran K* ⊂ D(V) holds because H is a subspace of C¹(M). To prove that V K* is bounded, we use Lemma 12 and the fact that the inclusion map ι_H : H → C¹(M) is bounded [see Propositions 6.1-6.2 of [37]], and compute, for f ∈ H, ‖V K*f‖_{L²(µ)} = ‖ι V⃗ f‖_{L²(µ)} ≤ ‖V⃗ f‖_{C⁰(M)} ≤ ‖V⃗‖_{C⁰(M;TM)} ‖f‖_{C¹(M)} ≤ ‖V⃗‖_{C⁰(M;TM)} ‖ι_H‖ ‖f‖_H, proving that V K* is bounded and completing the proof of Claim (i). Turning to Claim (ii), that W is compact follows from the fact that it is a composition of a compact operator, K, with a bounded operator, V K*. Moreover, W is skew-symmetric by skew-adjointness of V, and thus skew-adjoint because it is bounded. W is also real because K and V are real operators. It thus remains to verify the integral formula for W f stated in the theorem. For that, note that it follows from the Leibniz rule for vector fields and the fact that k lies in C¹(M × M) that for every f ∈ C¹(M) and x ∈ X, k(x, ·) V⃗f = V⃗(k(x, ·)f) − (V⃗k(x, ·))f = V⃗(k(x, ·)f) + k̃′(x, ·)f. Moreover, ∫_M V⃗(k(x, ·)f) dµ = ⟨1_M, V(k(x, ·)f)⟩_µ vanishes by skew-adjointness of V. Using these results, we obtain W f(x) = KV K*f(x) = KV ι₁f(x) = K V⃗f(x) = ∫_M k(x, ·) V⃗f dµ = ∫_M k̃′(x, ·) f dµ. Proof of Theorem 6. That B = −A* is a Hilbert-Schmidt integral operator with kernel k̃′ follows from standard properties of integral operators. Next, to prove Claim (i), note that GV is bounded as it has a bounded adjoint, (GV)* = −A, by Theorem 4, and therefore has a unique closed extension GV̄ : L²(µ) → L²(µ) equal to (GV)**. In order to verify that GV̄ = B, it suffices to show that GV f = Bf for all f in any dense subspace of D(V); in particular, we can choose the subspace ι₁C¹(M). For any observable ι₁f in this subspace, we have Bf = ιK̃′f and GV f = ι₁K V⃗f, where K̃′ : L²(µ) → C⁰(M) is the integral operator with kernel k̃′, defined analogously to the operator K′ in the proof of Theorem 4. Employing the Leibniz rule as in the proof of Theorem 5, it is straightforward to verify that Bf is indeed equal to GV f, proving that B is the unique closed extension of GV. Next, to show that B is also an extension of K*W N, it suffices to show that GV̄ ⊇ K*W N. To that end, note first that K*W N is a well-defined operator by Theorem 5, and thus, substituting the definition for W in (8), and using the fact that K*N is the identity on D(N), we obtain K*W N = GV K*N = GV|_{D(N)}. This shows that K*W N ⊆ GV ⊆ B, confirming that B is a closed extension of K*W N. If k is strictly positive-definite, then D(N) is dense, and by the bounded linear transformation theorem, B is the unique closed extension of K*W N. This completes the proof of Claim (i). Next, to prove Claim (ii), note that because B is compact, the Taylor series e^{tB} = Σ_{n=0}^∞ (tB)ⁿ/n! converges in operator norm for every t ∈ R, and the set {e^{tB}}_{t∈R} clearly forms a group under composition of operators. This group is norm-continuous by boundedness of B.
Similarly, we have e^{tW} = Σ_{n=0}^∞ (tW)ⁿ/n! in operator norm, and observing that for every n ∈ N₀, K*Wⁿ = BⁿK*, we arrive at the claimed identity, K*e^{tW} = Σ_{n=0}^∞ (tⁿ/n!) K*Wⁿ = Σ_{n=0}^∞ (tⁿ/n!) BⁿK* = e^{tB}K*. The identity K*e^{tW}N = e^{tB}|_{D(N)} then follows from the fact that K*N is the identity on D(N). Proof of Theorem 7. Let {φ_j}_{j=0}^∞ be an orthonormal basis of L²(µ) consisting of eigenfunctions φ_j of G corresponding to eigenvalues λ_j, ordered in decreasing order. Let also {ψ_j}_{j=0}^∞ be an orthonormal basis of H, whose first J elements are given by (9) (with some abuse of notation, as J may be infinite). Let U : L²(µ) → H be the unitary operator mapping φ_j to ψ_j. To prove the theorem, it suffices to show that G^{1/2}V G^{1/2} is well-defined on a dense subspace of L²(µ), and that on that subspace, G^{1/2}V G^{1/2} and U*W U are equal. To verify that G^{1/2}V G^{1/2} is densely defined, note first that G^{1/2}φ_j trivially vanishes for j ∉ J, and therefore G^{1/2}V G^{1/2}φ_j is well-defined and vanishes too. Moreover, if j ∈ J, then G^{1/2}φ_j = K*ψ_j, and G^{1/2}V G^{1/2}φ_j is again well-defined since ran K* ⊂ D(V). As a result, the domain of G^{1/2}V G^{1/2} contains all linear combinations of the φ_j with j ∉ J, and all finite linear combinations with j ∈ J, and is therefore a dense subspace of L²(µ). Next, to show that U*W U and G^{1/2}V G^{1/2} are equal on this subspace, it suffices to show that they have the same matrix elements in the {φ_j} basis of L²(µ), i.e., that ⟨φ_i, G^{1/2}V G^{1/2}φ_j⟩_{L²(µ)} is equal to ⟨φ_i, U*W Uφ_j⟩_{L²(µ)} for all i, j ∈ N₀. To verify this, note first that Uφ_j = ψ_j for any j ∈ N₀, but because ker K* = ran K^⊥, K*ψ_j and therefore K*Uφ_j vanish when j ∉ J. Because G^{1/2}φ_j also vanishes in this case, we deduce that if either of i and j does not lie in J, the matrix elements ⟨φ_i, U*W Uφ_j⟩_µ and ⟨φ_i, G^{1/2}V G^{1/2}φ_j⟩_µ both vanish.
On the other hand, if i, j ∈ J, we have φ i , U * W Uφ j µ = ψ i , W ψ j H = K * ψ i , V K * ψ j µ = λ −1/2 i K * Kφ i , λ −1/2 j K * Kφ j µ = G 1/2 φ i , V G 1/2 φ j µ = φ i , G 1/2 V G 1/2 φ j µ . We have thus shown that U * W U and G 1/2 V G 1/2 have the same matrix elements in an orthonormal basis of L 2 (µ), and because the former operator is defined on the whole of L 2 (µ) and the latter is densely defined, this implies thatṼ = U * W U is the unique closed extension of G 1/2 V G 1/2 . ThatṼ is skew-adjoint and Hilbert-Schmidt follows immediately. Proof of Theorems 8 and 9 We will need the following lemma, describing how to convert between eigenfunctions of the operators A, B,Ṽ , and W . The proof follows directly from the definitions of these operators, so the details will be ommitted. Lemma 15. Let Assumptions 1 and 3 hold with r = 1. Then, (i) If ζ ∈ H is an eigenfunction of W at eigenvalue iω, then K * ζ is an eigenfunction of B at eigenvalue iω. (ii) z is an eigenfunction of A at eigenvalue iω iff Kz is an eigenfunction of W at eigenvalue iω. (iii) If z ∈ L 2 (µ) is an eigenfunction of A with at eigenvalue iω, then G 1/2 z is an eigenfunction ofṼ at eigenvalue iω. (iv) Ifz is an eigenfunction ofṼ at eigenvalue iω, then G 1/2z is an eigenfunction of B at eigenvalue iω. Proof of Theorem 8. In what follows, σ a (T ) will denote the set of eigenvalues of a linear operator T , including multiplicities. Starting from Claim (i), because all of A, B,Ṽ , and W are compact operators, in order to verify equality of their spectra, it is sufficient to show that σ a (A), σ a (B), σ a (Ṽ ), and σ a (W ) are equal. First, note that σ a (W ) = σ a (Ṽ ) follows from the fact that W andṼ are unitarily equivalent. Moreover, by Lemma 15, σ a (A) ⊆ σ a (W ) ⊂ iR, and because A is a real operator, it follows that σ a (A) is symmetric about the origin of the imaginary line iR, so that σ a (A) = −σ a (A) = −σ a (A) * = −σ a (A * ) = −σ a (−B) = σ a (B). 
Thus, the equality of σ_a(A), σ_a(B), σ_a(Ṽ), and σ_a(W) will follow if it can be shown that σ_a(A) = σ_a(Ṽ). Indeed, it follows from Lemmas 15(iii) and 15(iv) that σ_a(A) ⊆ σ_a(Ṽ) and σ_a(Ṽ) ⊆ σ_a(B). These relationships, together with the fact that σ_a(A) = σ_a(B), imply that σ_a(A) = σ_a(Ṽ), and thus σ(A) = σ(B) = σ(Ṽ) = σ(W), as claimed. This completes the proof of Claim (i). To prove Claim (ii), note that under Markovianity and strict positive-definiteness of k, Gf = f implies that f is µ-a.e. constant. In addition, by ergodicity, V f = 0 implies again that f is µ-a.e. constant. It then follows that Af = 0 ⟹ V(Gf) = 0 ⟹ Gf is µ-a.e. constant ⟹ f is µ-a.e. constant. This shows that 0 is a simple eigenvalue of A with constant corresponding eigenfunctions. Therefore, since σ_a(A) = σ_a(B) = σ_a(Ṽ) = σ_a(W), 0 is also a simple eigenvalue of B, Ṽ, and W, and the constancy of the corresponding eigenfunctions follows directly from the definitions of these operators. Next, for Claim (iii), fix a nonzero eigenvalue iω of A. By compactness of this operator, the corresponding eigenspace is finite-dimensional, and thus the injective operator G^{1/2} maps every basis of this eigenspace to a linearly independent set. By Lemma 15(iii) and Claim (i), this set is actually a basis of the eigenspace of Ṽ at eigenvalue iω. As a result, every eigenfunction of Ṽ at nonzero eigenvalue lies in the range of G^{1/2}. Moreover, it follows from Claim (ii) that every eigenfunction of Ṽ at eigenvalue 0 is constant, and thus also lies in the range of G^{1/2}. We therefore conclude that every eigenfunction of Ṽ lies in the range of G^{1/2}, and thus in the domain of G^{−1/2}, as claimed. Next, since z̃_j ∈ ran G^{1/2}, Ṽz̃_j = Ṽ G^{1/2}G^{−1/2}z̃_j = G^{1/2}V G G^{−1/2}z̃_j = G^{1/2}A z_j. Since Ṽz̃_j = iω_j z̃_j = iω_j G^{1/2}z_j and G^{1/2} is injective, it then follows that Az_j = iω_j z_j. It remains to be proved that the z_j form an unconditional Schauder basis.
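The eigenvalue and eigenfunction correspondences being assembled here can be verified in a finite-dimensional analog (a sketch with our own matrix stand-ins, not objects from the paper): eigenvectors z̃_j of Ṽ = G^{1/2}VG^{1/2} yield z_j = G^{−1/2}z̃_j (eigenvectors of A = VG) and z′_j = G^{1/2}z̃_j (eigenvectors of B = GV), the two families are biorthogonal, and A, B, Ṽ, and W = KVK* share the same purely imaginary spectrum.

```python
import numpy as np

rng = np.random.default_rng(5)
nd = 4
M = rng.standard_normal((nd, nd))
Vm = M - M.T                          # skew-symmetric stand-in for V
K = rng.standard_normal((nd, nd))     # square, (almost surely) invertible
G = K.T @ K                           # strictly positive definite

w, Q = np.linalg.eigh(G)
sqrtG = Q @ np.diag(np.sqrt(w)) @ Q.T # G^{1/2}
Vt = sqrtG @ Vm @ sqrtG               # real skew-symmetric, analog of V-tilde

# Eigenvectors of the skew-symmetric Vt via the Hermitian matrix i*Vt.
ev, Zt = np.linalg.eigh(1j * Vt)      # Zt has orthonormal columns (the z~_j)
omega = -ev                           # Vt z~_j = i omega_j z~_j

Z  = np.linalg.solve(sqrtG, Zt)       # z_j  = G^{-1/2} z~_j
Zp = sqrtG @ Zt                       # z'_j = G^{1/2} z~_j

A = Vm @ G
B = G @ Vm
W = K @ Vm @ K.T
assert np.allclose(A @ Z,  Z  * (1j * omega), atol=1e-6)     # A z_j  = i omega_j z_j
assert np.allclose(B @ Zp, Zp * (1j * omega), atol=1e-6)     # B z'_j = i omega_j z'_j
assert np.allclose(Z.conj().T @ Zp, np.eye(nd), atol=1e-8)   # biorthogonality
assert np.allclose(np.sort(np.linalg.eigvals(W).imag), np.sort(omega), atol=1e-6)
```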
First, note that in Claim (iv), each z′_j is an eigenfunction of B at eigenvalue iω_j, by Lemma 15(iv). Moreover, for every j, k ∈ N₀, we have ⟨z_j, z′_k⟩_µ = ⟨G^{−1/2}z̃_j, G^{1/2}z̃_k⟩_µ = ⟨z̃_j, z̃_k⟩_µ = δ_{jk}, which shows that {z′_0, z′_1, ...} is a dual sequence to {z_0, z_1, ...}. Fix the φ_j from (9) as an orthonormal basis for L²(µ); in what follows, this basis will be used to represent operators and collections of vectors as N × N matrices. Let Z, L, and U denote the matrices with entries Z_{i,j} := ⟨φ_i, z_j⟩_µ, L_{i,j} := ⟨z′_i, φ_j⟩_µ, and U_{i,j} := ⟨φ_i, z̃_j⟩_µ, for i, j ∈ N₀. Then note that Z and L have ℓ²-summable columns and rows, respectively. Moreover, LZ = Id, L = U*Λ^{1/2}, Z = Λ^{−1/2}U, where Λ = diag(λ₀, λ₁, ...). Now define I_k to be the diagonal matrix with the first k entries 1 and the rest 0. Then, Z I_k L = Λ^{−1/2} U I_k U* Λ^{1/2} = Λ^{−1/2} I_k Λ^{1/2} = I_k, and lim_{k→∞} Z I_k L = Id strongly. (12) By Lemma 2.1 of [57], (12) implies that the columns of Z correspond to a Schauder basis in the chosen basis. This means that the z_j form a Schauder basis. To see that it is also unconditional, note that if the z_j are permuted, (12) would still hold, but with the rows and columns of U, Z, L, and Λ permuted. Now, every Schauder basis has a unique dual sequence, which is also a Schauder basis (e.g., [58]). Thus {z′_j : j ∈ N₀} is an unconditional Schauder basis too, proving Claims (iii) and (iv). In Claim (v), the fact that the ζ_j are eigenfunctions of W follows from Lemma 15(ii). We also have ⟨ζ_i, ζ_j⟩_H = ⟨Kz_i, Kz_j⟩_H = ⟨z_i, Gz_j⟩_{L²(µ)} = ⟨z_i, z′_j⟩_{L²(µ)} = δ_{ij}, establishing orthonormality of the ζ_j, and proving the claim. Finally, in Claim (vi), first note that all of the summations are well-defined and independent of ordering, due to the unconditionality of all the bases involved. The results for Ṽ and W follow from standard properties of compact, skew-adjoint operators.
we will only prove the representation of B, and omit the proofs for the representations of A as it is exactly analogous to the proof for B. Every f ∈ L 2 (µ) has an expansion f = ∞ j=0 a j z j , with the summation holding in L 2 (µ)-sense. Then since Bz j = iω j z j and since B is a bounded operator, B(f ) = B ∞ j=0 a j z j = ∞ j=0 a j B(z j ) = ∞ j=0 a j iω j z j . The fact that B(f ) = ∞ j=0 z j , f µ iω j z j will follow from this identity for the coefficients a j , z j , f µ = z j , k∈N a k z k µ = k∈N a k z j , z k µ = k∈N a k δ j,k = a j . We have thus shown that B = R iωdE(ω), and E(∅) = 0. For a fixed Borel set U ⊂ C, π B (U ) is a bounded linear operator. The countable additivity π B follows from directly from the definition. Note that for each i, j ∈ N 0 , each of the maps E j is a projection, satisfying E i • E j = δ i,j E j . Thus, E(U )E(V ) = E(U ∩ V ). Thus, π B is a projection-valued spectral measure, with a discrete support {iω j : j ∈ N 0 }. This completes the proof of the claim, and also of Theorem 8. Proof of Theorem 9. We will only prove the claim for B τ , as the proofs for A τ are exactly analogous. We begin with the proof of Claim (i). Since G τ → Id strongly by Assumption 4, lim τ →0 + (B τ − V )(f ) µ = lim τ →0 + (G τ V − V )(f ) µ = lim τ →0 + (G τ − Id)(V f ) µ = 0, ∀f ∈ D(V ). LetẼ τ denote the spectral measure ofṼ τ . Now note that for every τ > 0, B τ • G 1/2 τ = G 1/2 τ •Ṽ τ , E τ • G 1/2 τ = G 1/2 τ •Ẽ τ , K * • W τ = B τ • K * , K * • E τ = E τ • K * .(13) The first equality follows from definition, the second follows from the first and from the representation formulae in Theorem 8. To prove Claim (ii), we use the condition in Assumption 4 thatṼ τ converges strongly to V on the space D(V 2 ), as τ → 0 + . SinceṼ τ is skew-adjoint, by Lemma 11 (i),Ṽ τ converges to V in the strong resolvent sense. Thus by Proposition 10(i), for every continuous, bounded function Z : iR → R, Z(Ṽ τ ) converges strongly to Z(V ). 
Since G 1/2 τ is a uniformly bounded family of operators converging to the identity, we have that G 1/2 τ Z(Ṽ τ ) converges strongly to Z(V ) as τ → 0 + . Thus by (13), Z(B τ )G 1/2 τ also converges strongly to Z(V ). Again, by the uniform boundedness principle, Z(B τ ) is a uniformly bounded family of operators, hence Z(B τ )(Id − G 1/2 τ ) converges strongly to zero. Adding these two limits gives that Z(B τ ) converges strongly to Z(V ), as claimed. Claim (ii) now follows from the fact that for every bounded Borel-measurable function φ : iR → C, φ(B) and K * φ(W )N are equal on D(N ). Claim (iii) follows in a similar manner from Proposition 10 (iii). Claim (iv) follows from Proposition 10 (vi). Proof of Theorems 1, 2 and Corollary 3. Proof of Theorem 1. Since p τ , if it is well defined, is symmetric in x and y, it is enough to prove that for fixed y ∈ M , p τ (·, y) is a C r function. Note that r τ,j := λ (τ ) j /λ j , and r τ,j λ −1/2 j ∞ j=0 ∈ ℓ 1 , ∀τ > 0. By Mercer's theorem [59], p(x, y) := ∞ j=0 ψ j (x)ψ j (y) converges absolutely and uniformly on X × X. Since r τ,j converges to zero, the series for p τ (x, y) also converges absolutely and uniformly on X × X. Thus condition (i) of Lemma 14 is satisfied. For every j ∈ N 0 and α ∈ {1, . . . , r}, ψ τ,j = r τ,j λ −1/2 j X p(·, y)φ j (y)dµ(y), ∇ α ψ τ,j = r τ,j λ −1/2 j X ∇ α p(·, y)φ j (y)dµ(y). Thus ψ τ,j C r ≤ r τ,j λ −1/2 j p C r . Let y be fixed, and let f j = ψ τ,j (y)ψ j . Then f j C r ≤ ψ τ,j (y) C 0 (M ) ψ τ,j C r ≤ p C r r τ,j λ −1 j . Thus { f j C r } ∞ j=0 ∈ ℓ 1 , and condition (ii) of Lemma 14 is satisfied. So Lemma 14 applies, and therefore p τ (x, y) = ∞ j=0 ψ τ,j (x)ψ τ,j (y) = ∞ j=0 r τ,j ψ j (x)ψ j (y) = ∞ j=0 f j converges in C r (M ) norm; the limit p τ (x, y) is thus defined everywhere and is a C r function, as claimed. The symmetry of p τ follows again from its Mercer representation. Moreover, the functions φ j are still eigenfunctions of P τ , with eigenvalues λ (τ ) j .
This in turn implies that P τ is also positive definite and p τ is a Markov kernel. In Claim (i), the set {ψ τ,j : j ∈ N 0 } is analogous to the ψ j -basis from (9) and is therefore an orthonormal basis. For every j ∈ N 0 , the function λ −1/2 j ψ j equals λ −1/2 τ,j ψ τ,j and thus lies in H τ . However, this is in the same L 2 (µ) equivalence class as φ j , and the φ j s form an orthonormal basis for L 2 (µ). Thus H τ is dense in L 2 (µ). To show that H ⊃ H τ 1 ⊃ H τ 2 , we need the following inequality: λ j > λ (τ 1 ) j = exp τ 1 (1 − λ −1 j ) > exp τ 2 (1 − λ −1 j ) = λ (τ 2 ) j , for τ 1 < τ 2 . The claim now follows from the fact that H = { ∞ j=0 a j λ −1/2 j φ j : ∞ j=0 |a j | 2 < ∞} and H τ = { ∞ j=0 a j (λ (τ ) j ) −1/2 φ j : ∞ j=0 |a j | 2 < ∞}. This proves Claim (i). Note that the orthonormal basis functions φ j are also eigenfunctions of G τ , with eigenvalues λ τ,j . Thus G τ is strictly positive definite, self-adjoint, and compact. The Markov property follows from the observation that 1 = λ τ,0 > λ τ,1 ≥ . . ., proving Claim (ii). The semi-group property of G τ follows from the fact that the φ j s form an orthonormal eigenbasis for all the G τ s with eigenvalues λ (τ ) j , and for each j ∈ N 0 , λ (τ 1 +τ 2 ) j = λ (τ 1 ) j λ (τ 2 ) j . To prove strong continuity of this semi-group, it is enough to prove continuity as τ → 0 + . Let f ∈ L 2 (µ); then f can be written as the sum f = ∞ j=0 a j φ j . For simplicity, assume that f 2 µ = ∞ j=0 |a j | 2 = 1. Let ε > 0 be fixed; it is enough to show that lim τ →0 + (P τ − Id)(f ) µ < 2ε. Let f N be the partial sum N −1 j=0 a j φ j . Then for N large enough, f − f N µ < ε. Then, (P τ − Id)(f ) = (P τ − Id)(f N ) + (P τ − Id)(f − f N ) = N −1 j=0 a j (λ τ j − 1)φ j + (P τ − Id)(f − f N ). Since P τ is a positive definite Markov operator, P τ ≤ 1, and therefore the last term can be bounded as (P τ − Id)(f − f N ) µ < 2ε. Now note that for each j, λ τ j − 1 = exp τ (1 − λ −1 j ) − 1 converges to 0 as τ → 0 + . This concludes the proof of Theorem 1. Proof of Theorem 2.
Claim (i) of the theorem follows from Theorem 5 (ii); Claim (ii) follows from Theorem 6; and Claims (iii)-(vi) will follow from Theorems 8 and 9 if we can show that p τ satisfies Assumption 4. The condition in this assumption that G τ converges pointwise to the identity has already been proven to hold, in Theorem 1 (iii). It thus remains to be shown that Ṽ τ converges pointwise to V on D(V 2 ), as τ → 0 + . Since G τ is a semigroup, it is equivalent to show that G τ V G τ converges pointwise to V on D(V 2 ). Since G τ is uniformly bounded and converges to the identity, it is equivalent to show that A τ = V G τ converges pointwise to V on D(V 2 ). We will prove this using three small observations, (A1)-(A3). Fix an f ∈ D(V 2 ) and define f τ = P τ f . (A1) f τ ∈ D(V 2 ) ∩ H, for every τ > 0: Note that f τ ∈ ran P τ , and since p τ is a C 2 kernel, ran(P τ ) ⊂ C 2 (M ) ⊂ D(V 2 ). Since the range of P τ lies in H τ , f τ ∈ H τ , and H τ ⊂ H by Theorem 1 (i). (A2) f τ converges to f in H-norm as τ → 0 + : Expanding f in the φ j basis gives f = ∞ j=0 a j φ j and f τ = ∞ j=0 a j λ τ,j φ j , thus f − f τ 2 H = ∞ j=0 (1 − λ τ,j ) 2 |a j | 2 /λ j = N j=0 (1 − λ τ,j ) 2 |a j | 2 /λ j + ∞ j=N +1 (1 − λ τ,j ) 2 |a j | 2 /λ j , so that lim τ →0 + f − f τ 2 H ≤ N j=0 |a j | 2 /λ j lim τ →0 + (1 − λ τ,j ) 2 + ∞ j=N +1 |a j | 2 /λ j = ∞ j=N +1 |a j | 2 /λ j . The inequality on the second line follows from the fact that λ τ,j ∈ (0, 1]. Since f 2 H = ∞ j=0 |a j | 2 /λ j < ∞, the last term converges to 0 as N → ∞. Since N was arbitrary, the limit must be 0, as claimed. (A3) V P * τ f τ is a Cauchy sequence in L 2 (µ): V P * τ : H τ → L 2 (µ) is a bounded operator by Theorem 5 (i). Therefore, since f τ is a Cauchy sequence in H, V P * τ f τ = V P * τ P τ f = A τ f is a Cauchy sequence in L 2 (µ). Thus A τ f converges in L 2 (µ), and the limit must be V f , since f ∈ D(V 2 ) and D(V 2 ) is a core for D(V ).
This proves the claim and completes the proof of Theorem 2. Proof of Corollary 3. The functions φ j form an orthonormal basis of L 2 (µ) and therefore, for L ∈ Z large enough, if f̃ is defined to be the projection of f to the subspace spanned by {φ 0 , . . . , φ L−1 }, then f − f̃ µ < ε. Then since U t is an isometry on L 2 (µ), for all t ∈ R, U t f − U t f̃ µ < ε. Secondly, each φ j lies in H ∞ , and since f̃ is a finite linear combination of the φ j s, f̃ ∈ H ∞ . This proves the first half of Corollary 3. The limit involving τ → 0 + follows from Theorem 2 (v), and the formula for e tW τ f follows from the functional calculus for W τ . This completes the proof of Corollary 3.

Data-driven approximations and convergence

We now take up the problem of approximating the operators in Theorem 2 from a finite time series of observed data and without prior knowledge of the dynamical flow Φ t . As already alluded to in Section 1, besides a lack of knowledge of the underlying dynamics, this problem has the following issues: 1. The support X of the ergodic invariant measure µ is generally a non-smooth subset of the state space manifold L, of zero Lebesgue measure (e.g., a fractal attractor of a dissipative dynamical system). As a result, it is not possible to construct bases of L 2 (µ) (to be used for function and/or operator approximation) by restriction of smooth basis functions defined on L. Moreover, one does not have direct access to the invariant measure µ and the associated L 2 (µ) space, but is limited to working with the sampling measure µ N := (1/N ) N −1 n=0 δ x n supported on the finite trajectory {x 0 , . . . , x N −1 }. Here, δ y is the Dirac delta measure supported on the point y. 2. In realistic experimental scenarios, the sampled states will not lie exactly on X. 3. Measurements are not taken continuously in time, preventing direct evaluation of the action of the generator V on functions. This is called a data-driven setting, and the general assumptions are the following.
The manifold Y will be referred to as the data space. While it usually has the structure of a linear space (e.g., Y = R m ), in a number of scenarios Y can be nonlinear (e.g., directional measurements with Y = S 2 ). Data-driven Hilbert spaces. Associated with the sampling measure µ N is an N -dimensional Hilbert space, L 2 (µ N ), equipped with the inner product f, g µ N := (1/N ) N −1 n=0 f * (x n )g(x n ). This space consists of equivalence classes of complex-valued functions on X having common values at the sampled states x 0 , . . . , x N −1 (i.e., the support of µ N ). It is clear that L 2 (µ N ) is isomorphic to the space C N equipped with a normalized Euclidean inner product. While, in general, there is no correspondence between the elements of L 2 (µ N ) and L 2 (µ) (allowing one, e.g., to perform approximation of functions and operators on L 2 (µ) in subspaces constructed via L 2 (µ N ) functions), the fact that our approximations of V are based on operators acting on RKHSs allows us to construct data-driven subspaces for operator approximation through integral operators on L 2 (µ N ), without invoking the data-inaccessible L 2 (µ) space. We provide below a numerical procedure that has as input the assumptions in Assumption 5, and outputs a predictor for an arbitrary C 0 observable f . We then prove in Theorem 16 that the prediction converges to the true observable in the limit of infinitely many data points, i.e., as N → ∞. Input. A continuous observable f ∈ C 0 (M ); a continuous, bounded function Z : iR → R; a parameter L ∈ Z; and the data set of size N mentioned in Assumption 5. Assume that the underlying dynamical system satisfies Assumption 1. Step 1. The first step is to construct a pull-back kernel from the kernel ρ on the data space. This is done as follows: k(x, y) := ρ(F (x), F (y)), ∀x, y ∈ L. The assumptions on ρ and F in Assumption 5 ensure that k is also a C 2 symmetric, strictly positive-definite kernel on L.
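Before the normalization and spectral steps that follow, it may help to see that the eigenvalue regularization λ j ↦ λ (τ ) j = exp(τ (1 − λ −1 j )) underlying Theorem 1 (and mirrored in Step 3 below) really does define a semigroup. The following is a minimal numerical check; the sample eigenvalues are hypothetical, not taken from the text.

```python
import numpy as np

# Hypothetical Markov-operator eigenvalues: 1 = lam_0 > lam_1 >= ... > 0.
lam = np.array([1.0, 0.8, 0.5, 0.2, 0.05])

def lam_tau(tau):
    # Regularized eigenvalues lambda^{(tau)}_j = exp(tau * (1 - 1/lambda_j)).
    return np.exp(tau * (1.0 - 1.0 / lam))

# Semigroup law G_{t1} G_{t2} = G_{t1+t2}, checked eigenvalue by eigenvalue.
t1, t2 = 0.3, 0.7
assert np.allclose(lam_tau(t1 + t2), lam_tau(t1) * lam_tau(t2))

# Strong continuity at tau = 0: lambda^{(tau)}_j -> 1, i.e. G_tau -> Id.
assert np.allclose(lam_tau(1e-9), 1.0)

# Monotone decay in tau for lam_j < 1, mirroring the nesting of the RKHSs H_tau.
assert np.all(lam_tau(0.1)[1:] > lam_tau(0.2)[1:])
```

The check is purely spectral: since all the operators G τ share the eigenbasis φ j , the operator identities reduce to these scalar identities on the eigenvalues.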
See our discussion later on how to construct ρ when the injectivity condition on F is not satisfied. Step 2. Next, we construct a C 2 , symmetric kernel p N from the kernel k using Coifman and Hirn's [51] method to obtain bistochastic kernels, and then use p N to construct the data-driven family of kernels p τ,N mentioned in Theorems 1, 2. Note that p N is a C 2 function on M × M , and forms a strictly positive definite kernel, which is positive valued everywhere on X × X. It therefore induces an RKHS H N on M . Let P N : L 2 (µ N ) → H N be the integral operator with kernel p N . Then by construction, P N is a Markov operator, i.e., P N 1 N = 1 N , where 1 N is the constant function on L 2 (µ N ) equal to 1 everywhere. Since P N is symmetric and positive definite, it has N positive eigenvalues 1 = λ N,0 > λ N,1 ≥ . . . ≥ λ N,N −1 > 0, and corresponding eigenfunctions φ N,j , which form an orthonormal basis for L 2 (µ N ). Step 3. We will use the kernel p N to define a new family of kernels similar to (7). For each τ > 0, λ τ,N,j := e τ (1−λ −1 N,j ) , ψ (L) τ,N,j := λ 1/2 τ,N,j λ −1/2 N,j ψ N,j , 0 ≤ j < N , where ψ N,j := λ −1 N,j (1/N ) N −1 n=0 φ N,j (x n )p N (x n , ·), and p τ,N (x, y) := N −1 j=0 ψ (L) τ,N,j (x)ψ (L) τ,N,j (y). For each 0 ≤ j < N , φ N,j ∈ H N . The kernel p τ,N is symmetric and so induces an RKHS H τ,N . Step 4. We now compute data-driven versions of W . Let V ∆t be any finite-difference operator on L 2 (µ N ). For the integer parameter L taken as input, define the following L × L matrices: V (L) τ,N,∆t i,j := φ τ,N,i , V ∆t φ τ,N,j µ N , W (L) τ,N,∆t i,j := (λ τ,N,i λ τ,N,j ) 1/2 V (L) τ,N,∆t i,j . The matrices V (L) τ,N,∆t and W (L) τ,N,∆t are both of size L × L. Step 5. Finally, we construct a continuous, data-driven predictor for Z(V )f . Let Π N : C 0 (M ) → L 2 (µ N ) be the inclusion map. Define, lim τ →0 + lim L→∞ lim ∆t→0 + ,N ∆t→∞ Z(V )f − f̃ τ,N,∆t,L µ = 0. (14) (i) Recall the eigenfunctions ζ τ,j and eigenvalues ω τ,j of W τ , from Corollary 3. Then, Remark.
The function f̃ τ,N,∆t,L is a continuous function which can be constructed from data, and which can be evaluated at points outside the sampled set {x 0 , . . . , x N −1 }. This latter property is known as out-of-sample evaluation. f̃ τ,N,∆t,L plays the role of a data-driven approximation of Z(V )f , and in the case when Z(t) = e it , it is the time-t predictor for the observable f . Remark. The order in which the limits are taken in Theorem 16 is important and cannot be changed. The parameter L is the number of eigenfunctions z τ,j used in our prediction and is therefore like a resolution parameter. The first limits taken are those of N and ∆t; this is the data-driven limit, that is, the limit of infinitely many samples taken at arbitrarily small sampling intervals. For the spectral convergence of the data-driven operators to hold, the data-driven limits must be taken at a fixed resolution L. After this limit has been taken, L may be increased to facilitate a finite-rank approximation of the compact operator Z(W τ ). Finally, the limit τ → 0 + is taken as the 0-limit of the Markov semi-group P τ . Remark. The choice of L is independent of the prediction time t, which is an important requirement for a numerical implementation of f̃ τ,N,∆t,L , since one can compute only a finite number of eigenfunctions with reasonable accuracy. Notice that as τ is changed, the eigenfunctions z τ,j and z̃ τ,j just get scaled, so they need not be computed separately for every τ > 0. Proof of Theorem 16. First note that Claim (ii) follows directly from (14). We will prove (14) by performing a series of simplifications. Let f L denote the projection of f onto the span of the functions φ 0 , . . . , φ L−1 , the eigenfunctions of the integral operator P (see (7)). For every L ∈ N, f L ∈ D(N τ ).
Now observe that lim L→∞ Z(V )f − Z(V )f L µ = 0, and, for fixed L, lim τ →0 + Z(V )f L − P * τ Z(W τ )N τ f L µ = 0. (15) The first limit follows from Theorem 2 (v), and the second follows from the fact that Z(V ) is a bounded operator. Let π τ L be the orthogonal projection onto the subspace of H τ spanned by {N τ φ j : j = 0, . . . , L − 1}. Define W (L) τ to be the rank-L approximation of W τ , i.e., W (L) τ := π τ L W τ π τ L ; then lim L→∞ W (L) τ − W τ Hτ = 0, and lim L→∞ Z(W τ )N τ f L − Z(W (L) τ )N τ f L C 0 (X) = 0. (16) The second equality follows from the fact that W τ is a compact operator, so it is the limit, in the operator-norm topology on H τ , of these finite-rank approximations. The third limit follows from this operator-norm convergence. Recall the RKHS H τ,N defined after Step 3; the kernel p τ,N induces an integral operator P τ,N : L 2 (µ N ) → H τ,N . Similarly, one can define a Nyström extension N τ,N : L 2 (µ N ) → H τ,N . Now, analogously to (8), define W τ,N,∆t : H τ,N → H τ,N as W τ,N,∆t = P τ,N V ∆t P * τ,N . The relations between these various operators are summarized in a commutative diagram built from the maps Π N : C 0 (M ) → L 2 (µ N ), N τ,N , V ∆t , P τ,N , P * τ,N , and W τ,N,∆t . Similarly to W (L) τ , define W (L) τ,N,∆t to be the restriction of W τ,N,∆t to the L-dimensional subspace of H τ,N spanned by ψ τ,N,0 , . . . , ψ τ,N,L−1 . An important observation now is that f̃ τ,N,∆t,L = Z(W (L) τ,N,∆t )N τ,N Π N f. (17) Now note that (14) is proved by the following limit, by combining it with the limits in (15), (16) and (17): lim ∆t→0 + ,N ∆t→∞ Z(W (L) τ,N,∆t )N τ,N Π N f − Z(W (L) τ )N τ f L C 0 (X) = 0. (18) To prove (18), we have to compare the two L-dimensional operators W (L) τ and W (L) τ,N,∆t . We need two steps: (a) compare the matrix representations of these operators in the bases {ψ τ,0 , . . . , ψ τ,L−1 } and {ψ τ,N,0 , . . . , ψ τ,N,L−1 }, respectively; and (b) prove that the basis functions themselves converge in C 0 (X) norm as N → ∞. This will not only prove (18) but also Claim (i) of the theorem. To prove (b), note that ψ τ,j and ψ τ,N,j are, respectively, the j-th eigenfunctions of the kernel integral operators P τ and P τ,N .
The kernel p τ,N is the data-driven version of the kernel p τ , and it was shown in [30] that lim N →∞ λ τ,N,j = λ τ,j , lim N →∞ ψ τ,N,j − ψ τ,j C 0 (X) = 0, ∀j ∈ N 0 . It remains to prove (a). In the bases specified, the (i, j)-th entries of the matrices representing the two operators are W (L) τ i,j = (λ τ,i λ τ,j ) 1/2 φ τ,i , V ∆t φ τ,j µ , W (L) τ,N,∆t i,j = (λ τ,i λ τ,j ) 1/2 φ τ,N,i , V ∆t φ τ,N,j µ N . The inner products were shown to converge as N ∆t → ∞, ∆t → 0 + in Proposition 36, [18], and we have already established convergence of the λ s. This completes the proof of Theorem 16. Kernels on data space. To make the reproducing kernel k : X × X → C strictly positive definite, the kernel κ on the data space Y itself must be strictly positive definite. There are many ways to choose such kernels; we use the following class, called radial Gaussian kernels: κ(y 1 , y 2 ) = exp(−d 2 (y 1 , y 2 )/ε), where d : Y × Y → R is the Euclidean metric and ε > 0 is a bandwidth parameter. Such kernels are strictly positive definite. Another requirement for the pull-back k (Step 1) to be strictly positive definite is that the observation map F : X → Y be injective. This is often not satisfied in real-world applications, for example when F is a low-dimensional observation map providing only partial information. The remedy for such cases is to incorporate delays into the map, i.e., using the delay-coordinate map F Q as a new observation map. It can be shown [36] that under mild genericity assumptions on F and Φ ∆t , for Q large enough, F Q is a diffeomorphism onto its image.

Examples and discussions

In this section, we apply the procedure described in Section 8 to ergodic dynamical systems with different types of spectra. The goal is to demonstrate that the results of Theorems 2 and 16 hold, and that the method is effective in (i) identifying Koopman eigenfunctions and eigenfrequencies, and (ii) forecasting in chaotic systems.
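Before turning to the example systems, the kernel construction just described (delay embedding, pull-back Gaussian kernel, bistochastic normalization, and the finite-difference generator matrix of Steps 1-4) can be sketched end to end. The toy scalar observable, the median-based bandwidth, and the central finite difference below are our own illustrative choices, not prescriptions from the text.

```python
import numpy as np

# Toy data: a scalar (non-injective) observation of a flow, made injective
# along trajectories by Q delays (the delay-coordinate map F_Q).
dt, N, Q = 0.01, 400, 10
t = dt * np.arange(N + Q - 1)
f = np.sin(np.sqrt(2.0) * t)[:, None]                          # partial observation
Y = np.hstack([f[Q - 1 - q : len(f) - q] for q in range(Q)])   # shape (N, Q)

# Step 1: pull-back kernel k(x_i, x_j) = rho(F_Q(x_i), F_Q(x_j)), rho Gaussian.
D2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
eps = np.median(D2[D2 > 0])                                    # ad hoc bandwidth
K = np.exp(-D2 / eps)

# Step 2: bistochastic (Coifman-Hirn style) normalization against mu_N:
# p_N(x,y) = int k(x,z) k(y,z) / (d_N(x) d_N(y) q_N(z)) dmu_N(z).
d = K.mean(axis=1)
q = (K / d[:, None]).mean(axis=0)
A = K / (d[:, None] * np.sqrt(q)[None, :])
P = (A @ A.T) / N / N              # matrix of the Markov operator P_N on L^2(mu_N)
assert np.allclose(P.sum(axis=1), 1.0)   # P_N 1_N = 1_N

# Eigenpairs, normalized so that <phi_i, phi_j>_{mu_N} = delta_ij.
lam, phi = np.linalg.eigh(P)
lam, phi = lam[::-1], phi[:, ::-1] * np.sqrt(N)

# Step 3: regularized eigenvalues lambda_{tau,N,j} = exp(tau (1 - 1/lambda_{N,j})).
tau, L = 1e-3, 5
lam_tau = np.exp(tau * (1.0 - 1.0 / lam[:L]))

# Step 4: generator matrix from a central finite difference V_dt,
# then the compactified, explicitly skew-symmetrized version W.
Phi = phi[:, :L]
dPhi = (Phi[2:] - Phi[:-2]) / (2 * dt)                         # (V_dt phi_j), interior
V = Phi[1:-1].T @ dPhi / N                                     # <phi_i, V_dt phi_j>
W = np.sqrt(np.outer(lam_tau, lam_tau)) * 0.5 * (V - V.T)      # L x L, skew-symmetric
freqs = np.sort(np.imag(np.linalg.eigvals(W)))                 # candidate frequencies
```

The eigenvalues of the skew-symmetric matrix W are purely imaginary, so their imaginary parts play the role of the approximate eigenfrequencies ω τ,j .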
We consider the following two systems, whose spectra are respectively pure point and absolutely continuous. 1. A linear quasiperiodic flow R α 1 ,α 2 on T 2 , defined as dR t α 1 ,α 2 (θ)/dt = (α 1 , α 2 ), θ = (θ 1 , θ 2 ) ∈ T 2 , α 1 = 1, α 2 = √ 30, (19) and observed through an embedding F : T 2 → R 3 given by F (θ 1 , θ 2 ) = (sin θ 1 cos θ 2 + sin θ 2 , − cos θ 1 cos θ 2 , sin θ 2 ). (20) This system has a pure point Koopman spectrum, consisting of eigenfrequencies of the form n 1 α 1 + n 2 α 2 with n 1 , n 2 ∈ Z. Because α 1 and α 2 are rationally independent, the set of eigenfrequencies is dense in R, which makes the problem of numerically distinguishing eigenfrequencies from non-eigenfrequencies non-trivial despite the simplicity of the underlying dynamics. 2. The Lorenz 63 (L63) flow [46], Φ t l63 : R 3 → R 3 , generated by the C ∞ vector field V with components (V (x) , V (y) , V (z) ) at (x, y, z) ∈ R 3 given by V (x) = σ(y − x), V (y) = x(ρ − z) − y, V (z) = xy − βz, (21) where β = 8/3, ρ = 28, and σ = 10. The system is sampled through the observation map F : R 3 → R 3 with F (x, y, z) := (x, y, z). The L63 flow is known to have a chaotic attractor X l63 ⊂ R 3 with fractal dimension ≈ 2.06 [60], supporting a physical invariant measure [48] with a corresponding purely continuous spectrum of the generator [61]. That is, there exist no nonzero Koopman eigenfrequencies for this system. Methodology. The following steps describe sequentially the entire numerical procedure carried out. 1. Numerical trajectories x 0 , x 1 , . . . , x N −1 , with x n = Φ n∆t (x 0 ), of length N are generated, using a sampling interval ∆t = 0.01 in all cases. In the L63 experiments, we let the system relax towards the attractor, and set x 0 to a state sampled after a long spinup time (4000 time units); that is, we formally assume that x 0 has converged to the ergodic attractor. We use the ode45 solver of Matlab to compute the trajectories.
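The text integrates the trajectories with Matlab's ode45; an analogous sketch with SciPy's adaptive RK45 integrator is given below. The spinup here is much shorter than the 4000 time units used in the text, purely to keep the sketch cheap.

```python
import numpy as np
from scipy.integrate import solve_ivp

def l63(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Lorenz 63 vector field (21): (sigma(y-x), x(rho-z)-y, xy-beta z).
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.01
# Spin up towards the attractor (shortened relative to the text's 4000 units).
spin = solve_ivp(l63, (0.0, 50.0), [1.0, 1.0, 1.0], rtol=1e-9, atol=1e-9)
x0 = spin.y[:, -1]

# Sample x_n = Phi^{n dt}(x_0) at the fixed interval dt.
n_samp = 2000
t_eval = dt * np.arange(n_samp)
sol = solve_ivp(l63, (0.0, t_eval[-1]), x0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
X = sol.y.T   # shape (n_samp, 3); with F = identity, this is the observed series
```

The rows of X are then fed to the kernel construction of the previous section.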
For both the systems R t α 1 ,α 2 and Φ t l63 , N = 64000. 2. The observation map F described for each system is used to generate the respective time series F (x 0 ), F (x 1 ), . . . , F (x N −1 ). This dataset forms the basis of all subsequent computations. The kernel k is obtained by starting with a Gaussian kernel ρ on the data space. 3. The eigenpairs (φ N,j , λ N,j ) are computed for j ∈ {0, . . . , L} using Matlab's eigs iterative solver, for L = 500, 750 respectively. Results. See Figures 1-5 for the results of applying our methods. 1. The flow R t α 1 ,α 2 (19) has a purely discrete spectrum, and the Koopman eigenfunctions are the usual Fourier basis on the 2-torus. Fig. 1 shows that the eigenfunctions and eigenvalues of W τ converge to those of V . The time plots of these eigenfunctions show periodicity, which supports the fact that they converge to the Koopman eigenfunctions, which themselves are time-periodic. 2. The flow Φ t l63 (21) has a purely continuous spectrum; Fig. 2 shows that the eigenfunctions of the approximation W τ have no periodicity in time. The first and the third eigenfunctions appear to have a support which is localized in phase space, and therefore in time, since it is a continuous-time flow. 3. Figure 3 checks the convergence of the spectrum of W τ , which is discrete, to the spectrum of V . A quantity which measures the smoothness / oscillatory nature of the eigenfunctions is their Dirichlet energy, defined as E Dir (f ) = f 2 H / f 2 µ − 1, ∀f ∈ H. (23) The Dirichlet energies of the W τ -eigenfunctions ζ τ,j are seen to converge for the R t α 1 ,α 2 -flow. This seems to indicate that the ζ τ,j converge in the L 2 (µ) sense. The eigenvalues λ τ,j , which are continuous functions of τ , are seen to converge for each j as τ → 0 + , as stated in Theorem 2 (vi). For the Lorenz 63 flow, for each j, the Dirichlet energy of ζ τ,j keeps increasing as τ → 0 + . This indicates the lack of any actual eigenfunctions to converge to. 4.
Figures 4 and 5 show that the discrete spectrum of W τ , in combination with the functional calculus of compact skew-adjoint operators, can be used for prediction purposes. Corollary 3 and Theorem 16 suggest that the L 2 (µ) error of the forecasts converges to 0 in the limit of N → ∞, L → ∞ and τ → 0 + . The forecast error for R t α 1 ,α 2 is seen to grow linearly with prediction time t. This lack of exponential growth of errors is supported by the fact that R t α 1 ,α 2 has zero Lyapunov exponents. 5. The forecast error grows much faster for Φ t l63 . The graph shows that initially, there is an exponential growth in the error, as expected from the presence of a positive Lyapunov exponent for Φ t l63 . Figure 4: Data-driven prediction of the components F 1 and F 3 of the embedding F of the 2-torus into R 3 (left and center columns) and the observable exp(F 1 + F 3 ) (right column) for the quasiperiodic torus rotation (19), using the operator e tW τ with τ = 10 −5 . Top row: Comparison of the true and predicted signals as a function of time for a fixed initial condition. Bottom row: Normalized L 2 error as a function of lead time, computed for an ensemble of forecasts initialized from 60,000 initial conditions sampled along a trajectory independent from the training data. Figure 5: Data-driven prediction of the L63 (21) state vector components x 1 , x 2 , and x 3 using the operator e tW τ with τ = 10 −5 . Top row: Comparison of the true and predicted signals as a function of time for a fixed initial condition. Bottom row: Normalized L 2 error as a function of lead time, computed for an ensemble of forecasts initialized from 60,000 initial conditions sampled along an L63 trajectory independent from the training data. 6. Note that the limit in Corollary 3 holds for a fixed prediction time t, so the value of the resolution parameter L which achieves a desired level of accuracy for a certain value of t may not do so for a different value of t.
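The Dirichlet energy E Dir used to diagnose the eigenfunctions in Figure 3 is cheap to evaluate in the eigenbasis of the kernel integral operator: for f = Σ j a j φ j one has f 2 µ = Σ j |a j | 2 and f 2 H = Σ j |a j | 2 /λ j . A small sketch, with hypothetical coefficients and eigenvalues:

```python
import numpy as np

def dirichlet_energy(a, lam):
    """E_Dir(f) = ||f||_H^2 / ||f||_mu^2 - 1 (eq. (23)) for f = sum_j a_j phi_j,
    using ||f||_mu^2 = sum_j |a_j|^2 and ||f||_H^2 = sum_j |a_j|^2 / lam_j."""
    a, lam = np.asarray(a, dtype=float), np.asarray(lam, dtype=float)
    return float(np.sum(a**2 / lam) / np.sum(a**2) - 1.0)

# Hypothetical kernel eigenvalues; a smaller lam_j means a rougher phi_j.
lam = np.array([1.0, 0.5, 0.25])

# Constant functions (all weight on lam_0 = 1) have zero Dirichlet energy ...
assert dirichlet_energy([1.0, 0.0, 0.0], lam) == 0.0
# ... while weight on rougher eigenfunctions raises the energy.
assert dirichlet_energy([0.0, 0.0, 1.0], lam) > dirichlet_energy([0.0, 1.0, 0.0], lam)
```

In the experiments of Figure 3, the growth or convergence of these energies as τ → 0 + is what distinguishes genuine Koopman eigenfunctions from spurious ones.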
The two flows that we investigated illustrate that this dependence of L on t is different for the two systems. Summary. We have thus demonstrated that our methods are effective in approximating the functional calculus of the generator. Assumption 1. Φ t : M → M, t ∈ R is a continuous-time, continuous flow on a metric space M. There exists a forward-invariant, m-dimensional C r manifold M ⊆ M, such that the restricted flow map Φ t | M is also C r . X ⊆ M is a compact invariant set, supporting a Borel, ergodic, invariant probability measure µ. (i) Its nonzero eigenvalues are purely imaginary, occur in complex-conjugate pairs, and accumulate only at zero. Moreover, there exists an orthonormal basis of H consisting of corresponding eigenfunctions. (ii) It generates a norm-continuous, one-parameter group of unitary operators e tW : H → H, t ∈ R. 0 is a simple eigenvalue of each of the operators A, B, Ṽ , and W , and the corresponding eigenfunctions are the constant functions. (iii) Every z̃ j lies in the domain of G −1/2 . The set {z j = G −1/2 z̃ j : j ∈ N 0 } consists of eigenfunctions of A corresponding to the eigenvalues {iω j : j ∈ N 0 }, and forms an unconditional Schauder basis of L 2 (µ). (iv) The set {z 0 , z 1 , . . .} with z j = G 1/2 z̃ j is a (not necessarily orthogonal) unconditional Schauder basis of L 2 (µ), consisting of eigenfunctions of B corresponding to the same eigenvalues, {iω 0 , iω 1 , . . .}. Moreover, it is the unique dual sequence to the {z j }, satisfying z j , z l µ = δ jl . (v) The set {ζ 0 , ζ 1 , . . .} with ζ j = Kz j is an orthonormal basis of H consisting of eigenfunctions ζ j of W corresponding to the eigenvalues iω j . (vi) The operators A, B, Ṽ , and W are respectively characterized by the projection-valued measures E , E, Ẽ : B(R) → L(L 2 (µ)) and E : B(R) → L(H), such that Moreover, Ẽ and E are unitarily equivalent under conjugation by the unitary operator U : L 2 (µ) → H from Theorem 7, i.e., Ẽ(U ) = U * E(U )U. Nyström extension.
We begin by describing the Nyström extension in RKHS. In what follows, H is an RKHS on M with reproducing kernel k, ν an arbitrary finite Borel measure with compact support S ⊆ M , and K : L 2 (ν) → H the corresponding integral operator defined via (6). The Nyström extension operator N : D(N ) → H, with D(N ) ⊂ L 2 (ν), extends elements of its domain, which are equivalence classes of functions defined up to sets of ν measure zero, to functions in H, which are defined at every point in M and can be pointwise evaluated by continuous linear functionals. Specifically, introducing the functions Proposition 10 . 10Suppose that T τ : D(T τ ) → H is a sequence of skew-adjoint operators converging in strong resolvent sense as τ → 0 + to an operator T : D(T ) → H, (T is necessarily skew-adjoint). Let Θ τ : B(R) → L(H) and Θ : B(R) → L(H) be the spectral measures of T τ and T , respectively. Then: (i) For every bounded, continuous function Z : iR → C, Z(T τ ) converges strongly to Z(T ). (ii) Let J ⊂ J ⊂ iR be two bounded intervals. Then for every Proof. By Proposition 10.1.9, [53] Claim (i) is actually an equivalent characterization of strong resolvent convergence. Claims (v) and (vi) are results from the perturbation theory of normal operators, proved in Theorem 2, Chapter 8, [55], and Corollary 10.2.2. [53] respectively. To prove Claim (ii), let f : R → R be a piecewise linear continuous function which equals 1 on J and with support contained in J. Then note that the inequality 1 J ≤ f ≤ 1 J everywhere. Thus, for each n ∈ N, 1 J (A n ) ≤ f (A n ) and f (A) ≤ 1 J (A). Since f is continuous and bounded, by Claim (ii), f (A n ) converges strongly to f (A). The proof of Claim (ii) can now be completed by the following inequality. 
lim sup n→∞ We let C 0 (M ; T M ) denote the vector space of continuous vector fields on M (continuous sections of the tangent bundle T M ), and C q (M ; T * n M ) with 0 ≤ q ≤ r the vector space of tensor fields α of type (0, n) having continuous covariant derivatives ∇ j α ∈ C q−j (M ; T * (n+j) M ) up to order j = q. The Riemannian metric induces norms on these spaces defined by Ξ C 0 (M ;T M ) = max x∈M Ξ x , α C 0 (M ;T * n ) = max x∈M α x , and α C q (M ;T * n M ) = q j=0 ∇ j α C 0 (M ;T * (n+j) M ) , where · x denotes pointwise Riemannian norms on tensors. The case C q (M ; T * n M ) with n = 0 corresponds to the C q (M ) spaces of functions. All of the C 0 (M ; T M ) and C q (M ; T * n M ) spaces become Banach spaces with the norms defined above, and by compactness of M , the topology of these spaces is independent of the choice of Riemannian metric. Given an RKHS H on M with a C q reproducing kernel, the inclusion map ι H : H → C q (M ) is bounded [37]. The following result expresses how vector fields can be viewed as bounded operators on functions. Lemma 12. Let M be a compact, C 1 manifold, equipped with a C 0 Riemannian metric. Then, as an operator from C 1 (M ) to C 0 (M ), every vector field Ξ ∈ C 0 (M ; T M ) is bounded, with operator norm Ξ bounded above by Ξ C 0 (M ;T M ) . Assumption 5. There is a time-ordered dataset consisting of the values F (x 0 ), F (x 1 ), . . . , F (x N −1 ) of an injective C 2 observation map F : M → Y into a C 2 manifold Y , evaluated on the trajectory x n = Φ n∆t (x 0 ) of the discrete-time map Φ ∆t : L → L. Here ∆t > 0 is a fixed sampling interval. ρ : Y × Y → R is a C 2 symmetric, strictly positive-definite kernel on Y . d N (x) := X k(x, y)dµ N (y), q N (y) := X k(x, y)/d N (x)dµ N (x), p N (x, y) := X k(x, z)k(y, z)/d N (x)d N (y)q N (z)dµ N (z). The matrices V (L) τ,N,∆t and W (L) τ,N,∆t are both L × L skew-symmetric matrices. Hence the former has L eigenpairs, which are indexed below by j = 0, . . . , L − 1. ζ (L) τ,N,∆t,j ∈ H τ,N .
Note that this is a continuous function. These vectors will be the basis in which we perform the functional calculus Z(W (L) τ,N,∆t ). N,∆t,j ) φ τ,N,j , Π N f µ N ζ (L) τ,N,∆t,j Theorem 16 (Pointwise forecasting). Let Assumptions 1, 2 and 5 hold. Let f ∈ C 0 (M ) be a continuous observable and Z : iR → R is some continuous, bounded function. Then N,∆t,j − ω τ,j | = 0, ∀τ > 0, L ∈ N. ( ii ) iiIn particular, taking Z(it) = e it makesf τ,N,∆t,L a predictor for U t f lim τ →0 + lim L→∞ lim ∆t→0 + ,N ∆t→∞ U t f −f τ,N,∆t,L µ = 0. N,∆t . We need two steps. (a) Compare the matrix representations of these operators in the bases {ψ τ,0 , . . . , ψ τ,L−1 } and {ψ τ,N,0 , . . . , ψ τ,N,L−1 } 25 respectively. F Q : X → Y Q , F Q (x) = F (x), F (Φ −∆t x), . . . , F (Φ −(Q−1)∆t x) , Q ∈ N. Figure 1 : 1Representative eigenfunctions ζ j of the compactified generator Wτ with τ = 10 −5 for the quasiperiodic rotation on the 2-torus(19). Top row: Scatterplots of Re(ζ j ) on the training dataset embedded in R 3 . Bottom row: Time series of the eigenfunctions sampled along a portion of the dynamical trajectory in the training data. We have indicated the Dirichlet energy (23) of the computed eigenfunctions, and the value of ω for each eigenvalue iω. Figure 2 : 2Representative eigenfunctions ζ j of the compactified generator Wτ with τ = 10 −4 for the L63 system(21). Top row: Scatterplots of Re(ζ j ) on the training dataset embedded in R 3 . Bottom row: Time series of the eigenfunctions sampled along portions of the dynamical trajectory in the training data. Observe the qualitatively different geometrical structure of the eigenfunctions on the Lorenz attractor. Despite these differences, the corresponding eigenfunction time series have the structure of amplitude-modulated wavetrains with a fairly distinct carrier frequency. We have indicated the Dirichlet energy (23) of the computed eigenfunctions, and the value of ω for each eigenvalue iω. 
Figure 3 : 3Eigenfrequencies ω j of the compactified generators Wτ as a function of τ > 0, for the torus rotation (20) and the L63 system(21). At each value of τ > 0, we have calculated the Dirichlet energies of L eigenfunctions of Wτ and then calculates the ratio of these energies to the smallest among these L values. The frequencies are colored by the logarithm of the corresponding energy ratio. Only positive frequencies are shown for clarity. (i) Reread abstract, (ii) shorten theorem 9 ? (iii) improve pictures (iv) spell check On the approximation of complicated dynamical behavior. M Dellnitz, O Junge, 10.1137/S0036142996313002SIAM J. Numer. Anal. 36491M. Dellnitz, O. Junge, On the approximation of complicated dynamical behavior, SIAM J. Numer. Anal. 36 (1999) 491. doi:10.1137/S0036142996313002. On the isolated spectrum of the Perron-Frobenius operator. M Dellnitz, G Froyland, 10.1088/0951-7715/13/4/310Nonlinearity. 1171M. Dellnitz, G. Froyland, On the isolated spectrum of the Perron-Frobenius operator, Nonlinearity (2000) 1171-1188doi: 10.1088/0951-7715/13/4/310. Comparison of systems with complex behavior. I Mezić, A Banaszuk, 10.1016/j.physd.2004.06.015doi:10.1016/j. physd.2004.06.015Phys. D. 197I. Mezić, A. Banaszuk, Comparison of systems with complex behavior, Phys. D. 197 (2004) 101-133. doi:10.1016/j. physd.2004.06.015. Spectral properties of dynamical systems, model reduction and decompositions. I Mezić, 10.1007/s11071-005-2824-xNonlinear Dyn. 41I. Mezić, Spectral properties of dynamical systems, model reduction and decompositions, Nonlinear Dyn. 41 (2005) 309- 325. doi:10.1007/s11071-005-2824-x. Almost-invariant sets and invariant manifolds -Connecting probabilistic and geometric descriptions of coherent structures in flows. G Froland, K Padberg, 10.1016/j.physd.2009.03.002Phys. D. 238G. Froland, K. Padberg, Almost-invariant sets and invariant manifolds -Connecting probabilistic and geometric descrip- tions of coherent structures in flows, Phys. 
[ "FORMING PLANETESIMALS BY GRAVITATIONAL INSTABILITY: I. THE ROLE OF THE RICHARDSON NUMBER IN TRIGGERING THE KELVIN-HELMHOLTZ INSTABILITY", "FORMING PLANETESIMALS BY GRAVITATIONAL INSTABILITY: I. THE ROLE OF THE RICHARDSON NUMBER IN TRIGGERING THE KELVIN-HELMHOLTZ INSTABILITY" ]
[ "Aaron T Lee ", "Eugene Chiang ", "Xylar Asay-Davis ", "Joseph Barranco " ]
[]
[]
Gravitational instability (GI) of a dust-rich layer at the midplane of a gaseous circumstellar disk is one proposed mechanism to form planetesimals, the building blocks of rocky planets and gas giant cores. Self-gravity competes against the Kelvin-Helmholtz instability (KHI): gradients in dust content drive a vertical shear which risks overturning the dusty subdisk and forestalling GI. To understand the conditions under which the disk can resist the KHI, we perform three-dimensional simulations of stratified subdisks in the limit that dust particles are small and aerodynamically well coupled to gas, thereby screening out the streaming instability and isolating the KHI. Each subdisk is assumed to have a vertical density profile given by a spatially constant Richardson number Ri. We vary Ri and the midplane dust-to-gas ratio µ_0 and find that the critical Richardson number dividing KH-unstable from KH-stable flows is not unique; rather Ri_crit grows nearly linearly with µ_0 for µ_0 = 0.3-10. Plausibly a linear dependence arises for µ_0 ≪ 1 because in this regime the radial Kepler shear replaces vertical buoyancy as the dominant stabilizing influence. Why this dependence should persist at µ_0 > 1 is a new puzzle. The bulk (height-integrated) metallicity is uniquely determined by Ri and µ_0. Only for disks of bulk solar metallicity is Ri_crit ≈ 0.2, close to the classical value. Our empirical stability boundary is such that a dusty sublayer can gravitationally fragment and presumably spawn planetesimals if embedded within a solar metallicity gas disk ∼4× more massive than the minimum-mass solar nebula; or a minimum-mass disk having ∼3× solar metallicity; or some intermediate combination of these two possibilities. Gravitational instability seems possible without resorting to the streaming instability or to turbulent concentration of particles.
10.1088/0004-637x/718/2/1367
[ "https://arxiv.org/pdf/1010.0248v1.pdf" ]
14,302,925
1010.0248
cb703de51288a471c6e1a31d2c30c97d3a8e736a
FORMING PLANETESIMALS BY GRAVITATIONAL INSTABILITY: I. THE ROLE OF THE RICHARDSON NUMBER IN TRIGGERING THE KELVIN-HELMHOLTZ INSTABILITY

Aaron T. Lee, Eugene Chiang, Xylar Asay-Davis, Joseph Barranco

Draft version October 5, 2010. Preprint typeset using LaTeX style emulateapj v. 11/10/09.

Subject headings: hydrodynamics - instabilities - planetary systems: protoplanetary disks - planets and satellites: formation

1. INTRODUCTION

In the most venerable scenario for forming planetesimals, dust particles in circumstellar gas disks are imagined to settle vertically into thin sublayers ("subdisks") sufficiently dense to undergo gravitational instability (Safronov 1969; Goldreich & Ward 1973; for a review of this and other ways in which planetesimals may form, see Chiang & Youdin 2010, hereafter CY10). Along with this longstanding hope comes a longstanding fear that dust remains lofted up by turbulence. Even if we suppose that certain regions of the disk are devoid of magnetized turbulence because they are too poorly ionized to sustain magnetic activity (Gammie 1996; Bai & Goodman 2009), the dusty sublayer is susceptible to a Kelvin-Helmholtz shearing instability (KHI; Weidenschilling 1980).⁵

1.1. Basic Estimates

The KHI arises because dust-rich gas at the midplane rotates at a different speed from dust-poor gas at altitude. The background radial pressure gradient ∂P/∂r causes dust-free gas at disk radius r to rotate at the slightly non-Keplerian rate

Ω_F = Ω_K (1 − η),   (1)

where Ω_K is the Kepler angular frequency,

η = −(1/ρ_g)(∂P/∂r) / (2 Ω_K² r) ≈ 8 × 10⁻⁴ (r/AU)^{4/7}   (2)

is a dimensionless measure of centrifugal support by pressure, and ρ_g is the density of gas (e.g., Nakagawa et al. 1986; Cuzzi et al. 1993). The numerical evaluation is based on the minimum-mass solar nebula derived by CY10.
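Equations (1)-(2) can be evaluated numerically. The sketch below assumes standard values for GM_sun and the AU (not quoted in the text) and simply codes the η scaling of equation (2); the resulting sub-Keplerian velocity lag η Ω_K r anticipates the ∼25 m s⁻¹ prefactor appearing in equation (3).

```python
import numpy as np

AU = 1.496e13          # cm
GM_sun = 1.327e26      # cm^3 s^-2

def eta(r_AU):
    """Pressure-support parameter: the eq. (2) scaling for the CY10 minimum-mass nebula."""
    return 8e-4 * r_AU**(4.0 / 7.0)

def sub_keplerian_offset(r_AU):
    """eta * Omega_K * r: velocity lag of pressure-supported gas, in cm/s."""
    r = r_AU * AU
    Omega_K = np.sqrt(GM_sun / r**3)
    return eta(r_AU) * Omega_K * r

v = sub_keplerian_offset(1.0)
print(f"eta(1 AU) = {eta(1.0):.1e}, eta*Omega_K*r = {v/100:.0f} m/s")
# eta(1 AU) = 8.0e-04 and the lag is ~24 m/s, consistent with the ~25 m/s
# prefactor in eq. (3); note that eta*Omega_K*r scales as r^(1/14).
```

The r^{1/14} scaling follows directly: η ∝ r^{4/7}, Ω_K ∝ r^{−3/2}, so η Ω_K r ∝ r^{4/7 − 1/2} = r^{1/14}.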
Unlike dust-free gas, dust-rich gas is loaded by the extra inertia of solids and must rotate at more nearly the full Keplerian rate to remain in centrifugal balance. Variations in the dust-to-gas ratio ρ_d/ρ_g with height z result in a vertical shear ∂v_φ/∂z from which free energy is available to overturn the dust layer. The shearing rate across a layer of thickness ∆z is given to order of magnitude by

∂v_φ/∂z ∼ ∆v_φ/∆z = (1/∆z) [µ_0/(1 + µ_0)] η Ω_K r ≈ (25/∆z) [µ_0/(1 + µ_0)] (r/AU)^{1/14} m s⁻¹,   (3)

where ρ_d/ρ_g = µ_0 at the midplane and ρ_d/ρ_g ≪ 1 at altitude (for more details see CY10, or §§2.1-2.2 of this paper). For µ_0 ≫ 1 the velocity difference ∆v_φ saturates at a speed η Ω_K r ∼ 25 (r/AU)^{1/14} m s⁻¹, well below the gas sound speed c_s ∼ 1 km s⁻¹. That the flow is highly subsonic motivates what simulation methods we employ in our study. We might expect the flow to be stabilized if the Brunt-Väisälä frequency

ω_Brunt = [−(g/ρ) ∂ρ/∂z]^{1/2} ∼ [µ_0/(1 + µ_0)]^{1/2} Ω_K   (4)

of buoyant vertical oscillations is much larger than the vertical shearing rate. For the order-of-magnitude evaluation in (4) we approximate the vertical gravitational acceleration g as the vertical component of stellar gravity −Ω_K² ∆z (no self-gravity), and the density gradient ρ⁻¹ ∂ρ/∂z ∼ (ρ_d + ρ_g)⁻¹ ∆(ρ_d + ρ_g)/∆z ∼ (ρ_d + ρ_g)⁻¹ ∆ρ_d/∆z. The last approximation relies in part on the dust density ρ_d changing over a lengthscale ∆z much shorter than the gas scale height. Both |∂v_φ/∂z| and ω_Brunt shrink as µ_0 decreases.⁶ For two-dimensional, heterogeneous, unmagnetized flow, a necessary but not sufficient condition for instability is given by the Richardson number:

Ri ≡ [−(g/ρ)(dρ/dz)] / (dv_φ/dz)² < 1/4 is necessary for instability   (5)

(Miles 1961; see the textbook by Drazin & Reid 2004). The Richardson number is simply the square of the ratio of the stabilizing Brunt frequency (4) to the destabilizing vertical shearing frequency (3).
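The content of equations (3)-(5) can be checked with a few lines: build the order-of-magnitude shear and Brunt frequencies for a layer of thickness ∆z, form their squared ratio, and verify that inverting Ri(∆z) = 1/4 reproduces the marginal thickness of equation (6). All inputs here are the scaling estimates of this section, not exact vertical profiles.

```python
# Order-of-magnitude Richardson number from eqs. (3)-(5): the ratio of the
# squared Brunt frequency to the squared vertical shear rate, for a layer of
# thickness dz and midplane dust-to-gas ratio mu0.
def richardson(mu0, dz, eta, r, Omega_K=1.0):
    load = mu0 / (1.0 + mu0)
    shear = load * eta * Omega_K * r / dz      # eq. (3)
    brunt2 = load * Omega_K**2                 # eq. (4), squared
    return brunt2 / shear**2                   # eq. (5)

# Inverting Ri(dz) = Ri_target reproduces the marginal thickness of eq. (6):
# dz = [mu0/(1+mu0)]^(1/2) * Ri^(1/2) * eta * r.
mu0, eta, r, Ri_target = 1.0, 8e-4, 1.0, 0.25
dz = (mu0 / (1 + mu0))**0.5 * Ri_target**0.5 * eta * r
print(richardson(mu0, dz, eta, r))   # recovers Ri_target = 0.25 (up to rounding)
```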
The critical value of 1/4 arises formally but can also be derived heuristically by energy arguments (e.g., Chandrasekhar 1981). The Richardson criterion does not formally apply to our dusty subdisk, which represents a three-dimensional flow: the KHI couples vertical motions to azimuthal motions, while the Coriolis force couples azimuthal motions to radial motions. (For how the Richardson criterion may not apply to magnetized flows, see Lecoanet et al. 2010.) Nevertheless we may hope the Richardson number is useful as a guide, as previous works have assumed (Sekiya 1998; Youdin & Shu 2002; Youdin & Chiang 2004). In this spirit let us use the Richardson criterion to estimate the thickness of a marginally KH-unstable dust layer. Substitution of (3) and (4) into (5) reveals that⁷

∆z ≈ [µ_0/(1 + µ_0)]^{1/2} Ri^{1/2} η r.   (6)

Since the gas scale height H_g = c_s/Ω_K and η ∼ (H_g/r)², equation (6) indicates that for µ_0 > 1 the marginally unstable dust sublayer is ∼Ri^{1/2} H_g/r ∼ 0.02 Ri^{1/2} times as thin as the gas disk in which it is immersed. Those KH-unstable modes that disrupt the layer should have azimuthal wavelengths (and, by extension, radial wavelengths, because the Kepler shear turns azimuthal modes into radial ones) that are comparable to ∆z. Shorter wavelength modes cannot overturn the layer, while longer wavelength modes grow too slowly (Gómez & Ostriker 2005). How does disk rotation affect the development of the KHI? In a linear analysis, Ishitsu & Sekiya (2003) highlight the role played by the Keplerian shear, characterized by the strain rate

∂Ω_K/∂ ln r = −(3/2) Ω_K,   (7)

in limiting the growth of KH-unstable modes.

⁶ But not indefinitely. In the limit µ_0 → 0, the vertical shearing and Brunt frequencies reach minima set by pressure and temperature gradients in gas (see, e.g., Knobloch & Spruit 1985). The limit µ_0 → 0 is not relevant for our study and not captured by either (3) or (4).
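The competition between the stabilizing Brunt frequency (equation 4) and the Kepler strain rate (equation 7) can be tabulated directly. This brief sketch (order-of-magnitude forms only) shows that the Kepler rate 1.5 Ω_K exceeds ω_Brunt for every µ_0, and dominates it strongly when µ_0 ≪ 1.

```python
# Compare the stabilizing Brunt frequency (eq. 4) with the Kepler strain rate
# (eq. 7), in units of Omega_K, as the midplane dust-to-gas ratio mu0 varies.
kepler_strain = 1.5                      # |d(Omega_K)/d(ln r)| / Omega_K

for mu0 in (0.1, 0.3, 1.0, 10.0):
    brunt = (mu0 / (1.0 + mu0))**0.5     # omega_Brunt / Omega_K, eq. (4)
    print(f"mu0={mu0:5.1f}  brunt/Omega_K={brunt:.2f}  "
          f"kepler dominates: {kepler_strain > brunt}")
# Since mu0/(1+mu0) < 1, the Brunt frequency never exceeds Omega_K, while the
# Kepler rate is fixed at 1.5*Omega_K -- consistent with the text's point
# that Ri alone cannot capture all the relevant dynamics.
```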
The radial shear is implicated because azimuthal motions excited by the KHI are converted to radial motions by the Coriolis force; moreover, the non-axisymmetric pattern excited by the KHI is wound up, i.e., stretched azimuthally by the radial shear. The Kepler rate |∂Ω_K/∂ ln r| is at least as large as ω_Brunt, and can dominate the latter when µ_0 is small. This suggests that Ri does not capture all the relevant dynamics, a concern already clear on formal grounds. In this paper we address this concern head-on, using fully 3D numerical simulations to assess the role of the Richardson number in governing the stability of the dust layer.

1.2. Our Study in Relation to Previous Numerical Simulations

Three-dimensional shearing box simulations of the KHI in dusty subdisks, performed in the limits that dust is perfectly coupled to gas and disk self-gravity is negligible, demonstrate the importance of the Kepler shear. Compared to rigidly rotating disks (Gómez & Ostriker 2005; Johansen et al. 2006), radially shearing disks are far more stable (Chiang 2008; Barranco 2009). The relevance of Ri, or lack thereof, may be assessed by simulating flows with initially spatially constant Ri (Sekiya 1998; Youdin & Shu 2002), and varying Ri from run to run to see whether dust layers turn over. Chiang (2008, hereafter C08) found that when µ_0 > 1, dust layers for which Ri < 0.1 overturn, while those for which Ri > 0.1 do not. In retrospect, we might have anticipated this result, that the critical value Ri_crit dividing stable from unstable runs lies near the canonical value of 1/4, at least for µ_0 > 1, because in this regime of parameter space all the frequencies of the problem are comparable to each other: |∂v_φ/∂z| ∼ ω_Brunt ∼ |∂Ω_K/∂ ln r| ∼ Ω_K when µ_0 > 1 and Ri ≈ 0.1-1. But other simulations of C08 also make clear that Ri does not alone determine stability under all circumstances. For µ_0 ≈ 0.2-0.4, Ri_crit was discovered to drop substantially to ∼0.02 (see his runs S9-S12).
Chiang (2008) speculated that the baroclinic nature of the flow may be responsible (Knobloch & Spruit 1985), but no details were given. In addition to being left unexplained, the findings of C08 require verification. Parameter space was too sparsely sampled to discern trends with confidence. Concerns about numerics (e.g., biases introduced by box sizes that were too small, resolutions too coarse, and runs terminated too early) also linger. At least one numerical artifact marred the simulations of C08: the KHI manifested first at the "co-rotation" radius where the mean azimuthal flow speed was zero (see his figure 8). But in a shearing box, by Galilean invariance, there should be no special radius. It was suspected, but not confirmed, that errors of interpolation associated with the grid-based advection scheme used by C08 artificially suppressed the KHI away from co-rotation. For the problem at hand, the spectral code developed by Barranco & Marcus (2006) and modified by Barranco (2009, hereafter B09) to treat mixtures of dust and gas is a superior tool to the grid-based ZEUS code utilized by C08. Working in Fourier space rather than configuration space, the simulations of the KHI by B09 did not betray the co-rotation artifact mentioned above. Spectral methods, often used to model local (WKB) dynamics, are appropriate here because the structures of interest in the subdisk have dimensions tiny compared to the disk radius (by at least a factor Ri^{1/2} η according to equation 6) and even the gas scale height. At the same computational expense, spectral algorithms typically achieve greater effective spatial resolution than their grid-based counterparts (Barranco & Marcus 2006). Another advantage enjoyed by the B09 code is that it employs the anelastic approximation, which is designed to treat subsonic flows such as ours.
Having filtered away sound waves, anelastic codes are free to take timesteps set by how long it takes fluid to advect across a grid cell (grid cells themselves move at the local orbital velocity in a shearing coordinate system). By contrast, codes such as ZEUS take mincing steps limited by the time for sound waves to cross a grid cell. The latter constraint is the usual Courant condition for numerically solving problems in compressible fluid dynamics. It was unnecessarily applied by C08 to a practically incompressible flow. In this paper we bring all the advantages of the spectral, anelastic, shearing box code of B09 to bear on the problems originally addressed by C08. We assess numerically the stability of flows characterized by constant Richardson number Ri, systematically mapping out the stability boundary in the parameter space of Ri, midplane dust-to-gas ratio µ_0, and bulk metallicity Σ_d/Σ_g (the height-integrated surface density ratio of dust to gas). Though our simulations may still be underresolved, we rule out box size as a major influence on our results. We offer some new insight into why Ri is not a sufficient predictor of stability. And in the restricted context of our constant Ri flows, we assess the conditions necessary for the midplane to become dense enough to trigger gravitational instability on a dynamical time.

1.3. The Perfect Coupling Approximation vs. The Streaming Instability vs. Turbulent Concentration Between Eddies

Following C08 and B09, we continue to work in the limit that dust is perfectly coupled to gas, i.e., in the limit that particles are small enough that their frictional stopping times t_stop in gas can be neglected in comparison to the dynamical time Ω_K⁻¹. The perfect coupling approximation allows us to screen out the streaming instability, which relies on a finite stopping time and which is most powerful when particles are marginally coupled, i.e., when τ_s ≡ Ω_K t_stop ∼ 0.1-1 (Youdin & Goodman 2005).
Numerical simulations have shown that when an order-unity fraction of the disk's solids is in particles having τ_s = 0.1-1, the streaming instability clumps them strongly and paves the way for gravitational instability (e.g., Johansen et al. 2007, 2009). The particle sizes corresponding to τ_s = 1 depend on the properties of the background gas disk, as well as on the particle's shape and internal density; under typical assumptions, marginally coupled particles are decimeter to meter-sized. It remains debatable whether a substantial fraction of a disk's solid mass is in marginally coupled particles at the time of planetesimal formation, as current proposals relying on the streaming instability assume. Particle size and shape distributions are not well constrained by observations (though see, e.g., Wilner et al. 2005, who showed that centimeter-wavelength fluxes from a few T Tauri stars are consistent with having been emitted by predominantly centimeter-sized particles). Measuring τ_s in disks also requires knowing the gas density, but direct measurements of the gas density at disk midplanes do not exist. Marginally coupled particles (sometimes referred to as "meter-sized boulders") also face the longstanding problem that they drift onto the central star too quickly, within hundreds of years from distances of a few AU in a minimum-mass disk. Johansen et al. (2007) claimed to solve this problem by agglomerating all the boulders into Ceres-mass planetesimals via the streaming instability before they drifted inward. Their simulation presumed, however, that all of the disk's solids began boulder-sized. The concern we have is that even if particle-particle sticking could grow boulders (and sticking is expected to stall at centimeter sizes; Blum & Wurm 2008; CY10), the disk's solids may not be transformed into boulders all at once.
Rather, marginally coupled bodies may initially comprise a minority population on the extreme tail of the particle size distribution. Unless they can transform themselves from a minority to a majority within the radial drift timescale, they would be lost from the nebula by aerodynamic drag. By focussing on the dynamics of the smallest, most well entrained particles having τ_s ≪ 1, our work complements that which relies on the streaming instability. We would argue further that the well coupled limit is potentially more relevant for planet formation. If even the smallest particles having sizes ≪ cm can undergo gravitational collapse to form kilometer-sized or larger planetesimals, nature will have leapfrogged over the marginally coupled regime, bypassing the complications and uncertainties described above.

Particle clumping is not restricted to marginally coupled particles via the streaming instability. Small τ_s particles also clump within the interstices of turbulent, high vorticity eddies (Maxey 1987; Eaton & Fessler 1994; Cuzzi et al. 2008, and references therein; for a review, see CY10). This particle concentration mechanism presumes some gas turbulence, which may be present in the marginally KH-unstable state to which dust settles. Our simulations cannot capture this phenomenon. However, on the scales of interest to us, turbulent clumping might only be of minor significance. Particles of given t_stop are concentrated preferentially by eddies that turn over on the same timescale. Thus the degree of concentration depends sensitively on particle size and the turbulent spectrum. At least in Kolmogorov turbulence, the smallest eddies concentrate particles most strongly because they have the greatest vorticity. The smallest eddies at the inner scale of Kolmogorov turbulence have sizes ℓ_i ∼ ν^{3/4} t_o^{1/4}/δv_o^{1/2}, where ν is the molecular kinematic viscosity, and t_o and δv_o are the turnover time and velocity of the largest, outer scale eddy.
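To make these scales concrete, the sketch below evaluates ℓ_i and the corresponding inner-eddy turnover time at r = 1 AU. The molecular viscosity ν ≈ 10^4 cm² s⁻¹ is an assumed, order-of-magnitude value (not taken from this paper); the outer-scale numbers anticipate the estimates given in the next paragraph.

```python
import math

# Order-of-magnitude inner (Kolmogorov) scale at r = 1 AU.
# All values are illustrative; nu in particular is only an assumed
# order-of-magnitude nebular viscosity.
Omega_K = 2.0e-7          # orbital frequency at 1 AU [1/s]
t_o = 1.0 / Omega_K       # outer-eddy turnover time ~ 1/Omega_K [s]
dv_o = 2.5e3              # outer-eddy velocity ~ 25 m/s = 2500 cm/s
nu = 1.0e4                # molecular kinematic viscosity [cm^2/s] (assumed)

ell_i = nu**0.75 * t_o**0.25 / dv_o**0.5   # inner-scale eddy size [cm]
t_i = math.sqrt(nu * t_o) / dv_o           # inner-scale turnover time [s]

dz_sublayer = 2.0e9       # dust sublayer thickness at Ri = 0.1, r = 1 AU [cm]
print(ell_i, t_i, ell_i / dz_sublayer)
```

The inner eddies come out roughly a million times smaller than the sublayer and turn over in minutes, supporting the claim that they contribute only small-scale, rapidly fluctuating noise.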
Given δv_o ∼ ηΩ_K r ∼ 25 (r/AU)^{1/14} m/s, t_o ∼ Ω_K^{-1}, and values of ν based on the nebular model of CY10, we estimate that ℓ_i ∼ 10^3 (r/AU)^{127/56} cm. This is far smaller than the sublayer thicknesses ∆z ∼ 0.02 Ri^{1/2} H_g ∼ 2 × 10^9 (Ri/0.1)^{1/2} (r/AU)^{9/7} cm considered in this paper. Moreover, the lifetimes of the particle clumps on a given eddy length scale should roughly equal the eddy turnover times, which for the smallest eddies are of order t_i ∼ (ν t_o)^{1/2}/δv_o ∼ 10^2 (r/AU)^{55/28} s. We do not expect such rapid fluctuations in particle density, occurring on such small length scales, to affect significantly the evolution of the slower, larger scale KHI. Turbulent clumping may only serve as a source of noise on tiny scales. The possibility that turbulent clumping could still be significant on larger scales is still being investigated (Hogan & Cuzzi 2007; Cuzzi et al. 2008).

The perfect coupling approximation prevents us from studying how particles sediment out of gas into dusty sublayers, but it does not stop us from identifying what kinds of sublayers are dynamically stable to the KHI. A subdisk with a given density profile is either dynamically stable or it is not, and we can run the B09 code for many dynamical times (typically 60 or more) to decide the answer. In a forthcoming paper we will combine the B09 code with a settling algorithm that will permit us to study how dust settles from arbitrary initial conditions, freeing us from the assumption that the density profile derives from a constant Richardson number.

1.2. Organization of this Paper

Our numerical methods, including our rationale for choosing box sizes and resolutions, are described in §2. Results are presented in §3 and discussed in §4.

2. METHODS

The equations solved by the B09 code are rederived in §2.1. Initial conditions for our simulations are given in §2.2. The code itself is briefly described in §2.3. Our choices for box size and resolution are explained in §2.4.
2.1. Equations

The equations we solve are identical to equations (12a-e) of B09. We outline their derivation here, filling in steps skipped by B09, adjusting the notation, and providing some clarifications. This section may be skimmed on a first reading.

We begin with the equations for an ideal gas perfectly coupled to pressureless dust in an inertial frame:

  dv/dt = −∇Φ − ∇P/(ρ_d + ρ_g),    (8)
  dρ_g/dt = −ρ_g ∇·v,    (9)
  d(ρ_d/ρ_g)/dt = 0,    (10)
  ρ_g C_V dT/dt = −P ∇·v,    (11)
  P = ℜ ρ_g T,    (12)

where d/dt is the convective derivative, ρ_g(d) is the density of gas (dust), P is the gas pressure, and T is the gas temperature. Under the assumption that they are perfectly coupled, gas and dust share the same velocity v, and the dust-to-gas ratio is conserved in a Lagrangian sense. The background potential is provided by the central star of mass M: Φ = −GM/(r² + z²)^{1/2}, where r is the cylindrical radius and z is the vertical distance above the disk midplane. There are five equations for the five flow variables v, ρ_g, ρ_d, P, and T. The thermodynamic constants include the specific heat C_V = ℜ/(γ − 1) at constant volume, the ideal gas constant ℜ = C_P − C_V, the specific heat C_P at constant pressure, and γ = C_P/C_V. Equation (11) is equivalent to the condition that the flow be isentropic [d(P ρ_g^{−γ})/dt = 0]. The code which solves the fluid equations actually employs an artificial hyperviscosity to damp away the smallest scale perturbations (§2.3); in writing down equations (8)-(12), we have omitted the hyperviscosity terms for simplicity.

We move to a frame co-rotating with dust-free gas at some fiducial radius r = R. This frame has angular frequency Ω_F given by (1) with Ω_K = (GM/R³)^{1/2}.
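As a quick consistency check on equations (8)-(12): combining the energy equation (11) with continuity (9) should conserve the entropy function P ρ_g^{−γ} along the flow. The minimal sketch below integrates C_V dT/dt = (P/ρ_g) d ln ρ_g/dt along an arbitrary, assumed Lagrangian density history and confirms that P ρ_g^{−γ} stays constant up to integration error.

```python
import numpy as np

# Verify that the ideal-gas energy equation (11), combined with
# continuity (9), conserves s = P * rho**(-gamma) along a flow history.
gamma, Rgas = 1.4, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 20001)
rho = 1.0 + 0.5 * np.sin(t)                 # prescribed (assumed) density history
dlnrho = np.gradient(np.log(rho), t)

# Energy equation: Cv dT/dt = (P/rho) dlnrho/dt  =>  dT/dt = (gamma-1) T dlnrho/dt
T = np.empty_like(t)
T[0] = 1.0
dt = t[1] - t[0]
for i in range(len(t) - 1):
    T[i + 1] = T[i] + (gamma - 1.0) * T[i] * dlnrho[i] * dt  # forward Euler step

s = Rgas * rho * T * rho**(-gamma)          # P rho^-gamma with P = R rho T
drift = abs(s[-1] / s[0] - 1.0)
print(drift)  # small: the flow is isentropic up to truncation error
```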
We define a velocity v_max using the pressure support parameter η, as given by (2):

  v_max ≡ η|_{r=R} Ω_K R.    (13)

The velocity v_max is the difference in azimuthal velocity between a strictly Keplerian flow and dust-free gas; it is the maximum possible difference in velocity, attained at large µ_0, between gas at the midplane and gas at altitude. The quantities v_max, η, and the background radial pressure gradient are equivalent; specifying one specifies the other two. Our numerical models are labeled by v_max.

In addition to moving into a rotating frame, we also replace the usual cylindrical coordinates (r, φ, z) with local Cartesian coordinates x = r − R, y = (φ − Ω_F t)R, and z. Keeping terms to first order in |x| ∼ |z| ∼ ηR (see the discussion surrounding equation 6) and dropping curvature terms, the momentum equation (8) reads

  dv/dt = −2Ω_K ẑ × v + 3Ω_K² x x̂ − Ω_K² z ẑ − ∇P/(ρ_d + ρ_g) − 2Ω_K² ηR x̂    (14)

where d/dt = ∂/∂t + v_i ∂/∂x_i (i = x, y, z). On the right-hand side, the first term is the Coriolis acceleration, the second combines centrifugal and radial gravitational accelerations, the third represents the vertical gravitational acceleration from the star, and the last term arises from the centrifugal acceleration in a frame rotating at Ω_F = Ω_K. The remaining fluid equations appear the same as (9)-(12), except that v is now measured in a (rigidly) rotating frame.

We measure all flow variables relative to a time-independent reference state (subscripted "ref"), denoting deviations from it by tildes:

  v = v_ref + ṽ = ṽ,
  P = P_ref + P̃,
  ρ_g = ρ_g,ref + ρ̃_g,
  T = T_ref + T̃,
  ρ_d = ρ_d,ref + ρ̃_d = ρ̃_d.

The reference state is defined as follows. It is dust-free (ρ_d,ref = 0) and has constant gas temperature T_ref. The gas in the reference state does not shear, either in the radial or vertical directions, but rotates with a fixed angular frequency Ω_F in the inertial frame (hence v_ref = 0 in the rotating frame).
In the reference state there exists a radial pressure gradient directed outward

  −(1/ρ_g,ref) ∂P_ref/∂r = 2Ω_K² ηR = 2Ω_K v_max    (15)

and a vertical pressure gradient balanced by vertical tidal gravity

  −(1/ρ_g,ref) ∂P_ref/∂z = Ω_K² z.    (16)

Equation (16), together with the isothermal ideal gas law (12), implies that the reference gas density is a Gaussian in z with scale height H_g = (ℜT_ref)^{1/2}/Ω_K.

The flows of interest are subsonic. Mach numbers ǫ ≡ v/c_s peak at v_max/c_s ∼ c_s/(Ω_K R) ∼ 0.02 for gas sound speeds c_s ∼ 1 km/s at R ∼ 1 AU. Such flow is nearly incompressible: |ρ̃_g|/ρ_g,ref ∼ |P̃|/P_ref ∼ |T̃|/T_ref ∼ ǫ². Invoking the anelastic approximation, we keep only terms leading in ǫ in any given equation. Equations (9), (10), and (12) reduce to:

  dρ_g/dt + ρ_g ∇·v = ∂ρ_g/∂t + ∇·(ρ_g v) ≈ ∇·(ρ_g,ref v) = 0    (17)
  d(ρ_d/ρ_g)/dt ≈ d(ρ̃_d/ρ_g,ref)/dt ≡ dµ/dt = 0    (18)
  P̃/ρ_g,ref ≡ h̃ = (ρ̃_g/ρ_g,ref) ℜT_ref + ℜT̃    (19)

where we define µ ≡ ρ̃_d/ρ_g,ref = ρ_d/ρ_g,ref ≈ ρ_d/ρ_g to leading order in ǫ. Equations (17), (18), and (19) match equations (12b), (12c), and (12e) of B09. The anelastic approximation has been employed in the study of atmospheric convection (Ogura & Phillips 1962; Gough 1969), stars (Gilman & Glatzmaier 1981), and vortices in protoplanetary disks (Barranco & Marcus 2000). By eliminating the time derivative in the continuity equation (17), we effectively "sound-proof" the fluid. The simulation timestep is not limited by the sound-crossing time but rather by the longer advection time.

We rewrite our energy equation (11) as follows: replace −∇·v with d ln ρ_g/dt = −d ln T/dt + d ln P/dt to find that

  C_P dT̃/dt = (1/ρ_g) dP/dt
            ≈ (1/ρ_g,ref) v·∇P_ref
            ≈ −v·(2Ω_K² ηR x̂ + Ω_K² z ẑ)    (20)

where for the second line we dropped dP̃/dt in comparison to v·∇P_ref, and for the third line we replaced ρ_g,ref^{−1} ∇P_ref using (15) and (16). Equation (20) matches (12d) of B09 except that for the right-hand side he has a coefficient equal to 1 + T̃/T_ref, which we have set to unity.
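Equation (16), combined with the isothermal ideal gas law (12), implies a Gaussian reference density with scale height H_g = (ℜT_ref)^{1/2}/Ω_K. This can be checked numerically in a few lines; the cgs values below are illustrative assumptions, not parameters from our runs.

```python
import numpy as np

# Check that a Gaussian reference density satisfies vertical hydrostatic
# balance, -(1/rho) dP/dz = Omega_K^2 z, for isothermal P = (R T_ref) rho.
Omega_K = 2.0e-7                   # [1/s] (assumed, ~1 AU)
RT_ref = (1.0e5) ** 2              # R*T_ref = c_s^2 for c_s ~ 1 km/s (assumed)
H_g = np.sqrt(RT_ref) / Omega_K    # gas scale height [cm]

z = np.linspace(-2 * H_g, 2 * H_g, 2001)
rho = np.exp(-z**2 / (2 * H_g**2))   # Gaussian reference density (normalized)
P = RT_ref * rho                      # isothermal pressure

lhs = -np.gradient(P, z) / rho        # -(1/rho) dP/dz
rhs = Omega_K**2 * z                  # vertical tidal gravity
err = np.max(np.abs(lhs[1:-1] - rhs[1:-1])) / np.max(np.abs(rhs))
print(err)  # tiny: hydrostatic balance holds to finite-difference accuracy
```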
Finally, to recover the form of the momentum equation (12a) of B09, first consider the pressure acceleration and isolate the contribution from dust-free gas (−ρ_g^{−1}∇P):

  −∇P/(ρ_d + ρ_g) = −[1/(ρ_d + ρ_g) − 1/ρ_g]∇P − (1/ρ_g)∇P
                  ≈ [µ/(µ + 1)](1/ρ_g)∇P − (1/ρ_g)∇P.    (21)

Now expand

  (1/ρ_g)∇P ≈ (1/ρ_g,ref)∇P_ref + (1/ρ_g,ref)∇P̃ − (ρ̃_g/ρ_g,ref²)∇P_ref
            ≈ (1/ρ_g,ref)∇P_ref + ∇h̃ + (T̃/T_ref)(∇P_ref/ρ_g,ref)
            ≈ −(1 + T̃/T_ref)(2Ω_K² ηR x̂ + Ω_K² z ẑ) + ∇h̃    (22)

where for the last line we used (15) and (16). Insertion of (21) and (22) into (14) yields the anelastic momentum equation (12a) of B09:

  dv/dt = −2Ω_K ẑ × v + 3Ω_K² x x̂ + (T̃/T_ref)(2Ω_K² ηR x̂ + Ω_K² z ẑ) − ∇h̃
          − [µ/(µ + 1)][(1 + T̃/T_ref)(2Ω_K² ηR x̂ + Ω_K² z ẑ) − ∇h̃],    (23)

which isolates the driving term due to dust.

2.2. Initial Conditions

Equilibrium initial conditions (superscripted "†") are specified by five functions: µ = µ†, T̃ = T̃†, h̃ = h̃†, ρ̃_g = ρ̃_g†, and v = v†. For µ†, we use flows characterized by a globally constant Richardson number (Sekiya 1998; Youdin & Shu 2002; Chiang 2008). The conditions Ri = constant, ∂ρ_g/∂z ≪ ∂ρ_d/∂z, and g = −Ω_K² z (no self-gravity) yield

  µ†(z) = [1/(1 + µ_0)² + (z/z_d)²]^{−1/2} − 1,    (24)

where µ_0 is the initial midplane dust-to-gas ratio and

  z_d ≡ Ri^{1/2} v_max/Ω_K    (25)

is a characteristic dust height. The dust density peaks at the midplane and decreases to zero at

  z = ±z_max = ±{[µ_0(2 + µ_0)]^{1/2}/(1 + µ_0)} z_d,    (26)

which is consistent with our order-of-magnitude expression (6). Neither equation (24) nor the code accounts for self-gravity, and therefore we are restricted to modeling flows whose densities are less than that required for the Toomre parameter of the subdisk to equal unity (CY10; see also §4). For the minimum-mass disk of CY10, this restriction is equivalent to µ ≲ 30. Input model parameters include µ_0, Ri, and v_max.
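Equations (24)-(26) are easy to sanity-check numerically: the profile should equal µ_0 at the midplane and vanish exactly at z = ±z_max, decreasing monotonically in between. A sketch with illustrative parameter values:

```python
import numpy as np

def mu_profile(z, mu0, zd):
    """Constant-Richardson-number dust profile, equation (24)."""
    return 1.0 / np.sqrt(1.0 / (1.0 + mu0)**2 + (z / zd)**2) - 1.0

mu0, zd = 10.0, 1.0                                    # illustrative values
zmax = np.sqrt(mu0 * (2.0 + mu0)) / (1.0 + mu0) * zd   # equation (26)

z = np.linspace(0.0, zmax, 1001)
mu = mu_profile(z, mu0, zd)

print(mu[0], mu[-1])   # mu0 at the midplane, 0 at z = zmax
```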
For the gas, we assume

  T̃† = 0    (27)

(initially isothermal) and solve vertical hydrostatic equilibrium for h̃† (the z-component of equation 23):

  ∂h̃†/∂z = −µ† Ω_K² z.    (28)

The functional form for h̃†(z) is not especially revealing and so we do not write it out here. For simplicity we assume that h̃† does not depend on x. From h̃† and T̃† = 0 it follows from (19) that

  ρ̃_g† = ρ_g,ref h̃†/(ℜT_ref).    (29)

The fractional deviations ρ̃_g†/ρ_g,ref and P̃†/P_ref from the reference state are very small, of order µ† (v_max/c_s)² Ri.

It remains to specify v†. Using the conditions on h̃† stated above, we solve for the equilibrium (steady-state) solution to equation (23):

  v†_x = v†_z = 0,
  v†_y = −(3/2)Ω_K x + [µ†(z)/(µ†(z) + 1)] v_max.    (30)

In our reference frame rotating with the velocity of dust-free gas at R, the first term on the right side of (30) accounts for the standard Kepler shear, while the second term describes how dust, which adds to inertia but not pressure, speeds up the gas.

To µ† we add random perturbations

  ∆µ(x, y, z) = A(x, y) µ†(z)[cos(πz/2z_d) + sin(πz/2z_d)].    (31)

The amplitude A(x, y) is constructed in Fourier space so that each Fourier mode has a random phase and an amplitude inversely proportional to the horizontal wavenumber: Â ∝ k_⊥^{−1} = (k_x² + k_y²)^{−1/2}. Because our box sizes are scaled to z_max, our Fourier noise amplitudes are largest on scales comparable to the dust layer thickness. Thus those modes which are most likely to overturn the layer are given the greatest initial power. The perturbations are also chosen to be antisymmetric about the x-axis so that no extra energy is injected into the system. We take the root-mean-squared amplitude A_rms ≡ ⟨A²⟩^{1/2} of the perturbations to be 10^{-4} or 10^{-3}.

In summary, three input parameters µ_0, Ri, and v_max determine our isothermal equilibrium initial conditions (equations 24, 28, and 30).
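Although we do not write out Σ_d/Σ_g analytically, it is straightforward to evaluate by quadrature: the dust layer is so much thinner than the gas scale height that ρ_g is effectively constant across it, so Σ_d ≈ ρ_g(0) ∫µ† dz while Σ_g = (2π)^{1/2} H_g ρ_g(0). The sketch below (working in units where H_g = 1 and the midplane gas density is 1) evaluates this for µ_0 = 10, Ri = 0.1, v_max = 0.025 c_s, and recovers a near-solar bulk metallicity of ≈0.013, consistent with the value cited later for the overturning µ_0 = 10 run.

```python
import numpy as np

# Bulk metallicity Sigma_d/Sigma_g by quadrature, in units with H_g = 1
# and midplane gas density = 1. The dust layer (|z| <= zmax << H_g) is so
# thin that rho_g is nearly constant across it.
mu0, Ri, vmax_over_cs = 10.0, 0.1, 0.025

zd = np.sqrt(Ri) * vmax_over_cs                        # equation (25), units of H_g
zmax = np.sqrt(mu0 * (2.0 + mu0)) / (1.0 + mu0) * zd   # equation (26)

z = np.linspace(-zmax, zmax, 20001)
dz = z[1] - z[0]
mu = 1.0 / np.sqrt(1.0 / (1.0 + mu0)**2 + (z / zd)**2) - 1.0   # equation (24)

Sigma_d = np.sum(mu * np.exp(-z**2 / 2.0)) * dz   # dust column (mu -> 0 at edges)
Sigma_g = np.sqrt(2.0 * np.pi)                    # Gaussian gas column
print(Sigma_d / Sigma_g)   # ~0.013
```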
(While our initial conditions are isothermal, the temperature of the flow can change, because of adiabatic compression/expansion and because our artificial hyperviscosity dissipates the highest wavenumber disturbances; these temperature changes are fractionally tiny because the flow is highly subsonic.)

The equilibrium solution for µ(z) is then perturbed (equation 31) by a root-mean-squared fractional amount A_rms. The parameters of primary interest are µ_0 and Ri. For the remaining parameters v_max and A_rms we consider three possible combinations: (v_max, A_rms) = (0.025c_s, 10^{-4}) for our standard runs; (0.025c_s, 10^{-3}) to probe larger initial perturbations; and (0.05c_s, 10^{-4}) to assess the effect of a stronger radial pressure gradient.

Note that specifying µ_0 and Ri (and v_max, though this last variable is fixed for all of our standard runs) specifies the entire dust and gas vertical profiles, ρ_d(z) and ρ_g(z), and by extension the bulk height-integrated metallicity, Σ_d/Σ_g ≡ ∫ρ_d dz / ∫ρ_g dz. We do not give an explicit expression for Σ_d/Σ_g because it is cumbersome and not particularly revealing. The bulk metallicity is in some sense the most natural independent variable because its value is given by the background disk (for ways in which the bulk metallicity may change, e.g., by radial particle drifts, see CY10). We will plot our results in the space of µ_0, Ri, and Σ_d/Σ_g, keeping in mind that only two of these three variables are independent.

2.3. Code

We use the spectral, anelastic, shearing box code developed by Barranco & Marcus (2006) and modified by B09 to simulate well-coupled gas and dust. The code employs shearing periodic boundary conditions in r, periodic boundary conditions in φ, and closed lid boundaries in z; the vertical velocity v_z is required to vanish at the top and bottom of the box (z = ±L_z/2). Spectral methods approximate the solution to the fluid equations as a linear combination of basis functions.
The basis functions describe how the flow varies in space, and the coefficients of the functions are determined at every timestep. For each of the periodic dimensions, a standard Fourier basis is used, while for the vertical direction, Chebyshev polynomials are employed. Whereas in r and φ grid points are evenly spaced, the use of Chebyshev polynomials in z has the effect that vertical grid points are unevenly spaced; points are concentrated towards the top and bottom boundaries of the box, away from the midplane where the dust layer resides. Thus to resolve the dust layer vertically, we need to increase the number of vertical grid points N_z by an amount disproportionately large compared to the numbers of radial and azimuthal grid points N_r and N_φ. See §2.4 for further discussion.

Spectral codes have no inherent grid dissipation; energy is allowed to cascade down to the smallest resolved length scales through nonlinear interactions. To avoid an energy "pile-up" at the highest wavenumbers, we dissipate energy using an artificial hyperviscosity, given in §3.3.3 of Barranco & Marcus (2006).

Simulations satisfy the Courant-Friedrichs-Lewy (CFL) condition, which states that the CFL number, defined as the code timestep divided by the shortest advection time across a grid cell, be small. In the shearing coordinates in which the code works, that advection time is the cell dimension divided by the local velocity over and above the Keplerian shear, i.e., orbital velocities are subtracted off before evaluating the CFL number. All simulations reported in this paper are characterized by CFL numbers less than about 0.1.

2.4. Box Size and Numerical Resolution

Our standard box dimensions are (L_r, L_φ, L_z) = (6.4, 12.8, 8) z_max and the corresponding numbers of grid points are (N_r, N_φ, N_z) = (32, 64, 128).
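The sparse midplane sampling of the Chebyshev grid can be seen directly: Gauss-Lobatto collocation points cluster near the walls, so only about 20 of the 128 vertical points land inside the dust layer |z| ≤ z_max, versus 32 for a uniform grid. This is a sketch; the exact count depends on the Lobatto node convention assumed here.

```python
import numpy as np

# Chebyshev (Gauss-Lobatto) collocation points across a box of height
# Lz = 8*zmax, in units of zmax. Points cluster near the lids z = ±Lz/2.
Nz, Lz = 128, 8.0
j = np.arange(Nz)
z = (Lz / 2.0) * np.cos(np.pi * j / (Nz - 1))   # Lobatto nodes (assumed convention)

n_cheb = int(np.sum(np.abs(z) <= 1.0))   # points inside the dust layer
n_uniform = int(Nz * 2.0 / Lz)           # 32 for an evenly spaced grid
print(n_cheb, n_uniform)
```

The result is roughly consistent with the ~22 midplane points quoted below; the small difference comes from the particular node convention.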
By scaling our box lengths L_i to z_max and fixing the numbers of grid points N_i, we ensure that each standard simulation enjoys the same resolution (measured in grid points per physical length) regardless of Ri, µ_0, and v_max. The vertical extent of the dust layer between z = ±z_max is resolved by 22 grid points (this is less than [128/(8z_max)] × 2z_max = 32 because the Chebyshev-based vertical grid only sparsely samples the midplane). The radial and azimuthal directions are resolved by 10 grid points per 2z_max length. We choose our resolution in the vertical direction to be greater than that of the horizontal directions because the dust layer has finer scale structure in z: the dust layer becomes increasingly cuspy at the midplane as µ_0 increases. We prescribe the same resolution in the radial and azimuthal directions (L_φ/N_φ = L_r/N_r); experiments with different resolutions in r and φ generated spurious results.

Too small a box size can artificially affect the stability of the dust layer, because a given box can only support modes having integer numbers of wavelengths inside it. Small boxes may be missing modes that in reality overturn the layer. We verify that for all runs in which the dust layer overturns, the KH mode that most visibly disrupts the layer spans more than one azimuthal wavelength. Typically 3-5 wavelengths are discerned across the box. To more thoroughly test our standard choices for L_i, we study how systematic variations in box length affect how the instability develops. For this test, we adopt a fixed set of physical input parameters, (Ri, µ_0, v_max) = (0.1, 10, 0.025c_s), which should lead to instability (Chiang 2008). Our diagnostic is the time evolution of the vertical kinetic energy at the midplane: ⟨µ(t)v_z²(t)⟩/2, where the average is over all r and φ at fixed z = 0 and time t. We vary L_i and N_i in tandem to maintain the same resolution from run to run, thereby isolating the effect of box size.
Figure 1 shows how doubling one of the box dimensions while fixing the other two alters the time history of ⟨µv_z²⟩/2. Panel (a) demonstrates that our standard choice for L_z = 8z_max is sufficiently large because the curves for L_z = 8z_max and L_z = 16z_max practically overlap. Panels (b) and (c) show that our standard choices for L_φ = 12.8z_max and L_r = 6.4z_max are somewhat less adequate. The peak of the curve for (L_φ, N_φ) = (12.8z_max, 64) is delayed by two orbits compared to that for (L_φ, N_φ) = (25.6z_max, 128), and the curve for (L_r, N_r) = (6.4z_max, 32) peaks an orbit earlier than that for (L_r, N_r) = (12.8z_max, 64). Nevertheless these time differences are small compared to the total time to instability, about 10 orbits. Moreover, the errors point in opposite directions. Thus we expect our choices for L_φ and L_r to partially compensate for each other, so that any error due to our box size in calculating the time to instability will be less than ∼1 orbit.

We test how robust our results are to numerical resolution by re-running a few simulations at twice the normal resolution (doubling N_i while fixing L_i). Results at high resolution are given in §3.3. Every simulation is run for at least ten orbits. A typical run performed at our standard resolution takes approximately 2.5 wall-clock hours using 56 processors on the Purdue Steele cluster. A high-resolution run takes about 32 wall-clock hours.

3. RESULTS

In our standard simulations, we fix v_max and A_rms while systematically varying Ri and µ_0 from run to run. Our systematic variations of Ri and µ_0 correspond to systematic variations in Σ_d/Σ_g; recall that only two of the three parameters Ri, µ_0, and Σ_d/Σ_g are independent. For each µ_0 ∈ {0.3, 1, 3, 10} we adjust Ri until the threshold value Ri_crit dividing stable from unstable runs is determined to within 0.1 dex.
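The bookkeeping behind each stable/unstable verdict (the two criteria are detailed in §3.1) can be sketched as a small classifier. The growth-factor threshold below is an illustrative stand-in for "grows exponentially over several orbits," not the exact test applied to the simulation data.

```python
import numpy as np

def classify_run(ke_mid, mu_mid, growth_factor=100.0, mu_tol=0.15):
    """Classify a run from its midplane diagnostics (sketch).

    ke_mid: time series of <mu v_z^2>/2 at z = 0.
    mu_mid: time series of the r,phi-averaged midplane dust-to-gas ratio.
    """
    energy_grows = ke_mid[-1] > growth_factor * np.min(ke_mid)
    mu_changed = abs(mu_mid[-1] - mu_mid[0]) / mu_mid[0] > mu_tol
    if energy_grows and mu_changed:
        return "unstable"
    if energy_grows:
        return "marginally unstable"   # practically synonymous with unstable
    return "stable"

t = np.linspace(0.0, 10.0, 200)   # time in orbits
print(classify_run(1e-8 * np.exp(t), np.linspace(10.0, 4.0, 200)))    # overturns
print(classify_run(1e-8 * np.exp(-t), np.linspace(10.0, 9.5, 200)))   # settles
```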
Deciding by numerical simulation whether a given dust layer is stable or not is unavoidably subject to the finite duration of the simulation. We define our criteria for deciding stability in §3.1. Results are given in §3.2 and tested for robustness in §3.3.

3.1. Criteria for Stability

Stability is assessed by two quantities: the midplane vertical kinetic energy ⟨µv_z²⟩/2 as a function of t, where the average is performed over r and φ at fixed z = 0 and t; and the dust density profile ⟨µ⟩ as a function of z and t, where the average is performed over r and φ at fixed z and t. By definition, in an "unstable" run, ⟨µv_z²⟩/2 grows exponentially over several orbital periods, and ⟨µ⟩ deviates from its initial value µ† by more than 15%. "Stable" simulations satisfy neither criterion. Some runs are "marginally unstable" in that they satisfy the first but not the second criterion. At the end of the standard ten-orbit duration of a marginally unstable run, we find the kinetic energy continues to rise, suggesting that were the run to be extended for longer than ten orbits, the dust layer would eventually overturn. In every instance where we extend the duration of a marginally unstable run, we verify that this is the case. Thus "marginally unstable" is practically synonymous with "unstable."

Examples of unstable and stable runs are shown in Figure 2. In the unstable simulation, after t ≈ 6 orbits, the kinetic energy rises exponentially. At t ≈ 9 orbits, the dust layer overturns and the midplane dust-to-gas ratio falls by more than 60%. By contrast, in the stable simulation, after an initial adjustment period lasting ∼3 orbits during which the midplane value of ⟨µ⟩ decreases by 10%, the kinetic energy drops by orders of magnitude to a nearly constant value and shows no evidence of further growth. Figure 3 shows the evolution of |v_i(z)| (i = r, φ, z) and ⟨µ(z)⟩ for the same unstable run of Figure 2.
The velocity data are sampled at a single (x, y) position at the center of our simulation box. The radial and vertical velocities |v_r| and |v_z|, initially zero, grow to become comparable with the shearing velocity |v_φ|. Figure 4 displays corresponding snapshots of µ(y, z), taken at a single radius x near the center of our box. Though the data in Figures 3 and 4 are sampled at particular radial locations in our box, we verify that the instability develops similarly at all locations, as it should, unlike the ZEUS-based simulations of Chiang (2008).

3.2. Stability as a Function of Ri, µ_0, and Σ_d/Σ_g

Figure 5 maps the stable and unstable regions in (Ri, µ_0) space, for fixed v_max = 0.025c_s and A_rms = 10^{-4}. Figures 6 and 7 portray the same data using alternate but equivalent projections of parameter space: (Ri, Σ_d/Σ_g) and (µ_0, Σ_d/Σ_g), respectively. These plots demonstrate that there is no unique value of Ri_crit. Rather Ri_crit is a function of µ_0, or equivalently a function of Σ_d/Σ_g. For bulk metallicities Σ_d/Σ_g near the solar value, Ri_crit is found to be close to the classical value of 1/4. But as Σ_d/Σ_g decreases below the solar value, Ri_crit shrinks to ∼0.01 or even lower. A least-squares fit to the four midpoints (evaluated in log space) in Figure 5 dividing neighboring stable points (in black) and unstable points (in red or red outlined with black) yields Ri_crit ∝ µ_0^{1.0}. This same fit, projected into metallicity space, is shown in Figures 6 and 7; in metallicity space the stability boundary is not a power law.

As Figure 7 attests, dust-to-gas ratios µ_0 as high as ∼8 can be attained in disks of solar metallicity without triggering a shear instability: see the intersection between the dashed curve fitted to our standard resolution data, and the dotted line representing solar metallicity. This intersection occurs at µ_0 ≈ 7.
Were we to re-fit the dashed curve using the higher resolution data represented by triangles, the intersection with solar metallicity would occur at µ_0 closer to 8. A dust-to-gas ratio of µ_0 ≈ 8 is within a factor of ∼4 of the Toomre threshold for gravitational fragmentation in a minimum-mass disk (CY10; §4). We can achieve the Toomre threshold by simply allowing for a gas disk that is ∼4× more massive than the minimum-mass nebula. Alternatively we can enrich the disk in metals to increase Σ_d/Σ_g. Extrapolating the boundary of stability (dashed curve) in Figure 7 to higher Σ_d/Σ_g suggests that the Toomre threshold µ_0 ≈ 30 could be achieved for minimum-mass disks having ∼3× the solar metallicity. The sensitivity to metallicity is also exemplified by Figure 2. For the same µ_0 = 10, the dust layer based on a near-solar metallicity of Σ_d/Σ_g = 0.013 overturns, whereas one derived from a supersolar metallicity of Σ_d/Σ_g = 0.030 remains stable.

3.3. Tests at Higher Resolution, Higher A_rms, and Higher v_max

We test how robust our determination of Ri_crit is to numerical resolution by redoing our simulations for µ_0 = 0.3 and 10 with double the number of grid points in each dimension. The results are overlaid as blue triangles in Figures 5, 6, and 7. At µ_0 = 0.3, increasing the resolution does not change Ri_crit from its value of 0.009. At µ_0 = 10, Ri_crit shifts downward from 0.3 to 0.2. Although we have not strictly demonstrated convergence of our results with resolution, and although high resolution data at other values of µ_0 are missing, it seems safe to conclude that the slope of the stability boundary in Ri-µ_0 space is close to, but decidedly shallower than, linear.

Fig. 2.- In the unstable case, the layer overturns and mixes dust-rich gas with dust-poor gas, causing the dust-to-gas ratio at the midplane to drop by a factor of ∼3 after 10 orbits (top left). As the instability unfolds, the vertical kinetic energy amplifies exponentially from t ≈ 5-10 orbits (top right). At fixed µ_0, the layer is stabilized by increasing the Richardson number or equivalently the height-integrated metallicity Σ_d/Σ_g. In the stable run, the dust profile changes by less than 15% (bottom left) while the kinetic energy, after dropping precipitously, shows no indication of growing (bottom right). The two runs shown use v_max = 0.025c_s and A_rms = 10^{-4}.

We also test the sensitivity of our results to A_rms. Increasing A_rms by an order of magnitude to 10^{-3} shifts Ri_crit upward by ≲0.2 dex at µ_0 < 1, but leaves Ri_crit unchanged at larger µ_0 (Figure 8). B09 also reported some sensitivity to A_rms. Tests where v_max was doubled to 0.05c_s reveal no change in Ri_crit (data not shown).

4. SUMMARY AND DISCUSSION

Where a protoplanetary disk is devoid of turbulence intrinsic to gas, dust particles settle toward the midplane, accumulating in a sublayer so thin and so dense that the dust-gas mixture becomes unstable. If the first instability to manifest is self-gravitational, dust particles are drawn further together, possibly spawning planetesimals. If instead the layer is first rendered unstable by a Kelvin-Helmholtz-type shearing instability (KHI), the resultant turbulence prevents dust from settling further, pre-empting gravitational collapse. In this paper we investigated the conditions which trigger the KHI, hoping to find a region of parameter space where the KHI might be held at bay so that planetesimals can form by self-gravity.

A fundamental assumption underlying our work is that turbulence intrinsic to gas can, in some regions of the disk, be neglected. There is some consensus that near disk midplanes, in a zone extending from ∼1 to at least ∼10 AU from the parent star, gas may be too poorly ionized to sustain magnetohydrodynamic turbulence (Ilgner & Nelson 2006; Bai & Goodman 2009; Turner et al. 2010).
Presumably if the magnetorotational instability (e.g., Balbus 2009) cannot operate at the midplane, disk gas there is laminar, pending the uncertain ability of magnetically active surface layers to stir the disk interior (e.g., Turner et al. 2010), or the discovery of a purely hydrodynamic form of turbulence (Lithwick 2009). To get a sense of how laminar disk gas must be to permit dust sublayers to form, Chiang & Youdin (2010) compared the height to which dust particles are stirred in an "alpha"-turbulent disk to the thickness of the sublayer (6). They estimated that the former is smaller than the latter when the dimensionless turbulent diffusivity α ≲ 3 × 10^{-4} (Ω_K t_stop) (r/AU)^{4/7} for t_stop < Ω_K^{-1}. To place this requirement in context, α values for magnetically active zones are typically quoted to be greater than ∼10^{-3}. Whether magnetically dead zones are sufficiently passive for dust to settle into sublayers remains an outstanding question.

Modulo this concern, we studied the stability of dust layers characterized by spatially constant Richardson numbers Ri using a three-dimensional, spectral, anelastic, shearing box code (Barranco & Marcus 2006) that models gas and dust as two perfectly coupled fluids (Barranco 2009). We found that stability is not characterized by a single critical Richardson number. Rather, the value of Ri_crit distinguishing layers that overturn from those that do not is a nearly linear function of the midplane dust-to-gas ratio µ_0 (Figure 5). Dust-rich sublayers having µ_0 ≈ 10 have Ri_crit ≈ 0.2, near the canonical value of 1/4, while dust-poor sublayers having µ_0 ≈ 0.3 (still orders of magnitude dustier than well-mixed gas and dust at solar abundances) have Ri_crit as low as 0.009. Previous studies (e.g., Sekiya 1998; Youdin & Shu 2002; Youdin & Chiang 2004) assumed a universal critical Richardson number of 1/4.
This popular assumption seems correct only for dust-rich layers having µ0 so large they are on the verge of gravitational instability. For less dusty midplanes, the assumption appears to be incorrect. Our numerical results are roughly consistent with those of Chiang (2008), who also found evidence that Ri_crit decreases with decreasing µ0. Comparing his Table 2 with our Figure 5 shows that his constraints on Ri_crit are, for the most part, compatible with those presented here, for the range µ0 ≈ 0.3-10 where our respective data overlap. Our findings supersede those of Chiang (2008) insofar as we have explored parameter space more finely and systematically, at greater and more uniform resolution, with numerical methods better suited for subsonic flows. Our results turn out to be consistent with the classical Richardson criterion, which states only that Ri < 1/4 is necessary, not sufficient, for instability, even though the criterion as derived by Miles (1961) applies only to two-dimensional flows, which our dust layers are not. Our simulations demonstrate that the criterion can still serve as a useful guide for assessing stability in disks having bulk metallicities ranging from subsolar to slightly supersolar values, with the proviso that the actual Richardson number dividing KH-stable from KH-unstable flows, while < 1/4, is generally not equal to 1/4.

Why isn't the Richardson criterion for instability sufficient in rotating dust disks? The criterion considers the competition between the destabilizing vertical shear and the stabilizing influence of buoyancy, which causes fluid parcels to oscillate about their equilibrium positions at the Brunt-Väisälä frequency. However, there exists another stabilizing influence, ignored by the Richardson number, provided by the radial Kepler shear (Ishitsu & Sekiya 2003). In the limit µ0 ≪ 1, the Brunt frequency (4) becomes negligible relative to the Kepler shearing frequency (7), suggesting stability now depends on the competition between the destabilizing vertical shear and stabilizing radial Kepler shear. We expect the flow to be stable as long as the Kepler shear can wind up unstable eigenmodes to higher radial wavenumbers before their amplitudes grow large enough to trigger nonlinear effects. This suggests that we replace the Richardson number with a "shearing number," defined by analogy as the square of the ratio of the Kepler shearing frequency to the vertical shearing frequency:

Sh ≡ |∂Ω/∂ln r|² / (∂v_φ/∂z)² ∝ (∆z/∆v_φ)² ∝ Ri (1 + µ0)/µ0    (32)

where we have used (3) and (6).

Fig. 5.-Mapping the boundary of stability in the space of initial Ri and µ0. Red points correspond to unstable dust layers, whose dust-to-gas ratios µ change by more than 15%, and whose vertical kinetic energies grow exponentially, within the 10-orbit duration of the simulation. Black points mark stable dust layers satisfying neither criterion. Red points outlined in black signify marginally unstable layers, whose kinetic energies rise but whose dust-to-gas ratios change by less than 15%; these are essentially equivalent to red points without outlines, because every marginally unstable run that we extend beyond 10 orbits eventually becomes fully unstable. Runs performed at twice the standard resolution appear as triangles. Downward pointing triangles symbolize stable runs, upward triangles are unstable, and upward pointing triangles in black outline are marginally unstable. All simulations use Arms = 10⁻⁴ and vmax = 0.025cs. There is no unique value for the critical Richardson number separating stable from unstable dust layers. Rather, a least-squares fit to the data from our standard resolution runs yields Ri_crit ∝ µ^1.0, shown as a dashed line. The classical boundary Ri_crit = 0.25 is plotted as a dotted line.
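The scaling in equation (32) and the constant-Sh argument can be made concrete with a short sketch. The helper names are ours, and the overall normalization of Sh is left arbitrary, since only proportionalities are used in the text:

```python
def shearing_number(ri, mu0):
    """Shearing number up to an overall constant (eq. 32):
    Sh ∝ Ri * (1 + mu0) / mu0."""
    return ri * (1.0 + mu0) / mu0

def ri_crit_from_constant_sh(mu0, sh_marginal):
    """Invert Sh = const for the marginally stable Richardson number:
    Ri_crit = sh_marginal * mu0 / (1 + mu0),
    which reduces to Ri_crit ∝ mu0 for mu0 << 1 (eq. 33)."""
    return sh_marginal * mu0 / (1.0 + mu0)

# For dust-poor layers Ri_crit scales ~linearly with mu0
# (a tenfold drop in mu0 gives close to a tenfold drop in Ri_crit):
ratio = ri_crit_from_constant_sh(0.01, 1.0) / ri_crit_from_constant_sh(0.001, 1.0)
print(ratio)  # close to 10
```

Note that for µ0 ≫ 1 this inversion would predict Ri_crit flattening to a constant, which, as the text goes on to discuss, the simulations do not fully bear out.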
By assuming Sh is constant for marginally stable dust profiles, we arrive at the relation

Ri_crit ∝ µ0 for µ0 ≪ 1.    (33)

What is surprising is that this trend, although expected to hold only for µ0 ≪ 1, appears to hold approximately for all µ0, according to our simulation results in Figure 5. For µ0 ≳ 1, we would have expected from (32) that Ri_crit asymptote to a constant; but it does not. Our higher resolution runs do suggest the stability curve slightly flattens at µ0 ≈ 10, but such deviations seem too small to be fully explained using arguments relying purely on the shearing number. To explain the observed trend, we might co-opt the methods of Ishitsu & Sekiya (2003), who linearized and numerically integrated the 3D equations of motion for the dust layer. For their particular choice of background vertical density profile, they solved for the maximum growth factors for the most unstable KH modes (see also Knobloch & Spruit 1985 who considered the axisymmetric problem). We would need to replace their assumed profile with our profiles having spatially constant Ri. Perhaps our numerically determined stability curve Ri_crit(Σd/Σg) corresponds to a locus of fixed maximum growth factor.

Gravitational instability occurs on a dynamical time when the dust layer's Toomre Q ≈ M/[2πr³ρg(1 + µ0)] reaches unity (Toomre 1964; Goldreich & Lynden-Bell 1965). For ρg given by the minimum-mass solar nebula, this occurs when µ0 ≈ 30, fairly independently of r (Chiang & Youdin 2010). Of course in more massive gas disks (greater ρg), the requirement on µ0 is proportionately lower. Figure 7 shows that for disks having bulk metallicities Σd/Σg equal to the solar value of 0.015, the dusty sublayer can achieve µ0 ≈ 8 before it becomes KH unstable. Taken at face value, such a marginally KH-stable subdisk, embedded in a gas disk having 30/8 ≈ 4 times the mass of the minimum-mass solar nebula, would undergo gravitational instability on the fastest timescale imaginable, the dynamical time.

Fig. 7.-Mapping the boundary of stability in the space of midplane dust-to-gas ratio µ0 and bulk (height-integrated) dust-to-gas ratio Σd/Σg. The data are identical to those in Figure 5. The labeling convention is also the same, except that the triangles representing high-resolution runs have adjusted their orientation so that they point towards the stability boundary. The same least-squares fit from Figure 5 is projected here as a dashed curve. Solar metallicity Σd/Σg = 0.015 (Lodders 2003) is indicated by a dotted line. A minimum-mass solar nebula requires µ0 ≈ 30 for gravitational instability to ensue on a dynamical time (CY10). Extrapolating the boundary of stability to µ0 ≈ 30 suggests that metallicities roughly ∼3 times solar would be required for dynamical gravitational instability in a minimum-mass disk. The required degree of metal enrichment would be proportionately less in more massive disks.

(Continuation of the Fig. 8 caption: same as Figure 5, except that all data correspond to Arms = 10⁻³. For comparison with Arms = 10⁻⁴, the same best-fit line of Figure 5 is reproduced here. Not much changes, except that Ri_crit shifts upward by 0.2 dex at µ0 = 0.3.)
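The arithmetic behind the "30/8 ≈ 4 times the minimum-mass solar nebula" estimate is simple enough to script. A minimal sketch, assuming only the µ0 values quoted in the text and that the required µ0 scales inversely with gas density (the function name is ours):

```python
def disk_mass_multiple_for_gi(mu0_required_mmsn=30.0, mu0_max_stable=8.0):
    """Gas disk mass, in units of the minimum-mass solar nebula (MMSN),
    needed for a marginally KH-stable sublayer to cross the Toomre
    threshold. Q ~ 1 requires mu0 ~ 30 in the MMSN; since the required
    mu0 scales as 1/rho_g, a disk (mu0_required / mu0_max) times the
    MMSN mass suffices."""
    return mu0_required_mmsn / mu0_max_stable

# Solar-metallicity sublayers reach mu0 ~ 8 before going KH unstable:
print(disk_mass_multiple_for_gi())  # 3.75, i.e. ~4x the MMSN
```

The same one-liner reproduces the limiting case: a sublayer that could reach µ0 ≈ 30 while remaining KH stable would need no extra gas mass at all.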
The case that planets form from disks several times more massive than the minimum-mass solar nebula is plausible (e.g., Goldreich et al. 2004; Lissauer et al. 2009). An alternate way of crossing the Toomre threshold is to allow the bulk metallicity Σd/Σg to increase above the solar value of 0.015. Extrapolating the boundary of stability in Figure 7 to µ0 ≈ 30 suggests that metallicities roughly ∼3 times solar would be required for dynamical gravitational instability in a minimum-mass disk. There are several proposed ways to achieve supersolar metallicities in some portions of the disk, among them radial pileups (Youdin & Shu 2002) or dissipative gravitational instability (Ward 1976; Coradini et al. 1981; Ward 2000; Youdin 2005a; Youdin 2005b; see also the introduction of Goodman & Pindor 2000). None of the ways we have outlined for achieving gravitational instability rely on the streaming instability or turbulent concentration of particles, mechanisms that we have criticized in §1.3. Nevertheless our scenarios may be too optimistic because all our dust profiles are predicated on the assumption of a spatially constant Ri. This assumption tends to generate strong density cusps at the midplane that might not be present in reality. In a forthcoming paper we will relax the assumption of spatially constant Ri and measure the maximum µ0 attainable, as a function of metallicity Σd/Σg, by simulating explicitly the settling of dust towards the midplane.

Fig. 1.-Testing box sizes at fixed numerical resolution. For our standard box, (Lr, Lφ, Lz) = (6.4, 12.8, 8)zmax and (Nr, Nφ, Nz) = (32, 64, 128). In each panel we vary one box dimension while keeping the other two dimensions fixed at their standard values. In the top panel we vary Lz at fixed resolution Nz/Lz. In the middle and bottom panels, Lφ and Lr are varied in turn. All simulations in this figure have µ0 = 10, Ri = 0.1, vmax = 0.025cs, Arms = 10⁻⁴, and use code units ρg,ref(z = 0) = ΩK = Hg = 1.
Doubling the box dimensions from our standard values changes when the average vertical kinetic energy peaks by only a few orbits at most. The average is performed over all r and φ at fixed z = 0.

Fig. 2.-Sample unstable (top) and stable (bottom) dust layers.

Fig. 3.-Snapshots of absolute values of the three velocity components (top panels) and horizontally averaged dust-to-gas ratio (bottom panels), both as functions of height, at three instants in time. For this unstable run, (Ri, µ0, vmax, Arms) = (0.1, 10, 0.025cs, 10⁻⁴). Velocities are taken from a grid point near the middle of the box. The vertical shear ∂vφ/∂z inside the dust layer weakens with time as dust is more uniformly mixed with gas, and as the radial and vertical velocities grow at the expense of the azimuthal velocity.

Fig. 4.-Snapshots of µ(y, z), sampled at r = R (x = 0; the central slice of the simulation box) for the same unstable run shown in Figure 3. The box size parameters are (Lr, Lφ, Lz) = (0.05, 0.1, 0.063)Hg, larger than what is shown in the figure, which zooms in for more detail.

Fig. 5.-Mapping the boundary of stability in the space of initial Ri and µ0.

Fig. 6.-Mapping the boundary of stability in the space of initial Ri and bulk (height-integrated) dust-to-gas ratio Σd/Σg. The data are identical to those in Figure 5. The labeling convention is also the same, except that the triangles representing high-resolution runs have adjusted their orientation so that they point towards the stability boundary. The same least-squares fit from Figure 5 is projected here as a dashed curve. Solar metallicity Σd/Σg = 0.015 (Lodders 2003) is indicated by a dotted line. The critical value Ri_crit dividing stable from unstable dusty subdisks trends with metallicity. This trend was only hinted at in the data of C08.

Fig. 8.-How the stability boundary changes with stronger initial perturbations.
The assumption of constant T_ref implies that the reference gas density ρg,ref and pressure P_ref have Gaussian vertical distributions in z with scale height Hg = √(ℜ T_ref)/ΩK. For simplicity we neglect the radial density gradient (∂ρg,ref/∂r = 0), as did B09. This reference state should not be confused with our equilibrium states of interest (§2.2), which do shear and which do contain dust. The reference state merely serves as a fiducial.

[...] ref and the pressure-like enthalpy h ≡ P̃/ρg,ref, and henceforth for convenience drop all tildes on ρd, µ, and v (but not the other variables related to gas). The rightmost equalities of [...]

This order-of-magnitude expression for the dust layer thickness, and the related equation (3), which approximates the vertical shear, are each smaller than their counterparts given by Youdin & Shu (2002, page 499, first full paragraph) by a factor of (1 + µ). This is because Youdin & Shu (2002) evaluate quantities deep inside the layer, within a density cusp at the midplane, whereas we are interested in quantities averaged across the entire layer. The difference does not change either our conclusions or theirs.

Throughout this paper we alternate freely between subscripts (x, y, z) and (r, φ, z).

We thank Daniel Lecoanet, Eve Ostriker, Prateek Sharma, Jim Stone, and Yanqin Wu for discussions. An anonymous referee provided a thoughtful and encouraging report that helped to place our work in a broader context. This research was supported by the National Science Foundation, in part through TeraGrid resources provided by Purdue University under grant number TG-AST090079.

REFERENCES

Bai, X. & Goodman, J. 2009, ApJ, 701, 737
Balbus, S. A. 2009, ArXiv e-prints
Barranco, J. A. 2009, ApJ, 691, 907 (B09)
Barranco, J. A. & Marcus, P. S. 2000, in Studying Turbulence Using Numerical Simulation Databases, 8: Proceedings of the 2000 Summer Program, 97
Barranco, J. A. & Marcus, P. S. 2005, ApJ, 623, 1157
-. 2006, Journal of Computational Physics, 219, 21
Blum, J. & Wurm, G. 2008, ARA&A, 46, 21
Chandrasekhar, S. 1981, Hydrodynamic and Hydromagnetic Stability, 1st edn. (Dover Publications, New York)
Chiang, E. 2008, ApJ, 675, 1549 (C08)
Chiang, E. & Youdin, A. 2010, Annual Reviews of Earth and Planetary Science, 38 (CY10)
Coradini, A., Magni, G., & Federico, C. 1981, A&A, 98, 173
Cuzzi, J. N., Dobrovolskis, A. R., & Champney, J. M. 1993, Icarus, 106, 102
Cuzzi, J. N., Hogan, R. C., & Shariff, K. 2008, ApJ, 687, 1432
Drazin, P. G. & Reid, W. H. 2004, Hydrodynamic Stability, 2nd edn. (Cambridge University Press, Cambridge)
Eaton, J. K. & Fessler, J. R. 1994, International Journal of Multiphase Flow Supplemental, 20, 169
Gammie, C. F. 1996, ApJ, 462, 725
Gilman, P. A. & Glatzmaier, G. A. 1981, ApJS, 45, 335
Goldreich, P., Lithwick, Y., & Sari, R. 2004, ARA&A, 42, 549
Goldreich, P. & Lynden-Bell, D. 1965, MNRAS, 130, 125
Goldreich, P. & Ward, W. R. 1973, ApJ, 183, 1051
Gómez, G. C. & Ostriker, E. C. 2005, ApJ, 630
Goodman, J. & Pindor, B. 2000, Icarus, 148, 537
Gough, D. O. 1969, Journal of Atmospheric Sciences, 26, 448
Hogan, R. C. & Cuzzi, J. N. 2007, Phys. Rev. E, 75, 056305
Ilgner, M. & Nelson, R. P. 2006, A&A, 445, 223
Ishitsu, N. & Sekiya, M. 2003, Icarus, 165, 181
Johansen, A., Henning, T., & Klahr, H. 2006, ApJ, 643, 1219
Johansen, A., Oishi, J. S., Mac Low, M., Klahr, H., Henning, T., & Youdin, A. 2007, Nature, 448, 1022
Johansen, A., Youdin, A., & Mac Low, M. 2009, ApJ, 704, L75
Knobloch, E. & Spruit, H. C. 1985, Geophysical and Astrophysical Fluid Dynamics, 32, 197
-. 1986, A&A, 166, 359
Lecoanet, D., Zweibel, E. G., Townsend, R. H. D., & Huang, Y. 2010, ApJ, 712, 1116
Lissauer, J. J., Hubickyj, O., D'Angelo, G., & Bodenheimer, P. 2009, Icarus, 199, 338
Lithwick, Y. 2009, ApJ, 693, 85
Lodders, K. 2003, ApJ, 591, 1220
Maxey, M. R. 1987, J. Fluid Mech., 174, 441
Miles, J. W. 1961, Journal of Fluid Mechanics, 10, 496
Nakagawa, Y., Sekiya, M., & Hayashi, C. 1986, Icarus, 67, 375
Ogura, Y. & Phillips, N. A. 1962, Journal of Atmospheric Sciences, 19
Safronov, V. S. 1969, Evolution of the Protoplanetary Cloud and Formation of the Earth and Planets (IPST Jerusalem)
Sekiya, M. 1998, Icarus, 133, 298
Toomre, A. 1964, ApJ, 139, 1217
Turner, N. J., Carballido, A., & Sano, T. 2010, ApJ, 708, 188
Ward, W. R. 1976, in Frontiers of Astrophysics, 1-40
Ward, W. R. 2000, On Planetesimal Formation: The Role of Collective Particle Behavior, ed. Canup, R. M., Righter, K., et al., 75-84
Weidenschilling, S. J. 1980, Icarus, 44, 172
Wilner, D. J., D'Alessio, P., Calvet, N., Claussen, M. J., & Hartmann, L. 2005, ApJ, 626, L109
Youdin, A. N. 2005a, ArXiv Astrophysics e-prints
-. 2005b, ArXiv Astrophysics e-prints
Youdin, A. N. & Chiang, E. I. 2004, ApJ, 601, 1109
Youdin, A. N. & Goodman, J. 2005, ApJ, 620, 459
Youdin, A. N. & Shu, F. H. 2002, ApJ, 580, 494
[]
[ "Bipolaronic nature of the pseudogap in (TaSe 4 ) 2 I revealed via weak photoexcitation", "Bipolaronic nature of the pseudogap in (TaSe 4 ) 2 I revealed via weak photoexcitation" ]
[ "Yingchao Zhang \nDepartment of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA\n", "Tika Kafle \nDepartment of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA\n", "Wenjing You \nDepartment of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA\n", "Xun Shi \nDepartment of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA\n", "Lujin Min \nMaterials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA\n", "Huaiyu ", "Hugo Wang \nMaterials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA\n", "Na Li \nDepartment of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA\n", "Venkatraman Gopalan \nMaterials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA\n", "Kai Rossnagel \nInstitute of Experimental and Applied Physics\nKiel University\nD-24098KielGermany\n\nDeutsches Elektronen-Synchrotron DESY\nRuprecht Haensel Laboratory\nD-22607HamburgGermany\n", "Lexian Yang \nDepartment of Physics\nState Key Laboratory of Low Dimensional Quantum Physics\nTsinghua University\n100084BeijingChina\n\nFrontier Science Center for Quantum Information\n100084BeijingChina\n", "Zhiqiang Mao \nMaterials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA\n\nDepartment of Physics\nThe Pennsylvania State University\n16802University ParkPAUSA\n", "Rahul Nandkishore \nDepartment of Physics and Center for Theory of Quantum Matter\nUniversity of Colorado Boulder\n80309BoulderCOUSA\n", "Henry Kapteyn \nDepartment of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA\n", "Margaret Murnane \nDepartment of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA\n" ]
[ "Department of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA", "Department of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA", "Department of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA", "Department of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA", "Materials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA", "Materials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA", "Department of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA", "Materials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA", "Institute of Experimental and Applied Physics\nKiel University\nD-24098KielGermany", "Deutsches Elektronen-Synchrotron DESY\nRuprecht Haensel Laboratory\nD-22607HamburgGermany", "Department of Physics\nState Key Laboratory of Low Dimensional Quantum Physics\nTsinghua University\n100084BeijingChina", "Frontier Science Center for Quantum Information\n100084BeijingChina", "Materials Research Institute\nDepartment of Materials Science & Engineering\nThe Pennsylvania State University\n16802University ParkPAUSA", "Department of Physics\nThe Pennsylvania State University\n16802University ParkPAUSA", "Department of Physics and Center for Theory of Quantum Matter\nUniversity of Colorado Boulder\n80309BoulderCOUSA", "Department of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA", "Department of Physics\nJILA\nUniversity of Colorado\nNIST\n80309BoulderCOUSA" ]
[]
The origin of the pseudogap in many strongly correlated materials has been a longstanding puzzle. Here, we uncover which many-body interactions underlie the pseudogap in quasi-onedimensional (quasi-1D) material (TaSe4)2I by weak photo-excitation of the material to partially melt the ground state order and thereby reveal the underlying states in the gap. We observe the appearance of both dispersive and flat bands by using time-and angle-resolved photoemission spectroscopy. We assign the dispersive band to a single-particle bare band, while the flat band to a collection of single-polaron sub-bands. Our results provide direct experimental evidence that many-body interactions among small Holstein polarons i.e., the formation of bipolarons, are primarily responsible for the pseudogap in (TaSe4)2I. Recent theoretical studies of the Holstein model support the presence of such a bipolaron-to-polaron crossover. We also observe dramatically different relaxation times for the excited in-gap states in (TaSe4)2I (~600 fs) compared with another quasi-1D material Rb0.3MoO3 (~60 fs), which provides a new method for distinguishing between pseudogaps induced by polaronic or Luttinger-liquid many-body interactions.
null
[ "https://arxiv.org/pdf/2203.05655v1.pdf" ]
247,411,145
2203.05655
724914c56a488da70c39b22ce130c210bd928c8b
Bipolaronic nature of the pseudogap in (TaSe 4 ) 2 I revealed via weak photoexcitation Yingchao Zhang Department of Physics JILA University of Colorado NIST 80309BoulderCOUSA Tika Kafle Department of Physics JILA University of Colorado NIST 80309BoulderCOUSA Wenjing You Department of Physics JILA University of Colorado NIST 80309BoulderCOUSA Xun Shi Department of Physics JILA University of Colorado NIST 80309BoulderCOUSA Lujin Min Materials Research Institute Department of Materials Science & Engineering The Pennsylvania State University 16802University ParkPAUSA Huaiyu Hugo Wang Materials Research Institute Department of Materials Science & Engineering The Pennsylvania State University 16802University ParkPAUSA Na Li Department of Physics JILA University of Colorado NIST 80309BoulderCOUSA Venkatraman Gopalan Materials Research Institute Department of Materials Science & Engineering The Pennsylvania State University 16802University ParkPAUSA Kai Rossnagel Institute of Experimental and Applied Physics Kiel University D-24098KielGermany Deutsches Elektronen-Synchrotron DESY Ruprecht Haensel Laboratory D-22607HamburgGermany Lexian Yang Department of Physics State Key Laboratory of Low Dimensional Quantum Physics Tsinghua University 100084BeijingChina Frontier Science Center for Quantum Information 100084BeijingChina Zhiqiang Mao Materials Research Institute Department of Materials Science & Engineering The Pennsylvania State University 16802University ParkPAUSA Department of Physics The Pennsylvania State University 16802University ParkPAUSA Rahul Nandkishore Department of Physics and Center for Theory of Quantum Matter University of Colorado Boulder 80309BoulderCOUSA Henry Kapteyn Department of Physics JILA University of Colorado NIST 80309BoulderCOUSA Margaret Murnane Department of Physics JILA University of Colorado NIST 80309BoulderCOUSA Bipolaronic nature of the pseudogap in (TaSe 4 ) 2 I revealed via weak photoexcitation 1 The origin of the pseudogap in many 
strongly correlated materials has been a longstanding puzzle. Here, we uncover which many-body interactions underlie the pseudogap in quasi-onedimensional (quasi-1D) material (TaSe4)2I by weak photo-excitation of the material to partially melt the ground state order and thereby reveal the underlying states in the gap. We observe the appearance of both dispersive and flat bands by using time-and angle-resolved photoemission spectroscopy. We assign the dispersive band to a single-particle bare band, while the flat band to a collection of single-polaron sub-bands. Our results provide direct experimental evidence that many-body interactions among small Holstein polarons i.e., the formation of bipolarons, are primarily responsible for the pseudogap in (TaSe4)2I. Recent theoretical studies of the Holstein model support the presence of such a bipolaron-to-polaron crossover. We also observe dramatically different relaxation times for the excited in-gap states in (TaSe4)2I (~600 fs) compared with another quasi-1D material Rb0.3MoO3 (~60 fs), which provides a new method for distinguishing between pseudogaps induced by polaronic or Luttinger-liquid many-body interactions. Introduction The absence of a clear Fermi edge along with the existence of a pseudogap in a broad class of materials including cuprates, colossal magnetoresistance manganites and quasi-one-dimensional (quasi-1D) materials has been a long standing and important puzzle. For example, it is believed to play a vital role in a series of strongly correlated phenomena including high-temperature superconductivity (1)(2)(3)(4). Strong correlations in combination with multiple mechanisms such as polaronic interactions (1,3), Luttinger-liquid behavior (4,5), Efros-Shklovski effect (6,7) and charge density wave (CDW) fluctuations (8) can be responsible for opening a pseudogap. 
Determining the dominant mechanism leading to the formation of a pseudogap is challenging, partly due to the lack of characteristic features to distinguish between different mechanisms. This is the case in many conventional static equilibrium measurements such as angle-resolved photoemission spectroscopy (ARPES), electrical transport and optical conductivity (3,5,9,10). These techniques share the characteristic of probing only the electronic bands close to the ground state. For instance, many potential mechanisms are expected to give rise to fine features in the ARPES spectral function in the pseudogap i.e. a quasi-particle peak and polaron sub-bands (11,12). However, these features are often difficult to observe under equilibrium conditions because of the strongly suppressed spectral weight within the pseudogap. Past work demonstrated that the timescale for melting the order can yield useful information for distinguishing the dominant interactions in strongly-coupled materials (13). Here we use weak laser excitation to partially melt the ground state order and reveal the underlying states in the pseudogap, essentially driving a transient crossover between two different many-body regimes. Using time-resolved ARPES, we measure both the energy-momentum distribution and characteristic formation and relaxation times of these emergent excited in-gap states, that provide insight into the dominant interactions that underlie the ground state. This approach provides clear signatures of the nature of the manybody interactions leading to a pseudogap, making it possible to distinguish pseudogaps induced by the formation of bipolarons (9), or the formation of a Luttinger liquid (5). Quasi-1D materials are excellent platforms for observing strongly correlated phenomena such as CDW order (14)(15)(16) and Luttinger liquid behavior (4,5) since they exhibit a reduced phase space for scattering and less screening. 
Recently, quasi-1D (TaSe4)2I has attracted much attention since it may become an axion insulator when the Weyl points are modified by CDW order (17,18). However, to date no clear evidence of Weyl points has been observed, partly due to the missing spectral weight at the Fermi surface of (TaSe4)2I. Compared to 3D materials, the Fermi surface in quasi-1D materials is more susceptible to Peierls interactions, which open a CDW gap at the Fermi level. However, even in the normal phase above the CDW transition temperature, a clear Fermi edge is still absent in most quasi-1D materials -including (TaSe4)2I. In some quasi-1D materials such as Li0.9Mo6O17 and K0.3MoO3 (4,5), the opening of a pseudogap in the normal phase could be attributed to the formation of a Luttinger liquid. CDW and magnetic fluctuations could also reduce the spectral weight at the Fermi surface, especially when the scattering phase-space is reduced in 1D (8,9). However, these mechanisms alone cannot explain the ARPES spectra of (TaSe4)2I observed above and below the CDW phase transition. Polaron formation has been proposed as a mechanism for the pseudogap opening in colossal magnetoresistance manganites (1). A polaron consists of a carrier dressed by a cloud of virtual phonons (19,20). In (TaSe4)2I(3), the strong electron-phonon coupling and large effective mass of the carriers suggest that small Holstein polarons could play an important role in determining the low energy excitations. For decades, many efforts have been made to solve the Holstein model with many-body interactions in different dimensions. Recent Monte Carlo simulations of the Holstein model at the adiabatic limit have produced a complex phase diagram with CDW and bipolaron insulating phases, as well as a metallic single-polaron phase, as shown schematically in Fig. 1A (21). 
Here we report clear evidence of a photoinduced bipolaron-to-polaron crossover in (TaSe4)2I by using trARPES to probe the electronic structure of a gently photo-doped excited state. After excitation by a femtosecond laser, two new bands are revealed in the pseudogap of (TaSe4)2I -a straight-line dispersive band and a non-dispersive flat band. The new dispersive band resembles the single-particle bare band dispersion (dashed black line in Fig. 1B). The transiently-revealed flat band observed in the trARPES data, which does not appear in the calculated single-particle band structure (18,22,23), is consistent with a collection of smeared single-polaron sub-bands within the pseudogap, as illustrated schematically in Fig. 1B (solid black line). The emergence of these two bands after partial melting of the bipolaron states provides direct evidence that manybody polaronic interaction -namely bipolaron formation -reduces the spectral weight of the single-particle bare band at the Fermi surface and opens up a polaron pseudogap. The relatively weak laser excitation ensures that we only partially melt the bipolaron insulator into a singlepolaron metal, and avoid driving the material into a normal metal state. To extract more information about the many-body interactions, we also track the ultrafast evolution of the spectral weight in the pseudogap. The dynamics of the in-gap states show that the bipolarons are partially melted into single polarons within ~250 fs, before gradually recovering with a time constant of ~600 fs -which is a typical relaxation time for photoinduced structural changes (13,24). These findings, when combined with recent theoretical predictions (21), provide compelling evidence that polaronic many-body interactions play a key role in opening the pseudogap in (TaSe4)2I. Moreover, we conduct the trARPES measurements for another prototypical quasi-1D material Rb0.3MoO3 that exhibits dramatically faster (~10x) relaxation. 
The pseudogap in this material was variously attributed either to the presence of a polaron gap (9), or the formation of a Luttinger liquid (5). Surprisingly, we find that the decay constant of the excited in-gap states in Rb0.3MoO3 is only ~60 fs. This fast relaxation is likely electronic in nature -such as hot carriers losing energy to Luttinger plasmons. Thus, our results favor the recent Luttinger-liquid explanation of the pseudogap in Rb0.3MoO3 (5). Our data indicate that the polaronic effects previously proposed (9) are not crucial to the pseudogap physics in Rb0.3MoO3. This work thus represents a new approach for uncovering the dominant many-body interactions via weak ultrafast photo-doping, which may help explain a set of puzzling strongly correlated phenomena such as the strange metal phase in high-temperature superconductors. Results (TaSe4)2I is a prototypical quasi-1D CDW material which enters an incommensurate CDW phase at temperatures below 263 K ( ). Theoretical calculations propose that it is a Weyl semimetal above and that it becomes an axion insulator below (18,23). (TaSe4)2I has a conventional tetragonal unit cell (a = b = 9.531Å and c = 12.824 Å). The Ta atoms form chains, surrounded by Se4 rectangular units and separated by iodine ions. Two adjacent Se4 rectangles are rotated by 45°, which makes TaSe4 chains exhibit a screwlike symmetry -thus, (TaSe4)2I is known as a chiral crystal (22). The chains are bonded weakly by iodine atoms, forming needlelike crystals that naturally cleave along the (110) plane. The second material we study is Rb0.3MoO3, also known as blue bronze, which is another prototypical quasi-1D CDW material (~183 K) that has similar properties to (TaSe4)2I (3,9,25,26). We perform trARPES measurements on (TaSe4)2I at both room (300 K) and low temperatures (80 K). For our measurements, the chain direction is aligned along the analyzer slit. 
We excite the materials using a 1.6 eV infrared laser pump pulse (~40 fs, 10 kHz) and probe the dynamic electronic order using a 22 eV extreme ultraviolet (EUV) high-harmonic probe pulse (~10 fs, 10 kHz). The time and energy resolutions are <10 fs and ~100 meV, respectively. Figure 2 shows the raw trARPES spectra at different temperatures and time delays. Band velocities extracted from MDC fits can be affected by analysis artifacts: we believe the large unpumped band velocity (~30 eVÅ) is precisely such an artifact, whereas the smaller 7.5 eVÅ velocity observed after pump excitation is meaningful (11). The flat band can be explained as a collection of single-polaron sub-bands within the pseudogap, which is consistent with the calculated spectral function of the single-electron Holstein model (28). Furthermore, we track the dynamics of the spectral weight within the pseudogap of both (TaSe4)2I and Rb0.3MoO3, as denoted by the yellow dashed rectangles in Figs. 4C and S6. Figure 4E plots the excited-state dynamics of (TaSe4)2I and Rb0.3MoO3 for a pump fluence of 0.6 mJ/cm² at 300 K. For (TaSe4)2I, upon excitation, the electrons in the VB are pumped into the CB and are thermalized around the Fermi level within ~100 fs, with the electron temperature rising to a peak value of ~1600 K (Fig. S3B). The spectral weight within the pseudogap starts to appear immediately after the arrival of the pump pulse and reaches its peak value at ~250 fs. Then the ARPES spectra gradually recover to the ground state within ~1 ps. The dynamics of the in-gap states for the CDW phase at 80 K and the normal phase at 300 K are similar (Fig. S6), which indicates that polaron dynamics dominate CDW dynamics at this 0.6 mJ/cm² pump fluence. The relaxation timescales also yield useful information for distinguishing different many-body interactions that underlie the pseudogap.
For example, the identical experiment in another 1D material, Rb0.3MoO3, yields a much faster relaxation timescale of ~60 fs (Fig. 4E). The order-of-magnitude difference in relaxation times (60 fs vs 600 fs) suggests a different underlying mechanism, which in turn suggests that bipolaron effects are likely not crucial to the pseudogap physics in Rb0.3MoO3. We speculate that Luttinger-liquid effects may be responsible instead, with relaxation occurring through the emission of Luttinger plasmons. When the initial electron-hole pairs excited by 1.6 eV photons thermalize to the vicinity of the Fermi level, they can rearrange into collective bosonic excitations (Luttinger plasmons) that are nearly invisible in ARPES measurements. This decay channel can be much faster than bipolaron reformation, since it is not bottlenecked by the lattice structural relaxation. We note that a recent high-resolution static equilibrium ARPES study (5) has also observed power-law singularities consistent with Luttinger-liquid physics in this material. Other mechanisms, such as heat transfer from hot carriers to strongly coupled high-energy optical phonon modes (29, 32), could also contribute to the fast relaxation of excited states in Rb0.3MoO3. Nevertheless, the ~60 fs relaxation time can still help to confirm the existence of dominant Luttinger-liquid physics, since it rules out pseudogap mechanisms involving structural phase changes, such as bipolaron effects and CDW fluctuations. In summary, we provide direct evidence that bipolaron formation in (TaSe4)2I plays a major role in the formation of the pseudogap. This is supported by recent Monte Carlo simulations of the Holstein model (21). The ultrafast in-gap dynamics depict a picture in which bipolarons are transiently broken into single-polaron states via screening by hot electrons, re-forming once the electron subsystem cools.
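The ~600 fs and ~60 fs decay constants discussed here come from single-exponential fits to the in-gap spectral-weight transients (Fig. 4E). A minimal sketch of such a fit on synthetic data - the amplitude, offset, noise level, and time grid below are assumptions, not the measured transients:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amp, tau, offset):
    """Single-exponential relaxation with a constant offset."""
    return amp * np.exp(-t / tau) + offset

# Synthetic in-gap spectral-weight transient (rise ignored; decay from t = 0).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3000.0, 120)          # pump-probe delay in fs
true_tau = 600.0                            # fs, of order the (TaSe4)2I decay
signal = exp_decay(t, 1.0, true_tau, 0.05) + 0.01 * rng.normal(size=t.size)

# Fit and read off the decay constant and its 1-sigma uncertainty.
popt, pcov = curve_fit(exp_decay, t, signal, p0=(1.0, 300.0, 0.0))
tau_fit, tau_err = popt[1], np.sqrt(pcov[1, 1])
```

The same kind of fit, applied to the two materials' transients, is what separates the ~600 fs bipolaron-reformation scale from the ~60 fs electronic scale.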
By comparing the ultrafast dynamics of (TaSe4)2I and Rb0.3MoO3, we show that our new approach for determining the dominant many-body interactions can be extended to understand the nature of the pseudogap phase in other materials. In the future, this approach can help with understanding a set of puzzling strongly correlated phenomena such as the strange metal phase.

Fig S4. EDCs of (TaSe4)2I at around kF before and after pump excitation at 80 K.
Fig S5. Time-resolved reflectivity results of (TaSe4)2I.
Fig S6. The dynamics of the in-gap spectral weight of (TaSe4)2I within a region defined by the yellow dashed rectangle in Fig. 4C.
Fig S7. trARPES mapping of Rb0.3MoO3.
Fig S8. In-gap spectral weight dynamics of (TaSe4)2I at different temperatures and pump fluences.

Section 1. Error analysis of the observed flat band of (TaSe4)2I in Fig. 4

At 0.5 Å⁻¹, which is away from kF, new spectral weight appears around the Fermi level after pump excitation, as shown in Figs. 4A-D. We claim in the main text that this new weight belongs to the newly emerged flat band, consisting of numerous single-polaron sub-bands. Nevertheless, the broadening and shift of the broad hump at lower energy could also contribute some extra spectral weight around the Fermi level. Here we provide further analysis to rule out this possibility by presenting the differential curve between the EDCs at 250 fs and before the pump. However, it is common for trARPES spectra to acquire slight inhomogeneous broadening after pump excitation. The broadening mechanisms include the Stark effect, increased carrier scattering, pump-probe spatial cross-correlation, etc., which can be seen as extrinsic effects. If we directly subtract the black curve in (A) from the red curve in (B) to get the differential EDC, the extrinsic broadening will slightly contribute to the gain of spectral weight around the Fermi level after pump excitation.
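The concern just raised - that extrinsic broadening alone can leak into the apparent in-gap weight gain - and the corresponding fix of intentionally broadening the pre-pump EDC before subtracting can be sketched on toy curves. All curve shapes are invented here, and the 0.1 eV width is treated as a Gaussian sigma (an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

E = np.linspace(-2.0, 0.5, 501)        # binding-energy axis (eV)
dE = E[1] - E[0]

def gaussian_peak(E, center, width, amp):
    return amp * np.exp(-0.5 * ((E - center) / width) ** 2)

# Toy EDCs: a broad valence-band hump; after the pump it is extrinsically
# broadened and a weak flat-band feature appears at -0.16 eV.
edc_before = gaussian_peak(E, -1.2, 0.25, 1.0)
sigma_px = 0.1 / dE                    # 0.1 eV Gaussian, expressed in pixels
edc_after = gaussian_filter1d(edc_before, sigma_px) \
            + gaussian_peak(E, -0.16, 0.08, 0.15)

# Naive subtraction lets the extrinsic broadening leak into the "gain";
# broadening the pre-pump EDC by the same 0.1 eV first removes that leak.
naive_diff = edc_after - edc_before
fair_diff = edc_after - gaussian_filter1d(edc_before, sigma_px)
gain = fair_diff[np.argmin(np.abs(E + 0.16))]
```

In this toy version the fair difference isolates exactly the in-gap feature, while the naive difference picks up spurious weight on the hump's shoulders.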
Thus, in (A), we intentionally broaden the EDC before the pump (black curve) by convolving it with a Gaussian function of 0.1 eV width, so that the resulting blue curve agrees well with the raw EDC after the pump (red curve) in the range from -1.5 eV to -0.8 eV. Then we take the difference between the red and blue curves, shown as the yellow curve in (B). Note that the yellow curve shows a net gain of spectral weight between -1 eV and 0.5 eV, which is strong evidence that the broadening and shift of the broad hump at higher binding energy are not the major origins of the flat band around the Fermi level, as both of them conserve spectral weight. The net gain in the differential EDC persists with and without the intentional broadening. The extrinsic broadening only slightly increases the intensity around the Fermi level, as shown in (A). The differential maps in Fig. 4 are from direct subtraction of the raw data without intentional broadening.

The in-plane band dispersion (Fig. 2A) shows a characteristic "V"-shaped valence band (VB). The VB maximum is at around 1.1 π/c, with ~0.4 eV binding energy. The conduction band (CB) minimum is below the Fermi level due to natural n-type doping from iodide vacancies. A CDW gap (2Δ_CDW ≈ 0.2 eV) opens between the VB maxima and CB minima below 263 K (27). In addition to the CDW gap, a pseudogap (2Δ_PG ≈ 0.4 eV) removes most of the spectral weight of the CB at the Fermi level. Most importantly, this pseudogap is present in both the low-temperature CDW phase and the room-temperature normal phase, as shown in Figs. 2A and B. Figures 3A-E plot the second-derivative images of the raw data in Figure 2 at different temperatures and time delays. The second derivative along the vertical direction is best for enhancing a horizontally dispersing band, while the horizontal second derivative is best for enhancing a vertically dispersing band.
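The second-derivative enhancement just described can be illustrated on a toy spectrum: the curvature along the energy (vertical) axis picks out a flat, horizontally dispersing band while suppressing a slowly varying background hump. Band positions, widths, and the smoothing scale below are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy ARPES intensity I(E, k): a non-dispersive (flat) band at E = -0.16 eV
# riding on a broad hump centered at E = -0.8 eV.
E = np.linspace(-1.0, 0.3, 261)                     # energy axis, eV
n_k = 121                                           # momentum pixels
column = np.exp(-0.5 * ((E + 0.16) / 0.05) ** 2) \
         + 0.3 * np.exp(-0.5 * ((E + 0.8) / 0.32) ** 2)
image = np.tile(column[:, None], (1, n_k))

# Light smoothing first (to tame noise in real data), then the second
# derivative along the energy axis: -d2I/dE2 peaks sharply at the flat band.
smoothed = gaussian_filter(image, sigma=3)
curvature = -np.gradient(np.gradient(smoothed, axis=0), axis=0)
E_enhanced = E[np.argmax(curvature[:, n_k // 2])]
```

The broad hump's curvature is orders of magnitude smaller than the sharp band's, which is why the second-derivative images in Figs. 3 and 4 bring out weak features.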
A straight-line-like dispersive band appears in the pseudogap approximately 250 fs after laser excitation, as shown in Figs. 3B, C and E. The room-temperature and low-temperature data show very similar behavior for the dynamics of this new band. The dispersion of the new band is tracked by fitting the momentum distribution curves (MDC), as presented in Fig. 3H. Due to matrix-element effects, we can only resolve one branch of the Dirac cone; thus, a single Lorentzian peak suffices to fit the band dispersion. This new band can be recognized as the single-particle bare band, which is not present in the data before laser excitation due to polaronic interactions. The band velocity, which is almost constant for all positive time delays (Fig. S3A), is determined to be 7.5 eVÅ - almost twice the VB velocity between 0.1 Å⁻¹ and 0.2 Å⁻¹. Note that the room-temperature spectra before laser excitation presented in Figs. 3A and H still show a weak Fermi crossing with a very high band velocity (~30 eVÅ). The band velocities derived from MDC fittings can be influenced by artifacts resulting from the way MDC analysis handles remnants of the broad hump in the pseudogap (11). The near doubling of the band velocity after excitation indicates a partial 'undressing' of polarons into bare electrons. This critical change of the band velocity can be directly seen from the MDC curves of the raw data, presented in Figs. 3F and G. In addition to revealing the dispersive bare band, another non-dispersive flat band also emerges near the Fermi level (-0.16 eV) after pump excitation, as denoted by black dashed lines in Figs. 4A-D. Since this weak band sits on the shoulder of an intense broad hump, it is not clearly recognized in the raw and second-derivative images in Figs. 2 and 3. Nevertheless, the difference maps (Figs.
4A at 80 K and 4C at 300 K) between 250 fs and a negative time delay can remove the intense background and enhance photo-induced weak features, which helps us to recognize the photo-enhanced flat band centered at -0.16 eV. Figures 4B and D show the second derivatives of the images in Figs. 4A and C, respectively, in which the flat band is enhanced throughout the entire Brillouin zone. More error analysis regarding artifacts posed by band broadening and shifting can be found in the supplementary material. The net gain of spectral weight at 0.5 Å⁻¹ between -1 eV and 0.5 eV, as shown in Fig. S2B, is strong evidence that the broadening and up-shift of the broad valence band at higher binding energy are not the major origins of the flat band centered at -0.16 eV, as both of them conserve spectral weight. We believe that the rapidly excited hot carriers screen the bipolaronic interactions within 50 fs, while it takes ~250 fs for the two interacting polarons in a bipolaron to fall apart, due to the bottleneck imposed by the speed of the lattice relaxation. The screened bipolaronic interactions gradually recover following heat transfer from the hot electron bath to the cold phonon bath. The photogenerated single polarons re-merge into bipolarons with a time constant of ~600 fs. In the case of Rb0.3MoO3, after excitation by a 1.6 eV photon-energy pump pulse, electron-hole pairs are excited in the region above the Fermi level. These hot carriers then quickly lose their energy via emitting phonons and relax to the vicinity of the Fermi level within 100 fs, where Luttinger-liquid interactions start to dominate. These excited states relax with a 10x faster time constant of ~60 fs, possibly by emitting Luttinger plasmons. We note that similarly fast timescales have been observed in previous trARPES measurements of Rb0.3MoO3 (29).

Discussion

With the new experimental findings presented in this paper, we can now clarify the long-standing challenges of explaining the opening of a pseudogap in quasi-1D (TaSe4)2I.
This methodology can also be extended to solve the puzzle of missing Fermi-surface states in other quasi-1D materials. Here we discuss four possible mechanisms: (i) Luttinger-liquid behavior; (ii) the Efros-Shklovskii effect; (iii) fluctuations of the CDW order parameter; (iv) bipolaronic interactions. With strong electronic correlations in a 1D metal, single-particle excitations can be rearranged into collective bosonic excitations, which form a Luttinger liquid with various peculiar features, including power-law singularities and spin-charge separation. The power-law singularity of the spectral function in the vicinity of the Fermi level induces a pseudogap such that single-particle excitations are greatly suppressed. Nevertheless, the power-law distribution of the spectral function does not modify the band dispersion (5). Thus, the change in band velocity after pump excitation in (TaSe4)2I (Figs. 3F-H) is clearly not a signature of Luttinger-liquid behavior. The strong electron-phonon coupling that enhances the effective mass of the carriers can make them more susceptible to Anderson localization. The static Coulomb repulsion among Anderson-localized carriers produces the so-called Efros-Shklovskii pseudogap (6, 7). However, there is no natural way for a disorder-localized high-temperature phase to give rise to a low-temperature CDW phase. The well-defined CDW phase, with characteristic phason-induced nonlinear resistivity and angular dependence of the axial current in (TaSe4)2I at low temperature, does not support a dominant role of Anderson localization in the electronic properties (17). With reduced scattering phase space, fluctuations can be more dominant in lower dimensionality. Random CDW regions above T_CDW could reduce the spectral weight on the Fermi surface and open up a pseudogap (8, 9).
However, the CDW gap is below the Fermi level in the naturally doped crystal and is smaller than the pseudogap (27), which rules out the possibility that the pseudogap is mainly induced by CDW fluctuations. Previous ARPES and optical conductivity measurements have suggested a possible polaronic origin of the pseudogap in (TaSe4)2I (3). However, direct experimental observations of the influence of polaronic effects are still needed. Although simulating many-electron Holstein polarons is still challenging, the one-electron Holstein model, with a single electron coupled to an Einstein phonon mode (of phonon energy ℏω₀), has been solved and yields a spectral function that exhibits kinks and multiple polaron sub-bands at binding energies nℏω₀ (n = ±1, ±2, …) (28). Although the calculated spectral functions of one-electron Holstein polarons agree well with a series of ARPES measurements (5, 20, 30), in many materials with strong electron-phonon coupling, including (TaSe4)2I, the large pseudogap (~200 meV) cannot be reproduced by a one-electron Holstein model. Instead, many-body interactions among single polarons need to be taken into account in order to generate such a pseudogap. Previous calculations using a many-electron Holstein model in the atomic limit (ℏω₀ ≫ t, where t is the electron hopping integral) can reproduce the pseudogap features well, bearing similarity to the Franck-Condon lineshape (12, 31). However, in (TaSe4)2I, the adiabatic limit (ℏω₀ ≪ t, with ℏω₀ < 35 meV and t ~ 1 eV) is more applicable. Recent Monte Carlo simulations of the Holstein model in the adiabatic limit show a complex phase diagram in the λ-T plane (where λ is the dimensionless electron-phonon coupling strength and T is the temperature), as illustrated schematically in Fig. 1A (21). Above the CDW region, there exists not only a metallic regime but also an insulating regime, which is associated with a gap or pseudogap across the Fermi level.
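The polaron sub-bands discussed above are spaced by ℏω₀ (< 35 meV in (TaSe4)2I), well below the ~100 meV experimental energy resolution, so several of them should merge into a single flat in-gap feature. A toy convolution checks this; the 30 meV spacing, equal weights, and treating the resolution as a Gaussian sigma are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

E = np.linspace(-0.4, 0.1, 1001)       # binding-energy axis (eV)
dE = E[1] - E[0]
hw0 = 0.030                             # assumed phonon quantum, below 35 meV

# Delta-like polaron sub-bands at E = -0.16 eV + n*hw0 (n = -2..2).
spectrum = np.zeros_like(E)
for n in range(-2, 3):
    spectrum[np.argmin(np.abs(E - (-0.16 + n * hw0)))] = 1.0

# Smear with the ~100 meV experimental resolution.
smeared = gaussian_filter1d(spectrum, 0.100 / dE, mode="constant")

# Count interior local maxima: the five sub-bands merge into one broad peak.
interior = smeared[1:-1]
peaks = np.flatnonzero((interior > smeared[:-2]) & (interior > smeared[2:])) + 1
```

A single maximum survives near -0.16 eV, consistent with observing one unresolved flat band rather than individual sub-bands.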
A clear phase crossover between the two regimes is controlled by λ and T. A simple microscopic description of the insulating regime considers two metallic single polarons that can merge into a bipolaron and gain an energy Δ = 2g²/ω₀² = λ/ρ₀, where g is the electron-phonon coupling constant and ρ₀ is the bare density of states at the Fermi level. The lattice can then be seen as a binary alloy with −Δ bipolaron sites and +Δ remaining sites; a gap of 2Δ is thus opened due to the formation of bipolarons. In (TaSe4)2I, a few phonon modes have sufficiently large electron-phonon coupling strength to generate bipolarons with a few hundred meV binding energy (10). However, we do not have direct evidence to determine which phonon mode is responsible for bipolaron formation. Nevertheless, empirically, the bipolaron melting time (~250 fs) should correspond to ~1/2 period of the relevant phonon mode(s), which suggests a phonon frequency of ~2 THz. Our time-resolved optical reflectivity measurements identify two candidate phonon modes with frequencies close to this (2 THz and 2.5 THz) and strong responses to optical excitation (Fig. S5). A previous study (10) also reports phonon modes with similar frequencies and large electron-phonon coupling strength that could produce a ~200 meV bipolaron pseudogap. Nevertheless, more conclusive studies are needed to finally determine the phonon mode that generates the bipolaron pseudogap. This bipolaron picture can explain the pseudogap observed in the ARPES spectra as well as the activated behavior of the conductivity in (TaSe4)2I at T > T_CDW. After laser excitation, the electron temperature increases to 1600 K within ~50 fs. The resulting hot-electron distribution, although of low density, can reduce the electron-phonon coupling strength and screen the bipolaronic interactions. The relatively high plasma frequency (~126 THz at 300 K (3)) ensures that effective screening can build up within 10 fs.
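Two of the timescales quoted in this paragraph follow from simple arithmetic on the stated frequencies; a quick check (frequencies taken from the text):

```python
# Half-period of a ~2 THz phonon: the empirical bipolaron melting time
# (~250 fs) should match ~1/2 period of the strongly coupled mode.
f_phonon_hz = 2.0e12
half_period_fs = 0.5 / f_phonon_hz * 1e15      # -> 250 fs

# One period of the ~126 THz plasma frequency quoted at 300 K: screening
# builds up within roughly this time, i.e. under 10 fs.
f_plasma_hz = 126.0e12
plasma_period_fs = 1.0 / f_plasma_hz * 1e15    # -> ~7.9 fs
```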
Within ~250 fs, the bipolarons are broken into free single polarons, which results in a large number of single-polaron states in the vicinity of the Fermi level. We note that the ~250 fs rise time of the excited single-polaron states remains unchanged when tuning the pump fluence from 0.1 to 0.6 mJ/cm² (Fig. S8), which indicates that this characteristic rise time results from a structural bottleneck imposed by the half-period of the strongly coupled phonon mode. The single-polaron states derive from the bare band hybridizing with the strongly coupled phonon mode, which forms a series of discrete polaron sub-bands, as shown schematically in Fig. 1B (solid black line). Each polaron sub-band has two non-dispersive branches extending over the whole Brillouin zone at the two sides of the original bare band. The spacing between two adjacent polaron sub-bands is equal to ℏω₀, which should be less than 35 meV (the maximum phonon energy in (TaSe4)2I). Thus, we cannot resolve an individual polaron sub-band with ~100 meV energy resolution. Instead, we observe one flat band throughout the entire Brillouin zone within the gap, which corresponds to a collection of several smeared non-dispersive branches of the polaron sub-bands. The excess energy stored in the electron bath and the strongly coupled phonon mode is then transferred and thermalized to the rest of the phonon bath in ~1 ps, when the single polarons relax into bipolarons.

Fig. 1. (A) Schematic of the λ-T phase diagram based on Monte Carlo simulations of the Holstein model in the adiabatic limit (21). (B) Schematic of the predicted static photoemission spectra in the presence of strong electron-phonon coupling and bipolaron formation. The in-gap states are the single-polaron sub-bands formed by the bare electronic band coupling to a non-dispersive phonon mode in the single-electron Holstein model.
Each single-polaron sub-band has two non-dispersive branches extending over the whole Brillouin zone at the sides of the original bare band. The adjacent polaron sub-bands are spaced by the energy of the strongly coupled phonon mode. The flat band we transiently observe within the pseudogap after laser excitation represents a collection of side branches from several polaron sub-bands.

Fig. 2. (A) to (F) show the ARPES mappings at different temperatures and time delays. (C) and (D) are zoom-ins of the areas enclosed by the pink rectangles in (A) and (B), respectively.

Fig. 3. (A) to (E) show the vertical and horizontal second-derivative images of the corresponding raw ARPES data at different temperatures and time delays, for 0.6 mJ/cm² pump fluence. The yellow arrows denote the newly emerged dispersive bare band (dashed black line in Fig. 1B), while the pink arrows denote the renormalized valence-band dispersion at large momenta (lower red band in Fig. 1B). (F) and (G) present the MDC curves in the vicinity of the Fermi level (±120 meV) extracted from the raw ARPES data at 300 K, before and after pump excitation, respectively. (H) shows the corresponding centroid of the band dispersion at different time delays and temperatures. Here the MDCs were fit for the vertically trending part, while the EDC curves were fit for the horizontally trending part.

Fig. 4. Observation of coherent single-polaron flat bands in (TaSe4)2I and dramatically different relaxation times for the excited in-gap states in (TaSe4)2I and Rb0.3MoO3. (A) and (C) show the difference maps between the ARPES spectra at 250 fs and at negative time delay at 80 K and 300 K, respectively. (B) and (D) are the corresponding vertical second derivatives of the difference maps in (A) and (C).
The red dashed lines track the dispersion of the photo-excited dispersive bare band, while the black dashed lines track the non-dispersive flat bands, which can be assigned to a collection of single-polaron sub-bands (solid black bands in Fig. 1). (E) shows the dynamics of the in-gap spectral weight within a region defined by the yellow dashed rectangles in (C) for (TaSe4)2I and in Fig. S7 for Rb0.3MoO3. The decay constants of 649 ± 31 fs and 65 ± 8 fs for (TaSe4)2I and Rb0.3MoO3, respectively, are obtained by single-exponential fittings.

Supplementary Materials

* Corresponding author: [email protected]

This PDF file includes:
Section 1. Error analysis of the observed flat band of (TaSe4)2I in Fig. 4
Section 2. Experimental and data analysis methods for time-resolved reflectivity results of (TaSe4)2I.
Fig S1. Energy distribution curves (EDC) of (TaSe4)2I from 0.3 Å⁻¹ to 0.5 Å⁻¹ before pump at 80 K.
Fig S2. EDCs of the flat band of (TaSe4)2I in Fig. 4 and corresponding error analysis.
Fig S3. The dispersions of the bare bands at different time delays and the extraction of the electron temperature at 200 fs via fitting a Fermi-Dirac distribution.
Fig S4.

Fig. S1. Energy distribution curves (EDC) of (TaSe4)2I from 0.3 Å⁻¹ to 0.5 Å⁻¹ before pump excitation at 80 K.

Fig. S2. (A) EDCs of (TaSe4)2I before pump excitation at 80 K with (blue curve) and without Gaussian broadening (black curve). (B) On the right axis: EDCs at 80 K before pump with Gaussian broadening (blue curve) and at 250 fs without Gaussian broadening (red curve). On the left axis: the difference between the red and blue curves (yellow curve).

Fig. S3. (A) The dispersions of the bare bands of (TaSe4)2I at different time delays at 300 K, via fitting the peak positions of momentum distribution curves (MDC). Note that the band velocities at different time delays are basically the same. (B) The black dots are energy-dependent peak intensities of MDCs at 200 fs, while the red curve is the Fermi-Dirac function at a temperature of 1600 K.

Fig. S4.
EDCs of (TaSe4)2I at around kF before and after pump at 80 K. The second peak around the Fermi level in the pink curve is the newly emerged bare band.

Fig. S5. (A) shows a typical fitting of the exponential-decay baseline in optical pump-optical probe data collected on (TaSe4)2I; the example shown is at 10 K. (B) shows the residue between the raw data and the fitting model, which represents the coherent phonon oscillations. (C) is the FFT color map of the extracted coherent oscillations as a function of temperature.

Section 2. Experimental and data analysis methods for time-resolved reflectivity results of (TaSe4)2I.

The transient optical spectroscopy is collected with an 800 nm ultrafast laser of 100 fs pulse width from a MaiTai system with an 80 MHz repetition rate. The pump beam is generated from the 400 nm SHG signal of a BBO crystal, and the probe beam is 800 nm. A phenomenological model is used for fitting ΔR/R vs. delay time.

Fig. S6. The dynamics of the in-gap spectral weight of (TaSe4)2I within a region defined by the yellow dashed rectangle in Fig. 4C, at 80 K (black symbols) and 300 K (pink symbols). The blue dashed line marks the peak of the in-gap spectral-weight dynamics at ~250 fs. The solid lines are the exponential fitting curves from which the decay constant τ is extracted.

Fig. S7. (A) The second-derivative image of the ARPES map of Rb0.3MoO3 along the Γ− direction at 300 K. (B) Difference map between 100 fs and before pump at 300 K with 0.6 mJ/cm² pump fluence. The yellow dashed rectangle denotes the region of interest mentioned in the main text and Fig. 4E.

Fig. S8. In-gap spectral weight dynamics of (TaSe4)2I at different temperatures and pump fluences.

References
1. D. S. Dessau, T. Saitoh, C.-H. Park, Z.-X. Shen, P. Villella, N. Hamada, Y. Moritomo, Y. Tokura, k-dependent electronic structure, a large "ghost" Fermi surface, and a pseudogap in a layered magnetoresistive oxide. Phys. Rev. Lett. 81, 192 (1998).
2. K. M. Shen, F. Ronning, D. H. Lu, W. S. Lee, N. J. C. Ingle, W. Meevasana, F. Baumberger, A. Damascelli, N. P. Armitage, L. L. Miller, Missing quasiparticles and the chemical potential puzzle in the doping evolution of the cuprate superconductors. Phys. Rev. Lett. 93, 267002 (2004).
3. L. Perfetti, H. Berger, A. Reginelli, L. Degiorgi, H. Höchst, J. Voit, G. Margaritondo, M. Grioni, Spectroscopic indications of polaronic carriers in the quasi-one-dimensional conductor (TaSe4)2I. Phys. Rev. Lett. 87, 216404 (2001).
4. F. Wang, J. V. Alvarez, S.-K. Mo, J. W. Allen, G.-H. Gweon, J. He, R. Jin, D. Mandrus, H. Höchst, New Luttinger-liquid physics from photoemission on Li0.9Mo6O17. Phys. Rev. Lett. 96, 196403 (2006).
5. L. Kang, X. Du, J. S. Zhou, X. Gu, Y. J. Chen, R. Z. Xu, Q. Q. Zhang, S. C. Sun, Z. X. Yin, Y. W. Li, Band-selective Holstein polaron in Luttinger liquid material A0.3MoO3 (A = K, Rb). Nat. Commun. 12, 1-7 (2021).
6. S. Raj, D. Hashimoto, H. Matsui, S. Souma, T. Sato, T. Takahashi, D. D. Sarma, P. Mahadevan, S. Oishi, Angle-resolved photoemission spectroscopy of the insulating NaxWO3: Anderson localization, polaron formation, and remnant Fermi surface. Phys. Rev. Lett. 96, 147603 (2006).
7. T. T. Han, L. Chen, C. Cai, Z. G. Wang, Y. D. Wang, Z. M. Xin, Y. Zhang, Metal-insulator transition and emergent gapped phase in the surface-doped 2D semiconductor 2H-MoTe2. Phys. Rev. Lett. 126, 106602 (2021).
8. P. A. Lee, T. M. Rice, P. W. Anderson, Fluctuation effects at a Peierls transition. Phys. Rev. Lett. 31, 462 (1973).
9. L. Perfetti, S. Mitrovic, G. Margaritondo, M. Grioni, L. Forró, L. Degiorgi, H. Höchst, Mobile small polarons and the Peierls transition in the quasi-one-dimensional conductor K0.3MoO3. Phys. Rev. B 66, 75107 (2002).
10. L. Degiorgi, B. Alavi, G. Grüner, R. H. McKenzie, K. Kim, F. Levy, Fluctuation effects in quasi-one-dimensional conductors: optical probing of thermal lattice fluctuations. Phys. Rev. B 52, 5603 (1995).
11. N. Mannella, W. L. Yang, X. J. Zhou, H. Zheng, J. F. Mitchell, J. Zaanen, T. P. Devereaux, N. Nagaosa, Z. Hussain, Z.-X. Shen, Nodal quasiparticle in pseudogapped colossal magnetoresistive manganites. Nature 438, 474-478 (2005).
12. M. Hohenadler, M. Aichhorn, W. von der Linden, Spectral function of electron-phonon models by cluster perturbation theory. Phys. Rev. B 68, 184304 (2003).
13. S. Hellmann, T. Rohwer, M. Kalläne, K. Hanff, C. Sohrt, A. Stange, A. Carr, M. M. Murnane, H. C. Kapteyn, L. Kipp, Time-domain classification of charge-density-wave insulators. Nat. Commun. 3, 1-8 (2012).
14. J. Voit, L. Perfetti, F. Zwick, H. Berger, G. Margaritondo, G. Grüner, H. Höchst, M. Grioni, Electronic structure of solids with competing periodic potentials. Science 290, 501-503 (2000).
15. D. Mou, A. Sapkota, H.-H. Kung, V. Krapivin, Y. Wu, A. Kreyssig, X. Zhou, A. I. Goldman, G. Blumberg, R. Flint, Discovery of an unconventional charge density wave at the surface of K0.9Mo6O17. Phys. Rev. Lett. 116, 196401 (2016).
16. C. W. Nicholson, C. Berthod, M. Puppin, H. Berger, M. Wolf, M. Hoesch, C. Monney, Dimensional crossover in a charge density wave material probed by angle-resolved photoemission spectroscopy. Phys. Rev. Lett. 118, 206401 (2017).
17. J. Gooth, B. Bradlyn, S. Honnali, C. Schindler, N. Kumar, J. Noky, Y. Qi, C. Shekhar, Y. Sun, Z. Wang, Axionic charge-density wave in the Weyl semimetal (TaSe4)2I. Nature 575, 315-319 (2019).
18. W. Shi, B. J. Wieder, H. L. Meyerheim, Y. Sun, Y. Zhang, Y. Li, L. Shen, Y. Qi, L. Yang, J. Jena, A charge-density-wave topological semimetal. Nat. Phys. 17, 381-387 (2021).
19. C. Franchini, M. Reticcioli, M. Setvin, U. Diebold, Polarons in materials. Nat. Rev. Mater., 1-27 (2021).
20. C. Verdi, F. Caruso, F. Giustino, Origin of the crossover from polarons to Fermi liquids in transition metal oxides. Nat. Commun. 8, 1-7 (2017).
21. C. Murthy, A. Pandey, I. Esterlis, S. A. Kivelson, A stability bound on the T-linear resistivity of conventional metals. arXiv preprint arXiv:2112.06966 (2021).
22. H. Yi, Z. Huang, W. Shi, L. Min, R. Wu, C. M. Polley, R. Zhang, Y.-F. Zhao, L.-J. Zhou, J. Adell, Surface charge induced Dirac band splitting in a charge density wave material (TaSe4)2I. Phys. Rev. Res. 3, 13271 (2021).
23. X.-P. Li, K. Deng, B. Fu, Y. Li, D.-S. Ma, J. Han, J. Zhou, S. Zhou, Y. Yao, Type-III Weyl semimetals: (TaSe4)2I. Phys. Rev. B 103, L081402 (2021).
24. X. Shi, W. You, Y. Zhang, Z. Tao, P. M. Oppeneer, X. Wu, R. Thomale, K. Rossnagel, M. Bauer, H. Kapteyn, Ultrafast electron calorimetry uncovers a new long-lived metastable state in 1T-TaSe2 mediated by mode-selective electron-phonon coupling. Sci. Adv. 5, eaav4449 (2019).
25. A. Schwartz, M. Dressel, B. Alavi, A. Blank, S. Dubois, G. Grüner, B. P. Gorshunov, A. A. Volkov, G. V. Kozlov, S. Thieme, Fluctuation effects on the electrodynamics of quasi-one-dimensional conductors above the charge-density-wave transition. Phys. Rev. B 52, 5643 (1995).
26. R. S. Kwok, S. E. Brown, Thermal conductivity of the charge-density-wave systems K0.3MoO3 and (TaSe4)2I near the Peierls transition. Phys. Rev. Lett. 63, 895 (1989).
27. C. Tournier-Colletta, L. Moreschini, G. Autes, S. Moser, A. Crepaldi, H. Berger, A. L. Walter, K. S. Kim, A. Bostwick, P. Monceau, Electronic instability in a zero-gap semiconductor: the charge-density wave in (TaSe4)2I. Phys. Rev. Lett. 110, 236401 (2013).
28. G. L. Goodvin, M. Berciu, G. A. Sawatzky, Green's function of the Holstein polaron. Phys. Rev. B 74, 245104 (2006).
29. L. X. Yang, G. Rohde, K. Hanff, A. Stange, R. Xiong, J. Shi, M. Bauer, K. Rossnagel, Bypassing the structural bottleneck in the ultrafast melting of electronic order. Phys. Rev. Lett. 125, 266402 (2020).
30. M. Kang, S. W. Jung, W. J. Shin, Y. Sohn, S. H. Ryu, T. K. Kim, M. Hoesch, K. S. Kim, Holstein polaron in a valley-degenerate two-dimensional semiconductor. Nat. Mater. 17, 676-680 (2018).
31. A. C. M. Green, Many-body CPA for the Holstein double-exchange model. Phys. Rev. B 63, 205110 (2001).
32. T. Konstantinova, J. D. Rameau, A. H. Reid, O. Abdurazakov, L. Wu, R. Li, X. Shen, G. Gu, Y. Huang, L. Rettig, Nonequilibrium electron and lattice dynamics of strongly correlated Bi2Sr2CaCu2O8+δ single crystals. Sci. Adv. 4, 7427.
Rettig, Nonequilibrium electron and lattice dynamics of strongly correlated Bi2Sr2CaCu2O8+ δ single crystals. Sci. Adv. 4, eaap7427 (2018).
[]
[ "ROBUST CAUSAL INFERENCE FOR INCREMENTAL RETURN ON AD SPEND WITH RANDOMIZED PAIRED GEO EXPERIMENTS", "ROBUST CAUSAL INFERENCE FOR INCREMENTAL RETURN ON AD SPEND WITH RANDOMIZED PAIRED GEO EXPERIMENTS" ]
[ "Aiyou Chen [email protected] \nGoogle LLC\n\n", "Timothy C Au \nGoogle LLC\n\n" ]
[ "Google LLC\n", "Google LLC\n" ]
[]
AbstractEvaluating the incremental return on ad spend (iROAS) of a prospective online marketing strategy (i.e., the ratio of the strategy's causal effect on some response metric of interest relative to its causal effect on the ad spend) has become increasingly more important. Although randomized "geo experiments" are frequently employed for this evaluation, obtaining reliable estimates of iROAS can be challenging as oftentimes only a small number of highly heterogeneous units are used. Moreover, advertisers frequently impose budget constraints on their ad spends, which further complicates causal inference by introducing interference between the experimental units. In this paper, we formulate a novel statistical framework for inferring the iROAS of online advertising from randomized paired geo experiment which further motivates and provides new insights into Rosenbaum's arguments on instrumental variables, and we propose and develop a robust, distribution-free and interpretable estimator "Trimmed Match", as well as a data-driven choice of the tuning parameter which may be of independent interest. We investigate the sensitivity of Trimmed Match to some violations of its assumptions and show that it can be more efficient than some alternative estimators based on simulated data. We then demonstrate its practical utility with real case studies.
10.1214/21-aoas1493
[ "https://arxiv.org/pdf/1908.02922v3.pdf" ]
235,359,260
1908.02922
ae64c99faaa7bfb9923972995978e04a189b2e3a
ROBUST CAUSAL INFERENCE FOR INCREMENTAL RETURN ON AD SPEND WITH RANDOMIZED PAIRED GEO EXPERIMENTS
Aiyou Chen [email protected], Google LLC
Timothy C Au, Google LLC
Submitted to the Annals of Applied Statistics

Abstract. Evaluating the incremental return on ad spend (iROAS) of a prospective online marketing strategy (i.e., the ratio of the strategy's causal effect on some response metric of interest relative to its causal effect on the ad spend) has become increasingly more important. Although randomized "geo experiments" are frequently employed for this evaluation, obtaining reliable estimates of iROAS can be challenging, as oftentimes only a small number of highly heterogeneous units are used. Moreover, advertisers frequently impose budget constraints on their ad spends, which further complicates causal inference by introducing interference between the experimental units. In this paper, we formulate a novel statistical framework for inferring the iROAS of online advertising from a randomized paired geo experiment, which further motivates and provides new insights into Rosenbaum's arguments on instrumental variables, and we propose and develop a robust, distribution-free and interpretable estimator "Trimmed Match", as well as a data-driven choice of the tuning parameter which may be of independent interest. We investigate the sensitivity of Trimmed Match to some violations of its assumptions and show that it can be more efficient than some alternative estimators based on simulated data. We then demonstrate its practical utility with real case studies.

1. Introduction. Similar to traditional offline media such as television, radio and print, the primary goal of online advertising is to help promote the selling of goods and services. However, despite these shared goals, online advertising has been the leading source of advertising revenue in the United States since 2016 (Interactive Advertising Bureau, 2018). Goldfarb and Tucker (2011) attribute this success of online advertising to its superiority over other media in terms of its measurability and targetability. A prospective online marketing strategy (e.g., expanding the list of search keywords on which to advertise) is frequently evaluated in terms of its incremental return on ad spend (iROAS), that is, the ratio of the strategy's causal effect on some response metric of interest relative to its causal effect on the ad spend.
Here the response metric of interest may be, for example, revenue from online sales, offline sales, or overall sales which may be affected by the ad. Indeed, this evaluation has become progressively more important as advertisers increasingly seek to optimize the impact of their marketing decisions, an evaluation that is, in theory, facilitated by large-scale randomized experiments (i.e., "A/B tests") which randomize users to different ad serving conditions (Goldfarb and Tucker, 2011; Johnson, Lewis and Nubbemeyer, 2017). In practice, however, privacy concerns which restrict the collection of user data, and technical issues such as cookie churn and multiple device usage, have made it hard to maintain the integrity of a randomized user experiment in the online advertising context (Gordon et al., 2019). Consequently, observational studies remain an area of active research for estimating the causal impact of online marketing strategies (e.g., Varian (2016), Sapp et al. (2017), Chen et al. (2018), and the references therein), although the empirical studies of Lewis and Rao (2015) and Gordon et al. (2019) continue to suggest caution when using observational methods despite these recent advances. Indeed, randomized experiments are still regarded as the "gold standard" for causal inference (Imbens and Rubin, 2015) and, to mitigate some of the challenges of user-level experimentation, advertisers frequently instead employ randomized "geo experiment" designs (Vaver and Koehler, 2011) which partition a geographic region of interest into a set of nonoverlapping "geos" (e.g., Nielsen Media Research's 210 Designated Market Areas which subdivide the United States) that are regarded as the units of experimentation rather than the individual users themselves.
Keywords and phrases: effect ratio, interference, heterogeneity, studentized trimmed mean.
More formally, let $G$ be the set of geos in a target population where, for a geo $g \in G$, we let $(S_g, R_g) \in \mathbb{R}^2$ denote its observed bivariate outcome with ad spend $S_g$ and response $R_g$. Following the Neyman-Rubin causal framework, we denote geo $g$'s potential outcomes under the control and treatment ad serving conditions as $(S_g^{(C)}, R_g^{(C)})$ and $(S_g^{(T)}, R_g^{(T)})$, respectively, where we can only observe one of these two bivariate potential outcomes for each geo $g$. Therefore, relative to the control condition, there are two unit-level causal effects caused by the treatment condition for each geo $g$: the incremental ad spend and the incremental response, which are defined by $S_g^{(T)} - S_g^{(C)}$ and $R_g^{(T)} - R_g^{(C)}$, respectively. However, advertisers frequently find the iROAS, i.e. the ratio of incremental response to incremental ad spend, to be a more informative and actionable measure of advertising performance:
$$\theta_g = \frac{R_g^{(T)} - R_g^{(C)}}{S_g^{(T)} - S_g^{(C)}}, \quad (1.1)$$
for $g \in G$. Following Kerman, Wang and Vaver (2017) and Kalyanam et al. (2018), and letting $|\cdot|$ denote the set cardinality function, the overall iROAS with respect to $G$ can be defined as the ratio of the average incremental response to the average incremental ad spend:
$$\theta^* = \frac{\frac{1}{|G|} \sum_{g \in G} \bigl( R_g^{(T)} - R_g^{(C)} \bigr)}{\frac{1}{|G|} \sum_{g \in G} \bigl( S_g^{(T)} - S_g^{(C)} \bigr)}, \quad (1.2)$$
which is the parameter of primary interest in this paper. In a randomized experiment, one can use the group difference to obtain unbiased estimates of the average incremental response and average incremental ad spend. The ratio of these group differences then gives an empirical estimate of $\theta^*$:
$$\hat{\theta}^{(emp)} = \frac{\frac{1}{|T|} \sum_{g \in T} R_g - \frac{1}{|C|} \sum_{g \in C} R_g}{\frac{1}{|T|} \sum_{g \in T} S_g - \frac{1}{|C|} \sum_{g \in C} S_g}, \quad (1.3)$$
where $T$ and $C$ denote the sets of geos in treatment and in control, respectively.
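The empirical estimator in (1.3) is simple to compute. A minimal sketch follows, with made-up spend and response numbers; the function name and data are illustrative and not from the paper's own code:

```python
import numpy as np

def iroas_empirical(spend_t, resp_t, spend_c, resp_c):
    """Empirical iROAS of equation (1.3): difference in mean response
    between treatment and control, divided by the difference in mean spend."""
    dr = np.mean(resp_t) - np.mean(resp_c)
    ds = np.mean(spend_t) - np.mean(spend_c)
    return dr / ds

# Toy data: two treatment geos and two control geos.
# Average incremental response is 20, average incremental spend is 7.
print(iroas_empirical([12.0, 8.0], [36.0, 22.0], [2.0, 4.0], [6.0, 12.0]))
```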
Similar empirical estimates have been commonly used for effect ratios in other applications, e.g., the incremental cost-effectiveness ratio which summarizes the cost-effectiveness of a health care intervention (Chaudhary and Stearns, 1996; Bang and Zhao, 2014). However, geo experiments often introduce some additional complexity which makes the causal estimation of the iROAS more difficult. The complexity can be attributed mostly to two sources of interference: 1) spillover effects (e.g., from consumers traveling across geo boundaries), and 2) budget constraints on ad spend. Existence of interference, if not handled properly, may invalidate traditional causal inference which relies on the "stable unit treatment value assumption" (SUTVA), that is, the presumption that the treatment applied to one unit does not affect the outcome of another unit (Rubin, 1980). Interference due to the first source can be controlled to a large extent by using bigger geographically clustered regions as the experimental units (Vaver and Koehler, 2011; Rolnick et al., 2019) and, therefore, such interference is assumed to be ignorable in this paper. As a consequence, however, geo experiments frequently involve only a small number of geos (Vaver and Koehler, 2011) where the distributions of $\{S_g : g \in G\}$ and $\{R_g : g \in G\}$ can be very heavy-tailed and, as a result, the empirical estimator defined in equation (1.3) can be very unreliable. Interference due to the second source is more subtle and less controllable.
Although advertisers have some control over the online marketing strategies that they employ (e.g., which search keywords to advertise on, the maximal price they are willing to pay to show their ads, etc.), they are competing in a dynamic ecosystem where the actual delivery of their online ads is determined by advertising platforms which run auctions and use machine learning models to optimize the ad targeting in real-time to maximize key performance indicators such as clicks, site visits, and purchases (Varian, 2009;Johnson, Lewis and Nubbemeyer, 2017). Thus, for a given budget constraint for geos in the treatment group (or the control group), these advertising platforms may choose to allocate ad spend to one geo at the expense of others which, in turn, introduces interference in the responses observed in each geo. We discuss later that this interference can be handled naturally for various kinds of geo experiments, unlike interference which has recently been studied elsewhere (e.g., medical science, social network), see Rosenbaum (2007), Luo et al. (2012), Athey, Eckles and Imbens (2018) and references therein. The key contributions of this paper are: 1) the formulation of a novel statistical framework for inferring the iROAS θ * from randomized paired geo experiments which embed the complex nature of small sample sizes, data heavy-tailedness, and interference due to budgetary constraints, 2) the proposal and development of a robust and distribution-free estimator "Trimmed Match" with a simple interpretation, and 3) extensive simulations and real case studies demonstrating that Trimmed Match can be more efficient than some alternative estimators. The rest of this paper is organized as follows. We first provide the background on geo experiments in the online advertising context in Section 2. Afterwards, we formulate a statistical framework for inferring θ * in a randomized paired geo experiment in Section 3. 
Under this framework and in the spirit of Rosenbaum (1996Rosenbaum ( , 2002, we review a few distribution-free estimators based on common test statistics in Section 4, and we propose the Trimmed Match estimator in Section 5. Section 6 introduces a data-driven choice of the trim rate. Simulations demonstrating the robustness and efficiency of Trimmed Match are presented in Section 7, and some real case studies illustrating the performance of Trimmed Match in practice are shown in Section 8. Finally, Section 9 concludes with some suggestions for future research. Fast computation of Trimmed Match is presented in Appendix A. Trimmed Match has been applied for advertiser studies at Google. The Python library for the implementation is available at GitHub (Chen, Longfils and Best, 2020). Background. Geo experiments are now a standard tool for the causal measurement of online advertising at Google-see, for example, Blake, Nosko and Tadelis (2015), Ye et al. (2016), and Kalyanam et al. (2018). 2 There has been some related work in terms of estimating the effectiveness of online advertising with geo experiments, but to the best of our knowledge, all work to date has been model-based. Notably, after introducing the concept of a randomized geo experiment design for online advertising, Vaver and Koehler (2011) proceed to analyze them using a two stage weighted linear regression approach, called Geo-Based Regression (GBR). The first stage fits a linear predictive model for the geo-level potential control ad spend using data from the control group, where pre-experimental ad spend is used as regressors and model weights to try to account for heteroscedasticity caused by the differences in geo size. 
The second stage of GBR fits a regression for the response variable, where pre-experimental response and incremental ad spend (which is 0 by definition for geos in the control group, but for geos in the treatment group is inferred by the difference between the observed ad spend and the counterfactual ad spend predicted from the first stage) are used as regressors and model weights to try to account for heteroscedasticity, and the iROAS parameter is the coefficient of incremental ad spend. More recently, to address situations where there are only a few geos available for experimentation, Kerman, Wang and Vaver (2017) propose a Time-Based Regression (TBR) approach which uses a constrained version of the Bayesian structural time series model of Brodersen et al. (2015) to estimate the overall incremental response for the treatment group, where the control group's time series is used to contemporaneously model the treatment group's "business as usual" behavior prior to the experiment, and then subsequently used in conjunction with the trained model to predict what the treatment group's "business as usual" counterfactual would have been had the experiment not occurred. However, it can be shown that these methods rely on some strong modeling assumptions that are often hard to justify in practice. For GBR, the result can be quite sensitive to the choice of weights and, furthermore, even if the geo-level incremental ad spends are known, unlike more recent regression adjustment models (Lin, 2013), its second-stage regression may still suffer from the endogeneity problem (incremental ad spend may correlate with the residual) despite randomness in the treatment assignment. As a natural extension of GBR, one might attempt to fit the heterogeneous geo-level causal effects on the response and ad spend separately using parametric or nonparametric models (Bloniarz et al., 2016; Wager and Athey, 2018; Kunzel et al., 2019).
Besides the requirement of a larger sample size, however, this may not be straightforward since budget constraints on ad spend may break the assumption of independent measurements behind the models. Meanwhile, TBR assumes a stable linear relationship regarding the contemporaneous "business as usual" time series between the treatment group and the control group from the pre-experimental period into the experimental period. But this is an untestable assumption and may not hold in practice. Given the temporal dynamics (e.g., the COVID-19 pandemic as an extreme case), it is important to have a method which is robust and does not rely on any fragile and untestable modeling assumptions. Such a method is especially desirable when it needs to be built into a product to serve many geo experiments seamlessly.

3. A Statistical Framework for Inferring the iROAS. We first consider the scenario where there is no budgetary constraint, so that there is no interference between geos. Recall from (1.1) that a geo $g$'s unit-level iROAS $\theta_g$ is defined in terms of the ratio of its incremental response to its incremental ad spend. Rearranging the terms in this definition then leads to the following lemma, which serves as the basis for our statistical framework.

LEMMA 1. $R_g^{(T)} - \theta_g S_g^{(T)} = R_g^{(C)} - \theta_g S_g^{(C)}$ for every geo $g \in G$.

Lemma 1 implies that the quantity $Z_g \equiv R_g - \theta_g S_g$, which is not observable due to the unknown $\theta_g$, remains the same regardless of whether geo $g$ is assigned to treatment or control. Loosely speaking, the quantity $Z_g$ measures geo $g$'s "uninfluenced response", that is, the part of $g$'s baseline response due to, for example, seasonality in the market demand, which is not influenced by its ad spend.
As previously discussed in Section 1, budgetary constraints may introduce complex interference and thus a violation of SUTVA, since the ad spend allocated to one geo may come at the expense of others; therefore the experimental outcome $(S_g, R_g)$ for each geo $g$ may be affected by the treatment assignment of other geos. In particular, for a design on $n$ matched pairs, there are $2^n$ possible assignments, each associated with its own potential outcome vector of length $2n$, where the realized potential outcome vector depends on the materialized assignment; see for instance Hudgens and Halloran (2008) for a detailed formulation. But in light of Lemma 1, in this paper we assume for notational simplicity a relaxed version of SUTVA when there is interference due to budgetary constraints, which is formally described as Assumption 0 below.

Assumption 0. The uninfluenced response $Z_g$ for any geo $g$ introduced by Lemma 1 is invariant to both its own treatment assignment and the treatment assignment of other geos.

It is not hard to verify that under Assumption 0, the parameters $\theta_g$ in (1.1) are well defined. Assumption 0 trivially holds when there is no interference (e.g., if the advertiser had no budget constraints or if the total actual ad spend was below the pre-specified budget constraint). More generally, by the decomposition $R_g = (R_g - \theta_g S_g) + \theta_g S_g = Z_g + \theta_g S_g$, it is important to note that under Assumption 0, $Z_g$ is not affected by geo assignment and thus any interference introduced from budgetary constraints only affects the response through the magnitude of the materialized ad spend $S_g$ and the iROAS $\theta_g$. In other words, this specifies a simple linear form which quantifies how the interference due to budgetary constraints affects the measurements.
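As a tiny numerical check of Lemma 1 and the decomposition above, the sketch below builds potential outcomes from hypothetical values of $Z_g$, $\theta_g$ and the potential ad spends (all numbers are invented for illustration):

```python
# Build potential outcomes from a fixed uninfluenced response Z_g and
# verify Lemma 1: R_g - theta_g * S_g equals Z_g under both conditions.
theta = 2.5  # hypothetical unit-level iROAS
geos = [
    # (Z_g, potential spend under control, potential spend under treatment)
    (100.0, 3.0, 8.0),
    (250.0, 5.0, 12.0),
]
for z, s_c, s_t in geos:
    r_c = z + theta * s_c  # potential response under control
    r_t = z + theta * s_t  # potential response under treatment
    assert r_t - theta * s_t == r_c - theta * s_c == z
```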
Consequently, Assumption 0 may still hold in less trivial situations with budget constraints, such as:
• When the potential ad spend under the control condition is known to be 0 for every geo (e.g., showing online ads under the treatment condition versus not showing ads under the control condition), or when the potential ad spend under the treatment condition is known to be 0 for every geo (e.g., "go dark" experiments where no ads are shown under the treatment condition (Blair and Kuse, 2004)).
• When advertisers use their budget constraint to pre-specify each geo's potential ad spend under the control condition (but not necessarily when under the treatment condition), or when advertisers pre-specify each geo's potential ad spend under the treatment condition (but not necessarily when under the control condition).
These scenarios encompass many of the geo experiment designs in practice and, consequently, we assume that Assumption 0 holds in the remainder of this paper.

PROPOSITION 1. In a completely randomized geo experiment, under Assumption 0, the distribution of $R_g - \theta_g S_g$ is the same between the treatment group and the control group.

TABLE 1. Description of the notation used for the ith pair of geos.
  $S_{i1}, R_{i1}$: observed ad spend and response for the 1st geo
  $S_{i2}, R_{i2}$: observed ad spend and response for the 2nd geo
  $A_i$: indicator of which geo receives treatment or control
  $X_i = (S_{i1} - S_{i2}) \cdot A_i$: spend difference between treatment and control
  $Y_i = (R_{i1} - R_{i2}) \cdot A_i$: response difference between treatment and control
  $\varepsilon_i(\theta) = Y_i - \theta X_i$: difference in the "uninfluenced responses" with respect to $\theta$

Proposition 1, whose proof directly follows from Assumption 0, provides a general framework for inferring the unit-level iROAS $\{\theta_g : g \in G\}$ (e.g., by parameterizing $\theta_g$ with geo-level features) and the overall target population iROAS $\theta^*$ by simplifying the bivariate causal inference problem to a single dimension.
In this paper, we formulate a statistical framework specifically for inferring $\theta^*$ by assuming, just as Vaver and Koehler (2011) do, that the unit-level iROAS $\theta_g$ are all identical.

Assumption 1. $\theta_g = \theta^*$ for all geos $g \in G$.

Although the rigorous verification of Assumption 1 is beyond the scope of this paper, our sensitivity analysis in Section 7 suggests that estimates of $\theta^*$ can still be reliable even if the unit-level iROAS $\theta_g$ moderately differ, while our hypothesis tests in Section 8 indicate that this assumption is compatible with data observed from several real case studies. Following the recommendations of Vaver and Koehler (2011), in the remainder of this paper we consider a randomized paired design where $2n$ geos are matched into $n$ pairs prior to the experiment such that, within each pair, one geo is randomly selected for treatment and the other geo for control. Let $A_i$ be the random assignment for the two geos in the ith pair with $P(A_i = 1) = P(A_i = -1) = \frac{1}{2}$, where $A_i = 1$ indicates that the 1st geo receives treatment and the 2nd geo receives control, while $A_i = -1$ indicates that the 2nd geo receives treatment and the 1st geo receives control. Let $(S_{i1}, R_{i1})$ and $(S_{i2}, R_{i2})$ be the observed spend and response values for the 1st geo and the 2nd geo, respectively, in the ith pair. Let $X_i$ and $Y_i$ be the observed differences in the ad spends and responses, respectively, between the treatment geo and the control geo in the ith pair, that is,
$$X_i = (S_{i1} - S_{i2}) \cdot A_i \quad \text{and} \quad Y_i = (R_{i1} - R_{i2}) \cdot A_i. \quad (3.1)$$
For any $\theta \in \mathbb{R}$, let
$$\varepsilon_i(\theta) = Y_i - \theta X_i. \quad (3.2)$$
Table 1 lists the notation and definitions used for the ith pair of geos.

PROPOSITION 2. Under Assumptions 0 and 1, $\{\varepsilon_i(\theta^*) : 1 \le i \le n\}$ are i.i.d. and each is symmetric about 0.

PROOF. Let $Z_{i1}$ and $Z_{i2}$ be the uninfluenced responses associated with the two geos in the ith pair, i.e. $Z_{ij} = R_{ij} - \theta^* S_{ij}$ for $j = 1, 2$. Then we have
$$\varepsilon_i(\theta^*) = (R_{i1} - R_{i2}) \cdot A_i - \theta^* \cdot (S_{i1} - S_{i2}) \cdot A_i = (Z_{i1} - Z_{i2}) \cdot A_i.$$
Under Assumption 0, $Z_{i1}$ and $Z_{i2}$ are non-random quantities and are invariant to any treatment assignment of the $2n$ geos. The conclusion follows immediately since $\{A_i : i = 1, \ldots, n\}$ are i.i.d. and each $A_i$ is symmetric about 0.

Proposition 2 provides a general framework that facilitates the estimation of $\theta^*$: regardless of how complicated the bivariate distribution of $\{(R_g, S_g) : g \in G\}$ may be, we can always reformulate the causal inference problem in terms of a simpler univariate "location" problem that is defined in terms of the symmetric distribution of each $\varepsilon_i(\theta^*)$. By Proposition 2, the average of $\{\varepsilon_i(\theta^*) : 1 \le i \le n\}$ is expected to be 0, so by setting $\frac{1}{n}\sum_{i=1}^{n} \varepsilon_i(\theta) = 0$ and then solving for $\theta$, we arrive at the following estimator for $\theta^*$:
$$\hat{\theta}^{(emp)} = \frac{\sum_{i=1}^{n} Y_i}{\sum_{i=1}^{n} X_i}, \quad (3.3)$$
which coincides with, and also further motivates, the empirical estimator given in (1.3) with $|T| = |C|$. However, recall from our discussions in Section 1 that the empirical estimator may be unreliable when the bivariate distribution of $\{(R_g, S_g) : g \in G\}$ is heavy-tailed. Although the iROAS estimation problem is fundamentally different from the classical location problem as studied extensively in the statistics literature, the reformulation of the problem in terms of the symmetry of the $\varepsilon_i(\theta^*)$ values about 0 facilitates the application of robust statistical methods to address the heterogeneity issue of geo experiments. For conciseness, we only consider three such techniques in this paper and leave the exploration of other robust statistical methods to future work; we refer the reader to Tukey and McLaughlin (1963), Lehmann (2006), and Huber and Ronchetti (2009) for a comprehensive overview of such techniques. Specifically, we first briefly review the application of the binomial sign test and the Wilcoxon signed-rank test in Section 4.
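The per-pair quantities in (3.1)-(3.3) are straightforward to compute. A short sketch with invented assignments and outcomes (this is not the paper's open-source library, just an illustration of the definitions):

```python
import numpy as np

def paired_diffs(s1, r1, s2, r2, a):
    """Equation (3.1): X_i = (S_i1 - S_i2)*A_i and Y_i = (R_i1 - R_i2)*A_i,
    the spend and response differences (treatment minus control) per pair."""
    s1, r1, s2, r2, a = map(np.asarray, (s1, r1, s2, r2, a))
    return (s1 - s2) * a, (r1 - r2) * a

def epsilon(theta, x, y):
    """Equation (3.2): pair-level difference in uninfluenced responses."""
    return np.asarray(y) - theta * np.asarray(x)

# Invented outcomes for two pairs; A_i = -1 means the 2nd geo got treatment.
x, y = paired_diffs([5, 7], [20, 30], [4, 9], [18, 33], [1, -1])
theta_emp = y.sum() / x.sum()  # equation (3.3)
```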
Afterwards, in Section 5, we develop a robust and more easily interpretable estimator based on the trimmed mean, and we demonstrate its efficiency in practice through simulations and real case studies presented in Sections 7 and 8.

4. Related Work. A similar statistic in the form of (3.2) was first proposed and studied by Rosenbaum (1996, 2002) to generalize an instrumental variable argument made by Angrist, Imbens and Rubin (1996), but in a different context and without the concern of interference as studied in this paper. In this section, we review two distribution-free and covariate-free estimators of $\theta^*$ along the same lines and refer to Rosenbaum (2020) for a more comprehensive overview. For any $\theta \in \mathbb{R}$, let $M_n(\theta)$ be a statistic for testing symmetry where, in the case of the binomial sign test, we have
$$M_n(\theta) = \sum_{i=1}^{n} \left[ I(\varepsilon_i(\theta) > 0) - \frac{1}{2} \right],$$
with $\varepsilon_i(\theta)$ given by (3.2) and where $I(\cdot)$ is the indicator function, while in the case of Wilcoxon's signed-rank test we have
$$M_n(\theta) = \sum_{i=1}^{n} \mathrm{sgn}(\varepsilon_i(\theta)) \cdot \mathrm{rank}(|\varepsilon_i(\theta)|),$$
with $\mathrm{sgn}(\cdot)$ and $\mathrm{rank}(\cdot)$ denoting the sign and rank functions, respectively. We refer the reader to Lehmann (2006) for additional details on tests of symmetry.
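The two test statistics, and the point estimate obtained by minimizing $|M_n(\theta)|$ as in (4.1) below, can be sketched as follows. This is a simplified illustration: ties in the absolute residuals are broken by order rather than averaged, and a grid search stands in for an exact root search.

```python
import numpy as np

def m_sign(theta, x, y):
    """Binomial sign test statistic: #{i: eps_i(theta) > 0} - n/2."""
    eps = np.asarray(y) - theta * np.asarray(x)
    return np.sum(eps > 0) - len(eps) / 2.0

def m_rank(theta, x, y):
    """Wilcoxon signed-rank statistic: sum of sign(eps_i) * rank(|eps_i|).
    Ties are broken by order (a simplification of averaged ranks)."""
    eps = np.asarray(y) - theta * np.asarray(x)
    ranks = np.argsort(np.argsort(np.abs(eps))) + 1  # ranks 1..n
    return float(np.sum(np.sign(eps) * ranks))

def point_estimate(m_stat, x, y, grid):
    """Average of the smallest and largest grid values minimizing
    |M_n(theta)|, mimicking equation (4.1) on a finite grid."""
    grid = np.asarray(grid)
    vals = np.array([abs(m_stat(t, x, y)) for t in grid])
    argmins = grid[vals == vals.min()]
    return (argmins.min() + argmins.max()) / 2.0
```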
In the remainder of this paper, we let $\hat{\theta}^{(binom)}$ and $\hat{\theta}^{(rank)}$ denote the estimators which correspond to the binomial sign test statistic and Wilcoxon's signed-rank test statistic, respectively.

5. The "Trimmed Match" Estimator. In this section, we derive an important estimator for $\theta^*$ based on the trimmed mean under Proposition 2. In particular, for a randomized paired geo experiment, let $\{(X_i, Y_i) : 1 \le i \le n\}$ be as defined in (3.1) and, for any $\theta \in \mathbb{R}$, let $\{\varepsilon_i(\theta) : 1 \le i \le n\}$ be as defined in (3.2) with the corresponding order statistics given by $\varepsilon_{(1)}(\theta) \le \varepsilon_{(2)}(\theta) \le \ldots \le \varepsilon_{(n)}(\theta)$.

Point Estimation. For a fixed value $\lambda \in [0, 1/2)$, the trimmed mean statistic as a function of $\theta$ is defined as:
$$\bar{\varepsilon}_{n\lambda}(\theta) \equiv \frac{1}{n - 2m} \sum_{i=m+1}^{n-m} \varepsilon_{(i)}(\theta), \quad (5.1)$$
where $m \equiv \lceil n\lambda \rceil$ is the minimal integer greater than or equal to $n\lambda$. Here $\lambda$ is a tuning parameter which is commonly referred to as the trim rate and, in order to be well defined, $\lambda$ must satisfy $n - 2m \ge 1$ so that trimming does not remove all $n$ data points. We first develop an estimator for a fixed $\lambda$ and defer discussions on the choice of $\lambda$ to Section 6. By Proposition 2, $\bar{\varepsilon}_{n\lambda}(\theta^*)$ has an expected value of 0, so we can estimate $\theta^*$ by solving
$$\bar{\varepsilon}_{n\lambda}(\theta) = 0. \quad (5.2)$$
When multiple roots exist, we choose the one that minimizes
$$D_{n\lambda}(\theta) \equiv \frac{1}{n - 2m} \sum_{i=m+1}^{n-m} \left| \varepsilon_{(i)}(\theta) + \varepsilon_{(n-i+1)}(\theta) \right|. \quad (5.3)$$
The resulting point estimator can be written as
$$\hat{\theta}^{(trim)}_{\lambda} = \frac{\sum_{i \in I} Y_i}{\sum_{i \in I} X_i}, \quad (5.5)$$
where $I$ is the set of $n - 2m$ untrimmed indices of $\varepsilon_i(\theta)$ used in the calculation of $\bar{\varepsilon}_{n\lambda}(\hat{\theta}^{(trim)}_{\lambda})$ and thus $I$ depends on $\hat{\theta}^{(trim)}_{\lambda}$. Note that if the two geos in the ith pair are perfectly matched in terms of the uninfluenced response, then $\varepsilon_i(\theta^*) = 0$. Therefore, $\hat{\theta}^{(trim)}_{\lambda}$ has a nice interpretation: it trims the poorly matched pairs in terms of the $\varepsilon_i(\theta^*)$ values and estimates $\theta^*$ using only the well-matched untrimmed pairs. Consequently, in this paper, we refer to $\hat{\theta}^{(trim)}_{\lambda}$ as the "Trimmed Match" estimator.
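A naive sketch of the point estimation step: for a fixed trim rate $\lambda$, evaluate the trimmed mean (5.1) and solve (5.2) by bisection. This toy version assumes all spend differences $x_i$ are positive so the trimmed mean is monotone in $\theta$ and the bracket contains a unique root; the paper's actual implementation (Appendix A and the open-source library) uses a faster exact algorithm.

```python
import numpy as np

def trimmed_mean_eps(theta, x, y, lam):
    """Equation (5.1): trimmed mean of the sorted eps_i(theta) = y_i - theta*x_i,
    dropping the m = ceil(n*lam) smallest and m largest order statistics."""
    eps = np.sort(np.asarray(y, dtype=float) - theta * np.asarray(x, dtype=float))
    n = len(eps)
    m = int(np.ceil(n * lam))
    assert n - 2 * m >= 1, "trim rate too large"
    return eps[m:n - m].mean()

def trimmed_match(x, y, lam, lo, hi, tol=1e-10):
    """Solve the estimating equation (5.2) by bisection on [lo, hi],
    assuming a sign change of the trimmed mean over the bracket."""
    f_lo = trimmed_mean_eps(lo, x, y, lam)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = trimmed_mean_eps(mid, x, y, lam)
        if f_lo * f_mid <= 0:   # root lies in [lo, mid]
            hi = mid
        else:                   # root lies in [mid, hi]
            lo, f_lo = mid, f_mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```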
It is worth emphasizing that Trimmed Match directly estimates θ* without having to estimate either the incremental response or the incremental spend. Moreover, the point estimate is calculated after trimming the pairs that are poorly matched in terms of the ε_i(θ̂_λ^(trim)) values, rather than the pairs which are poorly matched with respect to the differences in their response Y_i or ad spend X_i. Indeed, consider an alternative trimmed estimator which does not directly estimate θ*, but instead first separately calculates a trimmed mean estimate of the average incremental response and a trimmed mean estimate of the average incremental ad spend, and then takes their ratio:

  [ Σ_{i∈I_Y} Y_i / |I_Y| ] / [ Σ_{i∈I_X} X_i / |I_X| ],

where the sets I_Y and I_X denote the indices of the untrimmed pairs used for estimating the incremental response and the incremental ad spend, respectively, and where the two sets will generally not be identical. Note, however, that this is not a desirable estimator for θ*, since its numerator and denominator may not even yield unbiased estimates of the average incremental response and the average incremental spend, respectively, as neither {Y_i : 1 ≤ i ≤ n} nor {X_i : 1 ≤ i ≤ n} is expected to follow a symmetric distribution even if all of the geo pairs are perfectly matched in terms of their "uninfluenced responses". Finally, it is also interesting to note the connection between trimming poorly matched pairs and the theory presented in Small and Rosenbaum (2008), which shows that a smaller study with a stronger instrument is likely to be more powerful and less sensitive to biases than a larger study with a weaker instrument. These arguments were later supported empirically by Baiocchi et al.
(2010), who studied a similar effect ratio in the form of (1.2) and showed that optimally removing about half of the data in order to define fewer pairs with similar pre-treatment covariates but a stronger instrument resulted in shorter confidence intervals and more reliable conclusions. Our Trimmed Match method also identifies and trims poorly matched pairs, but does not rely on pre-treatment covariates.

5.2. Confidence Interval. Define the studentized trimmed mean statistic (Tukey and McLaughlin, 1963) with respect to {ε_i(θ) : 1 ≤ i ≤ n} as follows:

  T_nλ(θ) = ε̄_nλ(θ) / [ σ̂_nλ(θ) / √(n − 2m − 1) ],   (5.6)

where

  σ̂²_nλ(θ) = [ m·ε_(m+1)(θ)² + Σ_{i=m+1}^{n−m} ε_(i)(θ)² + m·ε_(n−m)(θ)² − n·w̄_nλ(θ)² ] / (n − 2m)

is the winsorized variance estimate for ε̄_nλ(θ), and

  w̄_nλ(θ) = [ m·ε_(m+1)(θ) + Σ_{i=m+1}^{n−m} ε_(i)(θ) + m·ε_(n−m)(θ) ] / n

is the winsorized mean of the ε_i(θ)'s. The Trimmed Match confidence interval is constructed by determining the minimal interval containing all θ ∈ R satisfying

  |T_nλ(θ)| ≤ c,   (5.7)

where the threshold c is chosen such that P(|T_nλ(θ*)| ≤ c) = 1 − α. Under mild conditions, T_nλ(θ*) approximately follows a Student's t-distribution with n − 2m − 1 degrees of freedom, and we therefore set c to be the (1 − α/2) quantile of this distribution. Alternatively, one can also choose the threshold c by using Fisher's randomization test approach (see, for example, Rosenbaum (2002) and Ding, Feller and Miratrix (2016)), relying on the fact that the distribution of ε_i(θ*) is symmetric about zero for each i.

However, when constructing the confidence interval, it is also important to recognize that the trim rate λ is unknown in practice. Later, in Section 6, we propose a data-driven estimate of this trim rate which can be plugged in to construct the confidence interval, although such an interval may suffer from undercoverage in finite samples as it ignores the uncertainty associated with estimating this tuning parameter (Ding, Feller and Miratrix, 2016).
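A concrete sketch of (5.6) and (5.7) by grid search follows; the same machinery also illustrates the data-driven trim-rate choice of Section 6 (minimizing the width of the 50% interval). The critical value below is an assumed rough t-quantile rather than a computed one, and a real implementation would use the quadratic-inequality reduction of Appendix A rather than a grid.

```python
import numpy as np
from math import ceil, sqrt

def t_stat(theta, x, y, lam):
    """Studentized trimmed mean T_nlam(theta) of the residuals y - theta*x, cf. (5.6)."""
    n = len(x)
    m = ceil(n * lam)
    e = np.sort(y - theta * x)
    core = e[m:n - m]
    # Winsorized mean/variance: the m smallest residuals are replaced by
    # e_(m+1) and the m largest by e_(n-m) before averaging.
    wmean = (m * e[m] + core.sum() + m * e[n - m - 1]) / n
    wvar = (m * e[m] ** 2 + (core ** 2).sum() + m * e[n - m - 1] ** 2
            - n * wmean ** 2) / (n - 2 * m)
    return core.mean() / (sqrt(wvar) / sqrt(n - 2 * m - 1))

def ci_width(x, y, lam, c, grid):
    """Width of {theta : |T(theta)| <= c}, approximated on a grid of theta values."""
    inside = [t for t in grid if abs(t_stat(t, x, y, lam)) <= c]
    return (max(inside) - min(inside)) if inside else np.inf

def choose_trim_rate(x, y, candidates, c, grid):
    """Section 6 idea: pick the trim rate minimizing the 50% CI width."""
    widths = [ci_width(x, y, lam, c, grid) for lam in candidates]
    return candidates[int(np.argmin(widths))]

rng = np.random.default_rng(2)
x = rng.uniform(1, 2, 60)
y = 10.0 * x + rng.normal(0, 0.5, 60)
grid = np.linspace(5, 15, 2001)
c50 = 0.68  # assumed rough 75% t-quantile (alpha_0 = 0.5), not computed here
lam_hat = choose_trim_rate(x, y, [0.0, 0.05, 0.1, 0.15, 0.2], c50, grid)
print(lam_hat)
```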
Interestingly, however, our numerical studies in Section 7 suggest that the empirical coverage of the confidence intervals constructed using the estimated trim rates is often quite close to the nominal level even when n is small, a finding which is consistent with the observation that the studentized trimmed mean belongs to the class of "less vulnerable confidence and significance procedures" for the classical location problem (Tukey and McLaughlin, 1963).

6. Data-driven Choice of the Trim Rate λ. For the location problem, Jaeckel (1971) proposed minimizing the empirical estimate of the asymptotic variance when choosing the trimmed mean's trim rate λ, while Hall (1981) proved the general consistency of this approach. Similarly, we could choose the trim rate λ for Trimmed Match by minimizing an estimate of the asymptotic variance of θ̂_λ^(trim), which can be derived by assuming an independent sample for (X, Y).³ This may apply to the scenario without budget constraints, but not with budget constraints. We note that the essential idea of Jaeckel (1971) is to choose the trim rate by minimizing the uncertainty of the point estimate, as measured by the approximate variance. To handle the scenarios both without and with budget constraints, we extend this idea and propose to choose the trim rate by minimizing the uncertainty of the point estimate as measured by the width of the 100(1 − α₀)% two-sided confidence interval defined in Section 5.2, where α₀ ∈ (0, 1) is pre-specified. Although different levels of α₀ can be used, our numerical studies in Section 7 suggest α₀ = 0.5 to be a reliable choice in terms of mean squared error performance for both light- and heavy-tailed distributions. Hereafter we use λ̂ to denote this data-driven trim rate using α₀ = 0.5.

7. Simulation and Sensitivity Analysis.
In this section, we present several numerical simulations which evaluate the performance and sensitivity of the Trimmed Match estimator θ̂_λ^(trim) defined by (5.4). We consider interference between experimental units due to the presence of a budget constraint on the total incremental ad spend. To meet the requirement of Assumption 0, we consider scenarios where the potential ad spend under the control condition is pre-specified for each geo and is not affected by the geo assignment, while the potential ad spend under the treatment condition may vary subject to the geo assignment, as discussed in Section 3. In particular, for simulations where Assumption 1 holds, we investigate how the choice of the trim rate λ affects the performance of θ̂_λ^(trim) and, more broadly, we compare its performance against the empirical estimator θ̂^(emp) given by (3.3), as well as the binomial sign-test estimator θ̂^(binom) and the Wilcoxon signed-rank-test estimator θ̂^(rank) defined in Section 4. Meanwhile, for simulations where Assumption 1 is violated, we investigate how the level of deviation from Assumption 1 affects the performance of these estimators.

For each simulation scenario, we first simulate the size of each geo g = 1, 2, ..., 2n as

  z_g = F^{-1}( g / (2n + 1) ),

where F controls the amount of geo heterogeneity in the population and is taken to be either a half-normal, a log-normal, or a half-Cauchy distribution. The geos are then paired based on these sizes (the largest two geos form a pair, the third and fourth largest geos form a pair, and so on), and afterwards the geos are randomized within each pair, which determines whether a geo's control or treatment potential outcome is observed. We list the detailed simulation steps as follows.

Step 1.
Simulate the potential ad spend and response under the control condition according to a nonlinear relationship which is not affected by the geo assignment: for g ∈ {1, ..., 2n},

  S_g^(C) = 0.01 · z_g · (1 + 0.25 · (−1)^g)  and  R_g^(C) = z_g.

Let B = 0.25 · r · Σ_{g=1}^{2n} S_g^(C) be the pre-specified budget for the total incremental ad spend, where r > 0 is a parameter controlling the intensity of the incremental ad spend.

Step 2. Given a geo assignment, denoted by (T, C) for the treatment and control groups respectively, the incremental ad spend for each g ∈ T is proportional to its potential control spend:

  Δ_g^S = S_g^(C) · B / Σ_{g′∈T} S_{g′}^(C).

Step 3. Observed ad spend and response: for each g ∈ C, S_g = S_g^(C) and R_g = R_g^(C); for each g ∈ T, S_g = S_g^(C) + Δ_g^S and R_g = R_g^(C) + θ_g · Δ_g^S, where θ_g = θ_0 · (1 + δ · (−1)^g) is the iROAS for geo g ∈ {1, 2, ..., 2n}, with δ ∈ [0, 1] controlling the level of deviation from Assumption 1.

The overall iROAS θ* as defined in (1.2) can be rewritten as

  θ* = Σ_{g=1}^{2n} θ_g · Δ_g^S / Σ_{g=1}^{2n} Δ_g^S,

which may no longer be well defined when the potential outcomes depend on the geo assignment. When Assumption 1 holds, i.e. θ_g ≡ θ_0 for all g ∈ {1, ..., 2n}, then θ* ≡ θ_0 for any assignment. When Assumption 1 is violated, to get around the ill-definition, we may assume a virtual experiment where all 2n geos are assigned to treatment with a doubled total incremental budget, i.e. 2B, and where the incremental budget for each geo is still proportional to its potential control spend. It is easy to show that for this virtual experiment Δ_g^S = 0.5 · r · S_g^(C) for each g, and that the overall iROAS simplifies to a well-defined static quantity:

  θ* = θ_0 + δ · θ_0 · [ Σ_g z_g · (0.25 + (−1)^g) ] / [ Σ_g z_g · (1 + 0.25 · (−1)^g) ],

which will be treated as the source of truth.
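The steps above can be sketched as follows. The within-pair coin flip and the choice of a standard log-normal inverse CDF are our own illustrative assumptions; θ* is computed from the closed form for the virtual experiment.

```python
import numpy as np
from statistics import NormalDist

def lognormal_inv(u):
    """Inverse CDF of a standard log-normal (one illustrative choice of F)."""
    nd = NormalDist()
    return np.exp(np.array([nd.inv_cdf(float(ui)) for ui in np.atleast_1d(u)]))

def theta_star(z, theta0, delta):
    """Closed-form overall iROAS of the virtual experiment (source of truth)."""
    g = np.arange(1, len(z) + 1)
    num = np.sum(z * (0.25 + (-1.0) ** g))
    den = np.sum(z * (1 + 0.25 * (-1.0) ** g))
    return theta0 + delta * theta0 * num / den

def simulate_pairs(n, theta0, r, delta, F_inv, rng):
    """One replicate of Steps 1-3, returning the pair differences (X_i, Y_i) of (3.1)."""
    g = np.arange(1, 2 * n + 1)
    z = F_inv(g / (2 * n + 1))                       # geo sizes
    S_C = 0.01 * z * (1 + 0.25 * (-1.0) ** g)        # potential control spend
    R_C = z                                          # potential control response
    B = 0.25 * r * S_C.sum()                         # total incremental budget
    theta_g = theta0 * (1 + delta * (-1.0) ** g)     # geo-level iROAS

    pairs = np.argsort(-z).reshape(n, 2)             # pair by size, then randomize
    flip = rng.random(n) < 0.5
    t_idx = np.where(flip, pairs[:, 1], pairs[:, 0])
    c_idx = np.where(flip, pairs[:, 0], pairs[:, 1])
    dS = np.zeros(2 * n)
    dS[t_idx] = S_C[t_idx] * B / S_C[t_idx].sum()    # Step 2
    S, R = S_C + dS, R_C + theta_g * dS              # Step 3
    return S[t_idx] - S[c_idx], R[t_idx] - R[c_idx]  # treatment minus control

rng = np.random.default_rng(4)
X, Y = simulate_pairs(50, 10.0, 2.0, 0.0, lognormal_inv, rng)
print(X.shape, Y.shape)
```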
To summarize, the simulation parameters which are allowed to vary from scenario to scenario are the number of geo pairs n, the distribution F controlling the amount of geo heterogeneity, the iROAS scale θ_0, the intensity of the incremental ad spend r, and the level of deviation δ from Assumption 1. Within each scenario, we then simulate K = 10,000 random assignments, a process that determines which bivariate outcome (S_g, R_g) is actually observed for each geo g, as well as the observed differences (X_i, Y_i) defined in (3.1) for each geo pair i. Note that this assignment mechanism is the only source of randomness within each of our simulations. We summarize the results for n = 50 and θ_0 = 10 for each scenario reported in this section, although we note that other simulation parameters (e.g. n = 25) yielded similar conclusions.

The performance of an estimator's point estimate θ̂ is evaluated in terms of its root mean square error

  RMSE(θ̂) = √[ (1/K) Σ_{k=1}^K ( θ̂^(k) − θ* )² ]

and its bias

  Bias(θ̂) = (1/K) Σ_{k=1}^K ( θ̂^(k) − θ* ),

where θ̂^(k) is the estimated value of θ* from the k-th replicate. Meanwhile, the performance of an estimator's 100(1 − α)% confidence interval (θ̂_{α/2}, θ̂_{1−α/2}) is measured in terms of its one-sided power

  Power(θ̂) = (1/K) Σ_{k=1}^K I( θ̂^(k)_{α/2} > 0 )

and its two-sided empirical coverage

  Coverage(θ̂) = (1/K) Σ_{k=1}^K I( θ̂^(k)_{α/2} < θ* < θ̂^(k)_{1−α/2} ),

where (θ̂^(k)_{α/2}, θ̂^(k)_{1−α/2}) denotes the confidence interval from the k-th replicate.

7.1. Performance Comparison When Assumption 1 Holds. We first fix δ = 0 (i.e. Assumption 1 holds) to investigate the performance of the estimators as we vary the geo heterogeneity F ∈ {Half-Normal, Log-Normal, Half-Cauchy} and the incremental ad spend intensity r ∈ {0.5, 1, 2}. Table 2 summarizes the simulation results in terms of each estimator's RMSE and bias.
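These four metrics are straightforward to compute from the K replicates; a small sketch:

```python
import numpy as np

def rmse(est, theta_star):
    """Root mean square error over K replicates."""
    est = np.asarray(est, dtype=float)
    return float(np.sqrt(np.mean((est - theta_star) ** 2)))

def bias(est, theta_star):
    """Mean signed error over K replicates."""
    return float(np.mean(np.asarray(est, dtype=float) - theta_star))

def power(ci_lower):
    """One-sided power: fraction of replicates whose lower CI bound exceeds 0."""
    return float(np.mean(np.asarray(ci_lower) > 0))

def coverage(ci_lower, ci_upper, theta_star):
    """Fraction of replicates whose interval contains theta_star."""
    lo, hi = np.asarray(ci_lower), np.asarray(ci_upper)
    return float(np.mean((lo < theta_star) & (theta_star < hi)))

print(rmse([9, 11, 10, 10], 10))
```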
Here we see that the RMSE and bias of every estimator improve as the intensity of the incremental ad spend r increases, and we note that the rank-test-based estimator θ̂^(rank) is generally more efficient than the sign-test-based estimator θ̂^(binom), but both of them perform poorly relative to the Trimmed Match estimator θ̂_λ̂^(trim). In addition, recall that the Trimmed Match estimator θ̂_λ^(trim) coincides with the empirical estimator θ̂^(emp) when the trim rate λ = 0. Thus, if we focus specifically on the performance of the Trimmed Match estimator, we see that some level of trimming can be beneficial when the geo sizes are generated from the more heterogeneous log-normal and half-Cauchy distributions. It is interesting to note that the data-driven choice λ̂ generally performs better than a fixed choice of the trim rate λ ∈ {0, 0.10}, even for the half-normal scenario at r = 0.5. For each scenario, RMSEs greater than 2 times the RMSE of the best-performing estimator are colored in red. Among all the estimators, θ̂_λ̂^(trim) is clearly the most robust across the three distributions.

Meanwhile, Table 3 summarizes the power and empirical coverage of each estimator's accompanying 90% confidence interval. Although the table suggests that the Trimmed Match estimator with the data-driven estimate λ̂ of the trim rate can suffer slightly from undercoverage, a result which agrees with our discussion in Section 5.2, we note that this estimator also provides considerably more power than θ̂^(emp) and θ̂^(binom) when there is a low (r = 0.5) or moderate (r = 1.0) level of incremental ad spend, more power than θ̂^(rank) when there is a low (r = 0.5) level of incremental ad spend, and is quite competitive in the other scenarios.

7.2. Sensitivity Analysis When Assumption 1 is Violated. We now fix r = 1.0 (a moderate level of incremental ad spend) and evaluate the performance of the estimators when Assumption 1 is violated, that is, when the geo-level iROAS are no longer all the same.
Instead, in these simulations, half of the geos have an iROAS of θ_0(1 − δ) while the other half have an iROAS of θ_0(1 + δ), according to Step 3. Figure 1 compares the performance of the estimators θ̂^(emp), θ̂^(binom), θ̂^(rank) and θ̂_λ̂^(trim) in terms of a scaled RMSE. Here we see that θ̂_λ̂^(trim) significantly outperforms θ̂^(binom) and θ̂^(rank) in all scenarios. The empirical estimator θ̂^(emp) slightly outperforms the Trimmed Match estimator in the case of the half-normal distribution; however, the Trimmed Match estimator significantly outperforms the empirical estimator in the cases of the heavier-tailed log-normal and half-Cauchy distributions, where there is stronger geo heterogeneity. Moreover, we see that the Trimmed Match estimator still provides a useful estimate of θ* even when Assumption 1 is heavily violated at δ = 1.

8. Real Case Studies. Next, we report real data analyses from three different geo experiments, referred to as "A", "B" and "C", respectively, which were run either in the United States or in Canada. Each of these three experiments focused on a different business vertical, but all were designed with randomized matched geo pairs. The number of geo pairs ranges from 60 to 105. Each experiment took a few weeks, split into two time periods: a test period during which the experiment took place, and a cooldown period during which the treatment geos were returned to the control condition to account for potential lagged effects. For each geo g, S_g and R_g are the ad spend and response aggregated over both time periods. In experiments A and B, the advertisers wanted to measure the iROAS of an improved ad format for their business: geos assigned to the control group would run business as usual using the existing ad format, while geos assigned to the treatment group would adopt the improved ad format.
For either the control group or the treatment group, the actual ad spend was much lower than the budget pre-specified by the advertisers, and thus we expect no interference, which implies Assumption 0. Experiment C was run to measure the iROAS of a new ad. The ad would be shown only to geos in the treatment group, not to the control group. The advertiser pre-specified a budget for the total ad spend. After the experiment, the total ad spend for the control group was 0, while that for the treatment group was equal to the budget, which implies strong interference. The potential control ad spend is 0 for each geo, and thus Assumption 0 holds, as discussed in Section 3.

To illustrate the heavy-tailedness of the data, scatter plots of (X, Y) for the three case studies are provided in Figure 2, where the scales of both X and Y are removed in order to anonymize the experiments. In Table 4, we report the kurtosis, as a measure of heavy-tailedness, of the empirical distributions of {X_i : 1 ≤ i ≤ n}, {Y_i : 1 ≤ i ≤ n} and {Y_i − θ̂_λ̂^(trim) X_i : 1 ≤ i ≤ n}, all of which are much larger than 3, the kurtosis of any univariate normal distribution. Table 4 also lists the point estimates and confidence intervals for the empirical estimator θ̂^(emp), the sign-test-based estimator θ̂^(binom), the rank-test-based estimator θ̂^(rank), and the Trimmed Match estimator θ̂_λ̂^(trim). Here we see that θ̂^(rank) and θ̂_λ̂^(trim) yield similar results except in experiment B, where the confidence interval from the rank-based estimator is almost 74% wider than Trimmed Match's. Meanwhile, the sign-test-based estimator gives confidence intervals that are wider still, by a considerable amount, for all experiments.

It is also interesting to note that the data-driven choice of Trimmed Match's trim rate results in no trimming (coinciding with the empirical estimator θ̂^(emp)) for case B despite the heavy-tailedness of the data, likely due to the approximately linear relationship between X and Y (as shown in Figure 2), so that trimming a geo pair with a large |ε_i(θ*)| may not necessarily reduce the variance. As experiment A demonstrates, however, the empirical estimator can sometimes be very unreliable, as it is heavily affected by outliers (as shown in Figures 2 and 3). Meanwhile, Figure 3 plots the Trimmed Match point estimate and confidence interval as a function of the trim rate λ; in case A, note the significant reduction in the size of the confidence interval relative to the empirical estimator θ̂^(emp), i.e. no trimming.

We also investigate whether the real data are incompatible with the statistical framework that we developed in Section 3 under Assumption 1, which assumes that the geo-level iROAS θ_g are all equal to one another. Recall from Proposition 2 that the distribution of the residuals {ε_i(θ*) : 1 ≤ i ≤ n} is symmetric about 0, where n is the number of geo pairs. Therefore, we expect {ε_i(θ̂_λ̂^(trim)) : 1 ≤ i ≤ n} to be approximately symmetric about 0 as well, a null hypothesis which we can test by using the Wilcoxon signed-rank test. The corresponding p-values⁴ are 0.66, 0.55 and 0.85 for the three real case studies, which suggests that the real data are not incompatible with Assumption 1. Figure 4 shows boxplots of the fitted residuals for the three cases, which illustrate the heavy-tailedness as well as the approximate symmetry.

9. Discussion.
In this paper, we introduced the iROAS estimation problem in online advertising and formulated a novel statistical framework for its causal inference in a randomized paired geo experiment design, which is often complicated by the issues of small sample sizes, geo heterogeneity, and interference due to budgetary constraints on ad spend. Moreover, we proposed and developed a robust, distribution-free, and easily interpretable Trimmed Match estimator which adaptively trims poorly matched geo pairs. In addition, we devised a data-driven choice of the trim rate which extends Jaeckel's idea but does not rely on an asymptotic variance approximation, and presented numerical studies showing that Trimmed Match is often more efficient than alternative methods even when some of its assumptions are violated.

Several open research questions of considerable interest remain, such as 1) using Trimmed Match to improve the design of matched pairs experiments, 2) using covariates to further improve the estimation precision (Rosenbaum, 2002), 3) estimating the geo-level iROAS, and 4) further investigation of the choice of the trim rate and the corresponding asymptotic analysis, e.g. using sample splitting (Klaassen, 1987; Nie and Wager, 2017). Finally, although this paper focused on the estimation of the iROAS of online advertising in geo experiments, we note that Trimmed Match can also be applied to matched pairs experiments in other areas where the ratio of two causal estimands is of interest (e.g., the incremental cost-effectiveness ratio (Chaudhary and Stearns, 1996; Bang and Zhao, 2014); see Chapter 5.3 of Rosenbaum (2020) for more examples).

APPENDIX A: FAST COMPUTATION OF TRIMMED MATCH

Recall from Section 5.1 that obtaining the Trimmed Match point estimate θ̂_λ^(trim) requires finding all roots of the trimmed mean equation (5.2). Moreover, recall that this computation is trivial when λ = 0, as θ̂_λ^(trim) then corresponds to the empirical estimator given by (1.3).
Therefore, in the remainder of this section, we focus on the computation for a fixed trim rate λ > 0. Although (5.5) implies that calculating θ̂_λ^(trim) is straightforward once its corresponding set of n − 2m untrimmed indices I is known, I is generally unknown a priori, as it depends on θ̂_λ^(trim). One could, at least in theory, check all possible subsets of size n − 2m, but this brute force approach requires the evaluation of (n choose 2m) such subsets and would be computationally too expensive to be usable in practice when m is large. However, by instead considering how the ordering of the values in the set {ε_i(θ) : 1 ≤ i ≤ n} changes as a function of θ ∈ R, in particular by enumerating all possible values of θ at which this ordering changes, we are able to devise an efficient O(n² log n) algorithm for finding all of the roots of (5.2), which are required by (5.4).

Following (3.1), let {(x_i, y_i) : 1 ≤ i ≤ n} be the differences in the ad spends and responses observed from a randomized paired geo experiment. For notational simplicity, assume that {(x_i, y_i) : 1 ≤ i ≤ n} is ordered such that x_1 < x_2 < ... < x_n.

LEMMA 2. For any two pairs of geos i and j such that 1 ≤ i < j ≤ n, let

  θ_ij = (y_j − y_i) / (x_j − x_i).

Then ε_i(θ) < ε_j(θ) if and only if θ < θ_ij.

Note that ties in {x_i : 1 ≤ i ≤ n} rarely occur in practice; when ties do occur, they can be broken by adding a small amount of random noise to the x_i's. Lemma 2, whose proof is straightforward and is omitted, allows us to efficiently solve the Trimmed Match equation defined by (5.2).

Algorithm 1: Solving the Trimmed Match Equation (5.2)
Input: {(x_i, y_i) : 1 ≤ i ≤ n} and trim rate λ > 0; let m ≡ ⌈nλ⌉.
Output: roots of (5.2).
i) Reorder the pairs {(x_i, y_i) : 1 ≤ i ≤ n} such that x_1 < ... < x_n; calculate {θ_ij : 1 ≤ i < j ≤ n} and order them such that θ_{i_1 j_1} < θ_{i_2 j_2} < ... < θ_{i_N j_N}. (Break ties with a negligible random perturbation if needed.)
ii) Initialize the set of untrimmed indices with I = {i : m < i ≤ n − m}; initialize a = Σ_{i∈I} y_i, b = Σ_{i∈I} x_i, and two ordered sets Θ_1 = {} and Θ_2 = {}.
iii) For k = 1, ..., N:
 a) If i_k ∈ I and j_k ∉ I, then update I, a, b as follows:
   I ← I + {j_k} − {i_k},  a ← a + y_{j_k} − y_{i_k},  b ← b + x_{j_k} − x_{i_k},
  and append a/b to Θ_1 and θ_{i_k j_k} to Θ_2.
 b) If j_k ∈ I and i_k ∉ I, then update I, a and b similarly to (a), and append a/b to Θ_1 and θ_{i_k j_k} to Θ_2.
 c) Otherwise, continue.
iv) Append ∞ to Θ_2, and output the following subset of Θ_1: for k = 1, ..., |Θ_1|, output Θ_1[k] iff Θ_2[k] ≤ Θ_1[k] ≤ Θ_2[k + 1].

A.1. Solving the Trimmed Match Equation. For ease of exposition, assume that {θ_ij : 1 ≤ i < j ≤ n} has been ordered such that θ_{i_1 j_1} ≤ θ_{i_2 j_2} ≤ ... ≤ θ_{i_N j_N}, where N = n(n − 1)/2. Then, for any k = 1, 2, ..., N − 1, Lemma 2 implies that the ordering of {ε_i(θ) : 1 ≤ i ≤ n} is the same for all θ ∈ (θ_{i_k j_k}, θ_{i_{k+1} j_{k+1}}) and, thus, the set of untrimmed indices

  I(θ) ≡ { 1 ≤ i ≤ n : ε_(m+1)(θ) ≤ ε_i(θ) ≤ ε_(n−m)(θ) }

must also be the same for all θ ∈ (θ_{i_k j_k}, θ_{i_{k+1} j_{k+1}}). Moreover, Lemma 2 also implies that as θ increases and crosses a point θ_{i_k j_k}, then for any 1 ≤ i < j ≤ n, the ordering between ε_i(θ) and ε_j(θ) changes if and only if (i, j) = (i_k, j_k) or (i, j) = (j_k, i_k). Therefore, we can sequentially update the set of untrimmed indices I(θ) based on what occurs as θ increases and crosses each point θ_{i_1 j_1}, θ_{i_2 j_2}, ..., θ_{i_N j_N}. If i_k, j_k ∈ I(θ) or if i_k, j_k ∉ I(θ), then I(θ) remains unchanged; if i_k ∈ I(θ) but j_k ∉ I(θ), then we update I(θ) by replacing i_k with j_k; if i_k ∉ I(θ) but j_k ∈ I(θ), then we update I(θ) by replacing j_k with i_k. Pseudocode further describing this O(n² log n) procedure is provided in Algorithm 1.

A.2. Computing the Confidence Interval. Lemma 2 also facilitates the calculation of the confidence interval by reducing (5.7) to a quadratic inequality.
The specific details are omitted from this paper for conciseness but are available from the authors upon request.

A.3. Existence of θ̂_λ^(trim). From our discussions in this section, it is not obvious whether the Trimmed Match point estimate θ̂_λ^(trim) always exists. However, the following theorem, whose proof is also omitted for conciseness, guarantees that it does indeed always exist as long as the trimmed mean of the x_i's is nonzero.

THEOREM 1. (Existence) Suppose that {(x_i, y_i) : 1 ≤ i ≤ n} is ordered such that x_1 ≤ x_2 ≤ ... ≤ x_n. Then: 1) ε̄_nλ(θ) is a continuous function with respect to θ ∈ R; 2) if Σ_{i=m+1}^{n−m} x_i ≠ 0, then ε̄_nλ(θ) = 0 has at least one root.

PROPOSITION 2. With a randomized paired design for geo experiments, under Assumptions 0 and 1, {ε_i(θ*) : i = 1, ..., n} are mutually independent and the distribution of ε_i(θ*) is symmetric about 0 for i = 1, ..., n.

Figure 1: Comparison of each estimator's performance in terms of a scaled RMSE, where the x-axis (δ) quantifies the level of deviation from Assumption 1. Here the budget is fixed.

Figure 2: The scatter plot of (X, Y) for each of the three real case studies, where the horizontal and vertical lines pass through the origin but the detailed scales of both X and Y are removed to anonymize the experiments.

Figure 3: The Trimmed Match point estimates and confidence intervals as a function of the trim rate λ for each of the three real case studies (rescaled by the point estimate θ̂_λ̂^(trim) to anonymize the experiments). The vertical dashed line corresponds to the data-driven estimate λ̂ of the trim rate.
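A compact sketch of the sweep described in A.1 follows. It uses 0-based indices, assumes general position (distinct x_i and distinct crossing points), and additionally checks the initial segment before the first crossing, which the pseudocode of Algorithm 1 leaves implicit.

```python
import numpy as np
from itertools import combinations
from math import ceil

def trimmed_match_roots(x, y, lam):
    """All roots of the trimmed mean equation (5.2), via the crossing-point sweep."""
    n = len(x)
    m = ceil(n * lam)
    order = np.argsort(x)
    x = np.asarray(x, dtype=float)[order]
    y = np.asarray(y, dtype=float)[order]

    # Crossing points theta_ij at which eps_i and eps_j swap order (Lemma 2).
    events = sorted(((y[j] - y[i]) / (x[j] - x[i]), i, j)
                    for i, j in combinations(range(n), 2))

    # For theta below all crossings, the eps ordering equals the x ordering.
    in_set = [m <= i < n - m for i in range(n)]
    a = float(y[m:n - m].sum())
    b = float(x[m:n - m].sum())

    roots, prev = [], -np.inf
    for t, i, j in events + [(np.inf, -1, -1)]:
        # On the segment (prev, t], the trimmed mean is (a - theta*b)/(n-2m),
        # so its root is a/b; keep it only if it falls inside the segment.
        if b != 0.0:
            cand = a / b
            if prev < cand <= t:
                roots.append(cand)
        if i >= 0 and in_set[i] != in_set[j]:
            if in_set[i]:    # i leaves the untrimmed set, j enters
                a += y[j] - y[i]
                b += x[j] - x[i]
            else:            # j leaves the untrimmed set, i enters
                a += y[i] - y[j]
                b += x[i] - x[j]
            in_set[i], in_set[j] = in_set[j], in_set[i]
        prev = t
    return roots

rng = np.random.default_rng(6)
x = rng.uniform(1.0, 2.0, 15)
y = 5.0 * x + rng.normal(0.0, 1.0, 15)
roots = trimmed_match_roots(x, y, 0.2)
print(roots)
```

By Theorem 1, at least one root exists here since all x_i are positive.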
Figure 4: Boxplots of the residuals, where the residuals are rescaled by their maximum absolute value for each case separately.

TABLE 2. Performance comparison w.r.t. RMSE (bias) when Assumption 1 holds.

distribution  r    θ̂(emp)         θ̂(binom)         θ̂(rank)         θ̂(trim)_0.10    θ̂(trim)_λ̂
Half-Normal   0.5  2.54 (0.29)    688.47 (1.20)    27.82 (0.64)    40.40 (0.78)    1.09 (-0.07)
Half-Normal   1.0  0.35 (0.05)    490.91 (19.51)   5.30 (0.31)     0.51 (0.12)     0.38 (-0.00)
Half-Normal   2.0  0.16 (0.01)    1.59 (0.94)      0.24 (0.04)     0.19 (0.03)     0.19 (-0.01)
Log-Normal    0.5  20.09 (0.64)   224.89 (-26.62)  111.33 (-0.16)  114.92 (1.07)   1.96 (0.10)
Log-Normal    1.0  0.86 (0.11)    219.46 (-2.93)   7.84 (0.35)     0.60 (0.14)     0.54 (0.04)
Log-Normal    2.0  0.41 (0.03)    1.65 (0.99)      0.28 (0.05)     0.23 (0.03)     0.24 (0.00)
Half-Cauchy   0.5  10.26 (-1.92)  2143.22 (-6.85)  446.95 (6.96)   222.86 (-0.73)  5.20 (1.02)
Half-Cauchy   1.0  4.49 (-0.37)   299.33 (9.33)    2.25 (0.63)     1.03 (0.27)     1.27 (0.21)
Half-Cauchy   2.0  2.20 (-0.08)   1.80 (1.06)      0.47 (0.12)     0.34 (0.06)     0.36 (0.02)

TABLE 3. Comparison of power (empirical coverage) when Assumption 1 holds.

distribution  r    θ̂(emp)    θ̂(binom)  θ̂(rank)   θ̂(trim)_0.10  θ̂(trim)_λ̂
Half-Normal   0.5  86 (95)    3 (95)     52 (95)    55 (95)       88 (92)
Half-Normal   1.0  100 (89)   9 (97)     91 (93)    99 (90)       100 (87)
Half-Normal   2.0  100 (89)   100 (93)   100 (90)   100 (89)      100 (86)
Log-Normal    0.5  16 (96)    1 (94)     50 (95)    52 (95)       60 (92)
Log-Normal    1.0  47 (93)    9 (96)     97 (92)    100 (90)      99 (88)
Log-Normal    2.0  100 (93)   100 (93)   100 (90)   100 (90)      100 (87)
Half-Cauchy   0.5  0 (100)    0 (96)     7 (95)     9 (96)        13 (92)
Half-Cauchy   1.0  0 (100)    28 (97)    98 (91)    98 (93)       94 (92)
Half-Cauchy   2.0  0 (100)    100 (93)   100 (90)   100 (93)      100 (…)

TABLE 4. Summary of the three real case studies in terms of the kurtosis (rounded to the nearest 10) of the empirical distributions, the point estimates and confidence intervals obtained from the different estimators (rescaled by the point estimate θ̂_λ̂^(trim) to anonymize the experiments), and Trimmed Match's data-driven estimate λ̂ of the trim rate.

Case  Kurt(X)  Kurt(Y)  Kurt(ε̂)  θ̂(emp)         θ̂(binom)       θ̂(rank)        θ̂(trim)_λ̂     λ̂
A     30       80       80        2.79           0.84           1.09           1.00           0.22
                                  [-1.26, 5.69]  [0.23, 1.81]   [0.31, 1.85]   [0.25, 1.74]
B     50       40       10        1.00           0.81           0.87           1.00           0.00
                                  [0.81, 1.12]   [0.01, 1.06]   [0.56, 1.10]   [0.81, 1.12]
C     10       10       10        1.17           0.52           0.87           1.00           0.02
                                  [0.14, 2.18]   [-1.50, 2.69]  [-0.32, 1.97]  [-0.22, 1.94]

Footnotes:
1. https://www.nielsen.com/intl-campaigns/us/dma-maps.html
2. For a list of "geo targets" supported by Google AdWords, see: https://developers.google.com/adwords/api/docs/appendix/geotargeting
3. Some asymptotic analysis is reported in an earlier version at https://arxiv.org/abs/1908.02922v2.
4. Note that the p-values are based on the estimated θ* (instead of the true value, which is unknown) and thus may not be accurate.

Acknowledgements. The authors would like to thank Art Owen and Jim Koehler for insightful early discussion, Peter Bickel for the reference to Jaeckel's paper on the choice of trim rate, Nicolas Remy, Penny Chu and Tony Fagan for the support, Jouni Kerman, Yin-Hsiu Chen, Matthew Pearce, Fan Zhang, Jon Vaver, Susanna Makela, Kevin Benac, Marco Longfils and Christoph Best for interesting discussions, and the people who read and commented on the manuscript. We appreciate Editor Beth Ann Griffin and the anonymous reviewers, whose comments have helped improve the paper significantly. All the figures are produced with the R package ggplot2 (Wickham, 2016).

REFERENCES

ANGRIST, J. D., IMBENS, G. W. and RUBIN, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91 444-455.
ATHEY, S., ECKLES, D. and IMBENS, G. W. (2018). Exact p-Values for Network Interference. Journal of the American Statistical Association 113 230-240.
BAIOCCHI, M., SMALL, D. S., LORCH, S. and ROSENBAUM, P. R. (2010). Building a stronger instrument in an observational study of perinatal care for premature infants. Journal of the American Statistical Association 105 1285-1296.
BANG, H. and ZHAO, H. (2014). Cost-effective analysis: a proposal of new reporting standards in statistical analysis. Journal of Biopharmaceutical Statistics 24 443-460.
BLAIR, M. H. and KUSE, A. R. (2004). Better practices in advertising can change a cost of doing business to wise investments in the business. Journal of Advertising Research 44 71-89.
BLAKE, T., NOSKO, C. and TADELIS, S. (2015). Consumer Heterogeneity and Paid Search Effectiveness: A Large-Scale Field Experiment. Econometrica 83 155-174.
BLONIARZ, A., LIU, H., ZHANG, C.-H., SEKHON, J. S. and YU, B. (2016). Lasso adjustments of treatment effect estimates in randomized experiments. Proceedings of the National Academy of Sciences 113 7383-7390.
BRODERSEN, K. H., GALLUSSER, F., KOEHLER, J., REMY, N. and SCOTT, S. L. (2015). Inferring causal impact using Bayesian structural time-series models. Annals of Applied Statistics 9 247-274.
INTERACTIVE ADVERTISING BUREAU (2018). IAB Internet Advertising Report: 2017 Full Year Results.
CHAUDHARY, M. A. and STEARNS, S. C. (1996). Estimating confidence intervals for cost-effectiveness ratios: an example from a randomized trial. Statistics in Medicine 15 1447-1458.
CHEN, A., LONGFILS, M. and BEST, C. (2020). The Python library for Trimmed Match and Trimmed Match Design. https://github.com/google/trimmed_match. [Online; accessed 21-April-2021].
CHEN, A., CHAN, D., PERRY, M., JIN, Y., SUN, Y., WANG, Y. and KOEHLER, J. (2018). Bias correction for paid search in media mix modeling. Technical Report, Google Inc. https://ai.google/research/pubs/pub46861.
DHAR, S. S. and CHAUDHURI, P. (2012). On the derivatives of the trimmed mean. Statistica Sinica 22 655-679.
DING, P., FELLER, A. and MIRATRIX, L. (2016). Randomization inference for treatment effect variation. Journal of the Royal Statistical Society: Series B 78 655-671.
GOLDFARB, A. and TUCKER, C. (2011). Online Advertising. In Advances in Computers (M. V. Zelkowitz, ed.) 81 289-315. Elsevier.
GORDON, B., ZETTELMEYER, F., BHARGAVA, N. and CHAPSKY, D. (2019). A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook. Marketing Science 38 193-225.
HALL, P. (1981). Large sample properties of Jaeckel's adaptive trimmed mean. Annals of the Institute of Statistical Mathematics 33 449-462.
HUBER, P. J. and RONCHETTI, E. (2009). Robust Statistics. John Wiley & Sons, Inc.
HUDGENS, M. G. and HALLORAN, M. E. (2008). Toward causal inference with interference. Journal of the American Statistical Association 103 832-842.
IMBENS, G. W. and RUBIN, D. B. (2015). Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. Cambridge University Press, New York, NY, USA.
JAECKEL, L. A. (1971). Some flexible estimates of location. The Annals of Mathematical Statistics 42 1540-1552.
JOHNSON, G. A., LEWIS, R. A. and NUBBEMEYER, E. I. (2017). Ghost ads: Improving the economics of measuring online ad effectiveness. Journal of Marketing Research 54 867-884.
Cross channel effects of search engine advertising on brick & mortar retail sales: Meta analysis of large scale field experiments on Google. K Kalyanam, J Mcateer, J Marek, J Hodges, L Lin, Quantitative Marketing and Economics. 16KALYANAM, K., MCATEER, J., MAREK, J., HODGES, J. and LIN, L. (2018). Cross channel effects of search engine advertising on brick & mortar retail sales: Meta analysis of large scale field experiments on Google.com. Quantitative Marketing and Economics 16 1-42. Estimating Ad Effectiveness using Geo Experiments in a Time-Based Regression Framework Technical Report. J Kerman, P Wang, J Vaver, Google, IncKERMAN, J., WANG, P. and VAVER, J. (2017). Estimating Ad Effectiveness using Geo Experiments in a Time- Based Regression Framework Technical Report, Google, Inc. https://ai.google/research/pubs/ pub45950. Consistent estimation of the influence function of locally asymptotically linear estimators. The Annals of Statistics. C A Klaassen, KLAASSEN, C. A. (1987). Consistent estimation of the influence function of locally asymptotically linear esti- mators. The Annals of Statistics 1548-1562. Metalearners for estimating heterogeneous treatment effects using machine learning. S R Kunzel, J S Sekhon, P J Bickel, Y U , B , Proceedings of the National Academy of Sciences. the National Academy of Sciences116KUNZEL, S. R., SEKHON, J. S., BICKEL, P. J. and YU, B. (2019). Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the National Academy of Sciences 116 4156-4165. Nonparametrics: statistical methods based on ranks. E L Lehmann, SpringerLEHMANN, E. L. (2006). Nonparametrics: statistical methods based on ranks. Springer. The Unfavorable Economics of Measuring the Returns to Advertising. R A Lewis, J M Rao, The Quarterly Journal of Economics. 130LEWIS, R. A. and RAO, J. M. (2015). The Unfavorable Economics of Measuring the Returns to Advertising. The Quarterly Journal of Economics 130 1941-1973. 
Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's critique. W Lin, The Annals of Applied Statistics. 7LIN, W. (2013). Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's cri- tique. The Annals of Applied Statistics 7 295-318. Inference With Interference Between Units in an fMRI Experiment of Motor Inhibition. X Luo, D S Small, C.-S R Li, P R Rosenbaum, Journal of the American Statistical Association. 107LUO, X., SMALL, D. S., LI, C.-S. R. and ROSENBAUM, P. R. (2012). Inference With Interference Between Units in an fMRI Experiment of Motor Inhibition. Journal of the American Statistical Association 107 530- 541. Quasi-oracle estimation of heterogeneous treatment effects. X Nie, S Wager, arXiv:1712.04912arXiv preprintNIE, X. and WAGER, S. (2017). Quasi-oracle estimation of heterogeneous treatment effects. arXiv preprint arXiv:1712.04912. Randomized Experimental Design via Geographic Clustering. D Rolnick, K Aydin, J Pouget-Abadie, S Kamali, V Mirrokni, A Najmi, Proceedings of the 25th ACM International Conference on Knowledge Discovery & Data Mining. the 25th ACM International Conference on Knowledge Discovery & Data MiningROLNICK, D., AYDIN, K., POUGET-ABADIE, J., KAMALI, S., MIRROKNI, V. and NAJMI, A. (2019). Random- ized Experimental Design via Geographic Clustering. In Proceedings of the 25th ACM International Confer- ence on Knowledge Discovery & Data Mining 2745-2753. Identification of causal effects using instrumental variables: Comment. P R Rosenbaum, Journal of the American Statistical Association. 91ROSENBAUM, P. R. (1996). Identification of causal effects using instrumental variables: Comment. Journal of the American Statistical Association 91 444-444. Covariance adjustment in randomized experiments and observational studies. P R Rosenbaum, Statistical Science. 17ROSENBAUM, P. R. (2002). Covariance adjustment in randomized experiments and observational studies. 
Statis- tical Science 17 286-327. Interference Between Units in Randomized Experiments. P R Rosenbaum, Journal of the American Statistical Association. 102ROSENBAUM, P. R. (2007). Interference Between Units in Randomized Experiments. Journal of the American Statistical Association 102 191-200. Design of Observational Studies. P R Rosenbaum, 2nd Edition. Springer Series in StatisticsROSENBAUM, P. R. (2020). Design of Observational Studies, 2nd Edition. Springer Series in Statistics. Discussion of "Randomization Analysis of Experimental Data in the Fisher Randomization Test" by Basu. D B Rubin, Journal of the American Statistical Association. 75RUBIN, D. B. (1980). Discussion of "Randomization Analysis of Experimental Data in the Fisher Randomization Test" by Basu. Journal of the American Statistical Association 75 591-593. Near impressions for observational causal ad impact Technical Report. S Sapp, J Vaver, J Schuringa, S Dropsho, Google IncSAPP, S., VAVER, J., SCHURINGA, J. and DROPSHO, S. (2017). Near impressions for observational causal ad impact Technical Report, Google Inc. https://ai.google/research/pubs/pub46418. War and Wages: The Strength of Instrumental Variables and Their Sensitivity to Unobserved Biases. D Small, P R Rosenbaum, Journal of the American Statistical Association. 103SMALL, D. and ROSENBAUM, P. R. (2008). War and Wages: The Strength of Instrumental Variables and Their Sensitivity to Unobserved Biases. Journal of the American Statistical Association 103 924-933. Less vulnerable confidence and significance procedures for location based on a single sample: Trimming/Winsorization 1. J W Tukey, D H Mclaughlin, Sankhyā: The Indian Journal of Statistics, Series A. 25TUKEY, J. W. and MCLAUGHLIN, D. H. (1963). Less vulnerable confidence and significance procedures for location based on a single sample: Trimming/Winsorization 1. Sankhyā: The Indian Journal of Statistics, Series A 25 331-352. Online Ad Auctions. H R Varian, American Economic Review. 
99VARIAN, H. R. (2009). Online Ad Auctions. American Economic Review 99 430-34. Causal inference in economics and marketing. H R Varian, Proceedings of the National Academy of Sciences. 113VARIAN, H. R. (2016). Causal inference in economics and marketing. Proceedings of the National Academy of Sciences 113 7310-7315. Measuring Ad Effectiveness Using Geo Experiments Technical Report. J Vaver, J Koehler, Google IncVAVER, J. and KOEHLER, J. (2011). Measuring Ad Effectiveness Using Geo Experiments Technical Report, Google Inc. https://ai.google/research/pubs/pub38355. Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. S Wager, S Athey, Journal of the American Statistical Association. 113WAGER, S. and ATHEY, S. (2018). Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Journal of the American Statistical Association 113 1228-1242. ggplot2: Elegant Graphics for Data Analysis. H Wickham, Springer-VerlagNew YorkWICKHAM, H. (2016). ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. The Seasonality Of Paid Search Effectiveness From A Long Running Field Test. Q Ye, S Malik, J Chen, H Zhu, Proceedings of the 2016 ACM Conference on Economics and Computation. EC '16. the 2016 ACM Conference on Economics and Computation. EC '16New York, NY, USAACMYE, Q., MALIK, S., CHEN, J. and ZHU, H. (2016). The Seasonality Of Paid Search Effectiveness From A Long Running Field Test. In Proceedings of the 2016 ACM Conference on Economics and Computation. EC '16 515-530. ACM, New York, NY, USA.
[ "https://github.com/google/trimmed_match." ]
[ "Rapid production of large 7 Li Bose-Einstein condensates using D 1 gray molasses", "Rapid production of large 7 Li Bose-Einstein condensates using D 1 gray molasses" ]
[ "Kyungtae Kim \nDepartment of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea\n", "Seungjung Huh \nDepartment of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea\n", "Kiryang Kwon \nDepartment of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea\n", "Jae-Yoon Choi \nDepartment of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea\n" ]
[ "Department of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea", "Department of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea", "Department of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea", "Department of Physics\nKorea Advanced Institute of Science and Technology\n34141DaejeonKorea" ]
[]
We demonstrate the production of large 7 Li Bose-Einstein condensates in an optical dipole trap using D1 gray molasses. The sub-Doppler cooling technique reduces the temperature of 4×10 9 atoms to 25 µK in 3 ms. After microwave evaporation cooling in a magnetic quadrupole trap, we transfer the atoms to a crossed optical dipole trap, where we employ a magnetic Feshbach resonance on the |F = 1, mF = 1 state. Fast evaporation cooling is achieved by tilting the optical potential using a magnetic field gradient on the top of the Feshbach field. Our setup produces pure condensates with 2.7×10 6 atoms in the optical potential for every 11 s. The trap tilt evaporation allows rapid thermal quench, and spontaneous vortices are observed in the condensates as a result of the Kibble-Zurek mechanism.
10.1103/physreva.99.053604
[ "https://export.arxiv.org/pdf/1905.03555v1.pdf" ]
148,571,813
1905.03555
8fa72c0aa8c85656c6b670d769a76b0e97dac8b9
Rapid production of large 7Li Bose-Einstein condensates using D1 gray molasses

Kyungtae Kim, Seungjung Huh, Kiryang Kwon, and Jae-Yoon Choi
Department of Physics, Korea Advanced Institute of Science and Technology, 34141 Daejeon, Korea
(Dated: February 5, 2022)
arXiv:1905.03555v1 [cond-mat.quant-gas] 9 May 2019

We demonstrate the production of large 7Li Bose-Einstein condensates in an optical dipole trap using D1 gray molasses. The sub-Doppler cooling technique reduces the temperature of 4×10^9 atoms to 25 µK in 3 ms. After microwave evaporation cooling in a magnetic quadrupole trap, we transfer the atoms to a crossed optical dipole trap, where we employ a magnetic Feshbach resonance on the |F = 1, mF = 1⟩ state. Fast evaporation cooling is achieved by tilting the optical potential using a magnetic field gradient on top of the Feshbach field. Our setup produces pure condensates with 2.7×10^6 atoms in the optical potential every 11 s. The trap-tilt evaporation allows a rapid thermal quench, and spontaneous vortices are observed in the condensates as a result of the Kibble-Zurek mechanism.

I. INTRODUCTION

Ultracold atoms have emerged as analog quantum simulators which can provide ideal platforms for studying quantum many-body problems [1,2]. Bose-Einstein condensation (BEC) of the 7Li atom is of particular interest because it is the lightest bosonic atom with a broad magnetic Feshbach resonance [3]. Using these atoms, one can therefore study correlated phases in the strongly interacting regime [4] and develop a new form of quantum sensor composed of bright solitons that lack wave-packet dispersion [5].
Moreover, the experimental compatibility with its fermionic isotope (6Li) offers new opportunities to study the Bose-Fermi superfluid mixture [6], and exotic ground states can be investigated in optical lattices [7-9]. However, producing 7Li condensates is more difficult than for other alkali atoms because of two major limitations. First, the hyperfine structure of the D2 excited state is not resolved, so a standard sub-Doppler cooling technique does not work efficiently. Second, the atom has poor scattering properties, and evaporation cooling works only in a limited window of parameter space. For example, the upper hyperfine spin state |F = 2⟩ has a negative s-wave scattering length [10], and the collisional cross section shows a minimum at an energy of a few mK [11]. The lower hyperfine spin state |F = 1⟩ has a very small scattering length under a residual magnetic field, so evaporation cooling of laser-cooled atoms hardly works for either spin state in a conventional magnetic trap. As a result, 7Li Bose-Einstein condensates have been produced in an optical potential by sympathetic cooling with the fermionic isotope [12,13], or by using the Feshbach resonance [11], but with very small atom numbers.

These difficulties can be overcome by gray molasses on the D1 transition line, which drops the temperature of the atomic gases to a few recoil temperatures (∼10 µK). The cooling technique has been demonstrated in various atomic species [14-22] and, more recently, the condensation of 7Li atoms has been successfully observed by implementing the gray molasses [23,24]. In those experiments, the gray molasses provides an excellent starting condition for evaporation cooling in a quadrupole magnetic trap, and BECs with atom number N = 1-4 × 10^5 have been generated in an optical potential after further evaporation cooling near a Feshbach resonance.

* Electronic address: [email protected]

Here, we elaborate on the previous works and report the production of large 7Li condensates containing N = 2.7 × 10^6 atoms with an 11 s duty cycle. The key to making large condensates is efficient evaporation cooling in an optical trap using a trap-tilt evaporation scheme [25]. This scheme reduces the potential depth by tilting the optical potential without losing the trap confinement, in contrast to conventional evaporation cooling by an intensity ramp. The trap-tilt cooling technique allows a rapid thermal quench, so that spontaneous vortices from the Kibble-Zurek mechanism [26,27] appear in the condensates. In addition, we observe that the gray molasses cooling can be further improved, reducing the temperature of the atoms captured in a magneto-optical trap (MOT) to 25 µK, which corresponds to 3.5 times the recoil temperature. We also present the evaporation path for each cooling stage, where nonadiabatic spin-flip atom losses at the magnetic quadrupole trap center are suppressed by a repulsive optical barrier [28].

II. LASER COOLING

A. Magneto-optical trap

Our experiment starts by collecting 7Li atoms in a magneto-optical trap from a Zeeman-slowed atomic flux. Three pairs of mutually orthogonal MOT beams are constructed using two pairs of retroreflected light in the horizontal (x-y) plane and one pair of counterpropagating beams along the vertical z direction. Each of the MOT beams contains both cooling and repumping light, whose detunings are ∆c,2 = −7Γ and ∆r,2 = −4.7Γ, respectively [Fig. 1(a)], where Γ = 2π × 5.87 MHz is the natural linewidth of the excited state. The peak intensities of the laser beams are Ic,2 = 3.3 Is and Ir,2 = 1.9 Is (Is = 2.54 mW/cm² is the saturation intensity of the D2 transition). An anti-Helmholtz pair of 42-turn water-cooled coils generates the magnetic quadrupole field, and we apply a field gradient of 20 G/cm along the axial z direction in the MOT stage.
After 5 s of loading time, we capture 6.4 × 10^9 7Li atoms in the MOT at a temperature of 1.6 mK. Then the atoms are compressed by increasing the field gradient to 46 G/cm over 25 ms. In the last 2 ms of the compression, the frequency of the cooling (repumping) light is changed to −1.5Γ (−15Γ), and at the same time the beam intensity is reduced to 5% of its initial value. After the ramp, most of the atoms are cooled down to 900 µK.

B. Gray molasses

The D1 gray molasses consists of polarization-gradient cooling and velocity-selective coherent population trapping [29] in a Lambda-type three-level system and has been applied to lithium atoms [20,21]. In that work, 6Li gases were cooled down to 40 µK, serving as an essential step in the all-optical production of large degenerate Fermi gases [21]. Here, we employ the gray molasses to obtain a high collision rate in a magnetic trap, and thus generate large BECs after rapid evaporation cooling.

The molasses beam is obtained from a high-power diode laser system using tapered amplifiers. The beam passes through a resonant electro-optical modulator (EOM) working at 803.5 MHz (the hyperfine splitting frequency of the 7Li ground state), generating 2% of the sideband power for the repump light. For the molasses cooling, we set the laser detuning ∆r,1 = 3.2Γ and the two-photon detuning δ = 0 [Fig. 1(a)]. Then we superimpose the molasses light onto the MOT beam path, generating three orthogonal pairs of σ+ − σ− counterpropagating beams. The 1/e² beam waist is 5 mm at the trap center, and each beam has a peak intensity of 22 Is. A 2 ms pulse of gray molasses brings 5.4×10^9 of the atoms in the compressed MOT to a temperature of 60 µK, similar to previous experiments using lithium atoms [20,21]. As in gray molasses experiments with other alkali atoms [18,19,22], we are able to further cool the 7Li gases by dynamically tuning the molasses beam parameters.
After 1 ms of initial cooling at the maximal molasses lattice depth, the laser intensity and frequency are gradually changed to 12 Is and ∆r,1 = 6.6Γ with δ = 0, respectively. As shown in Fig. 1(b), we observe that the temperature drops, and 4 × 10^9 atoms reach 25 µK under the optimal conditions. The lowest temperature in the experiment corresponds to 3.5 Trec, where Trec = ℏ²k_L²/(m kB) is the recoil temperature (ℏ is the Planck constant h divided by 2π, k_L is the cooling laser wave number, m is the atomic mass, and kB is the Boltzmann constant). We also observe that a stray magnetic field Bext reduces the coherence of the dark state [19]. The final temperature increases quadratically as a function of the external magnetic field, ∆T = Bext² × 84(8) µK/G², so that compensating the residual magnetic field to less than 100 mG is necessary to reach a few recoil temperatures.

[Fig. 1(b), caption continued: The atoms in the compressed MOT are rapidly cooled down to T ∼ 70 µK in 1 ms and slowly settle to ∼60 µK after 2 ms (black circles). Changing the laser intensity and frequency reduces the temperature to 25 µK (red squares). We also obtain a similar temperature by decreasing the molasses intensity to Ic,1 = 2.2 Is at constant frequency, but only 10% of the atoms remain. The temperature is measured from time-of-flight images at various expansion times after pumping the atoms into the |F = 2⟩ state using the MOT-repump beam. Each data point is averaged over 3-5 separate experimental runs, and the error bars are shorter than the marker size.]

III. EVAPORATION COOLING

To generate condensates with large atom number, we perform two-step evaporation cooling after the gray molasses: evaporation cooling first takes place in a magnetic trap, and the atoms are then transferred to an optical dipole trap for further evaporation and Bose-Einstein condensation [23,24,30].
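As a quick sanity check of the temperature scale, the recoil temperature defined above can be evaluated for 7Li. This is a minimal sketch: the ≈671 nm lithium D1 wavelength and the CODATA constant values are filled in here as assumptions, not quoted from the text.

```python
import math

# CODATA constants (SI)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
kB = 1.380649e-23       # Boltzmann constant, J/K
amu = 1.66053907e-27    # atomic mass unit, kg

m = 7.016 * amu         # 7Li mass (assumed isotopic mass)
lam = 670.98e-9         # D1 cooling wavelength (assumed)
kL = 2 * math.pi / lam  # cooling laser wave number

# k_B * T_rec = hbar^2 * k_L^2 / m
T_rec = hbar**2 * kL**2 / (m * kB)
print(f"T_rec = {T_rec * 1e6:.1f} uK")
```

The result is a few microkelvin, confirming that the 25 µK molasses temperature is indeed only a few recoil temperatures above the single-photon recoil limit.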
This two-step approach reflects the scattering properties of 7Li, which has a negative scattering length a0 = −27.6 aB (aB is the Bohr radius) in the upper hyperfine |F = 2⟩ state, so that the condensate atom number is limited to a few thousand [31]. The lower hyperfine |F = 1⟩ state, on the other hand, has too small a scattering length (a0 = 6.8 aB) for efficient evaporation [13], calling for a strong magnetic field (∼700 G) to tune the scattering length [3]. Therefore, large 7Li condensates can be produced in the |F = 1⟩ state after evaporation cooling in an optical potential near the Feshbach resonance. However, an optical dipole trap has limited trap volume and potential depth compared to a magnetic trap, so we first cool the atoms in the |F = 2, mF = 2⟩ state in a magnetic potential and then transfer them to a crossed optical dipole trap. After 5 s of full evaporation cooling in the magnetic and optical potentials, we obtain a pure 7Li condensate with 2.7 × 10^6 atoms in the |F = 1, mF = 1⟩ state.

A. Magnetic quadrupole trap

After the gray molasses, evaporation cooling takes place in a magnetic quadrupole trap. The quadrupole trap is helpful for efficient evaporation because of its tight confinement, and it offers sufficiently large optical access to the cold atoms thanks to its simple coil geometry. In the experiment, we generate the quadrupole field using the same coil pair as in the MOT, and focus a blue-detuned 532 nm laser beam at the trap center to suppress the Majorana atom loss [28]. The laser beam propagates along the x direction, and the effective potential for the |F = 2, mF = 2⟩ spin-stretched state becomes

U(r) = µB B′q sqrt[(x² + y²)/4 + z²] + Up exp[−2(y² + z²)/w²] − mgz,   (1)

where µB is the Bohr magneton, B′q is the field gradient along the z axis, and g is the acceleration of gravity. The plug beam waist is w = 25 µm, and 10 W of laser power generates a repulsive potential barrier of height Up = kB × 716 µK at the zero-field center.
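To illustrate Eq. (1), the sketch below evaluates the plugged-trap potential along the z axis (x = y = 0) and locates the displaced potential minimum that the plug beam creates on that cut. It uses the quoted B′q = 109 G/cm, Up = kB × 716 µK, and w = 25 µm; the resulting position and barrier height are rough illustrative estimates, not values reported in the paper.

```python
import math

kB = 1.380649e-23       # J/K
muB = 9.2740100783e-24  # Bohr magneton, J/T
amu = 1.66053907e-27
m = 7.016 * amu         # 7Li mass
g = 9.81                # m/s^2

Bq = 1.09               # 109 G/cm expressed in T/m
Up = kB * 716e-6        # plug barrier height, J
w = 25e-6               # plug beam waist, m

def U(z):
    """Eq. (1) evaluated on the z axis (x = y = 0), in joules."""
    return muB * Bq * abs(z) + Up * math.exp(-2 * z**2 / w**2) - m * g * z

# scan z > 0 to find the minimum displaced from the field zero
zs = [i * 0.1e-6 for i in range(1, 2000)]       # 0.1 ... 199.9 um
z_min = min(zs, key=U)
depth_barrier = (U(0) - U(z_min)) / kB * 1e6    # plug barrier above the minimum, uK

print(f"minimum on the z axis at z = {z_min*1e6:.1f} um")
print(f"plug barrier above that minimum = {depth_barrier:.0f} uK")
```

The minimum sits a few tens of micrometers from the field zero, with a barrier of several hundred µK keeping atoms away from the spin-flip region, which is the mechanism behind the suppressed Majorana loss.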
Before turning on the magnetic trap, we optically pump the atoms to the stretched state, since most of the atoms after the gray molasses are in the |F = 1⟩ state. The atoms are pumped via the |F = 2⟩ → |F′ = 2⟩ D1 transition using laser light that contains two frequencies, ∆c,1 = 2.5Γ and ∆r,1 = 5.8Γ. The pump beam travels in the horizontal plane in a retroreflected configuration with circular polarization, so that the |F = 2, mF = 2⟩ state becomes a dark state for the pumping light. We shine the laser light for 150 µs under a 3 G bias field, pumping almost all of the atoms into the stretched state.

In order to load the atoms at high density, the field gradient is switched on to 58 G/cm in 100 µs by discharging a capacitor, and is gradually increased to 109 G/cm in 0.2 s. Here, the frequency ramp of the gray molasses is not used for additional cooling, in order to keep a higher initial density in the magnetic trap. Without the frequency ramp, the optical pumping increases the temperature of the atoms in the gray molasses to 70 µK, and after the magnetic trap transfer we have N ≃ 5.1 × 10^9 and T = 270 µK. When using the frequency ramp, on the other hand, the atoms are heated to 40 µK by the dark-state pumping, and the temperature then rises to 240 µK in the quadrupole trap with 3.6 × 10^9 atoms. Still, the gray molasses provides a very favorable condition for evaporation cooling: without it, the initial temperature in the quadrupole trap is ∼2 mK, and runaway evaporation cannot be achieved because the collisional cross section drops at energies of a few mK [11].

Forced evaporation cooling is performed by applying a microwave frequency on the |2, 2⟩ → |1, 1⟩ hyperfine spin transition. We linearly sweep the microwave frequency from 840 to 820 MHz in 2 s and to 811 MHz in another 1.4 s.
To prevent strong atom losses due to dipolar relaxation and three-body molecular recombination [32], the field gradient is reduced to 76 G/cm in the last 1.4 s of evaporation [Fig. 2(a)]. The time evolution of the atom number N and temperature T during the evaporation is displayed in Fig. 2(b), from which we estimate the peak density n = N/[32π(kBT/µB B′q)³] and the elastic collision rate γ = nσv. Here, σ = 8πa0² is the elastic scattering cross section and v = (16kBT/πm)^(1/2) is the mean relative velocity. The collision time, τ = 1/γ, decreases during the frequency sweep, demonstrating runaway evaporation cooling in the quadrupole trap [Fig. 2(c)]. We observe that the collision rate is 10³ times higher than the loss rate of the trapped atoms, ensuring thermal equilibrium during the evaporation process.

B. Effects of the optical plug beam

The thermodynamics of atoms trapped in a quadrupole trap subject to Majorana loss is well characterized by rate equations for the atom number N and temperature T [33-35],

Ṫ/T = −(εm/ε − 1) Γm,   (2)
Ṅ/N = −Γm − Γb.   (3)

Here, εm is the mean energy carried away per atom lost by the nonadiabatic spin flip, ε = 4.5 kBT is the average energy of the atoms in the linear trap, and Γb is the background loss rate. Γm is the Majorana loss rate [33],

Γm = χ (ℏ/m) [µB B′q/(kBT)]²,   (4)

where χ is a dimensionless geometrical factor, measured to be about 0.16 [34,35]. In Ref. [34], the optical plug beam enhances the lifetime of the trapped atoms by reducing the density at the trap center. The average energy per lost atom, εm, however, is not affected by the optical plug beam, implying that the atom loss still occurs mostly near the trap center. In this section, we investigate the mean loss energy εm in the quadrupole trap and observe a clear effect of the optical plug beam. The background loss rate in the quadrupole trap is measured to be Γb = 0.0093(8) s⁻¹ and can be neglected in the rate equations.
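The peak-density and collision-rate estimates quoted above can be sketched numerically from the initial trap conditions given in the text (N ≃ 5.1 × 10^9, T = 270 µK, B′q = 109 G/cm, |a0| = 27.6 aB). The specific numerical outputs are illustrative estimates, not figures from the paper.

```python
import math

kB = 1.380649e-23       # J/K
muB = 9.2740100783e-24  # J/T
amu = 1.66053907e-27
aB = 5.29177e-11        # Bohr radius, m

m = 7.016 * amu         # 7Li mass
N = 5.1e9               # atoms loaded into the quadrupole trap
T = 270e-6              # initial temperature, K
Bq = 1.09               # 109 G/cm in T/m
a = 27.6 * aB           # |a0| for the stretched state

# peak density in the linear trap: n = N / [32*pi*(kB*T/(muB*B'q))^3]
ell = kB * T / (muB * Bq)                      # thermal length scale of the linear trap
n = N / (32 * math.pi * ell**3)

sigma = 8 * math.pi * a**2                     # elastic cross section for identical bosons
vbar = math.sqrt(16 * kB * T / (math.pi * m))  # mean relative velocity
gamma = n * sigma * vbar                       # elastic collision rate

print(f"n     = {n:.2e} m^-3")
print(f"gamma = {gamma:.0f} s^-1")
```

This gives a peak density of order 10^12 cm⁻³ and tens of elastic collisions per second at the start of the sweep, consistent with the runaway regime described above.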
Then the dynamical evolution of the temperature can be expressed as a function of the atom number,

T(t)/T(0) = [N(t)/N(0)]^(εm/ε − 1),   (5)

and we extract εm from the power-law exponent in the hold-time dynamics of N and T. Figure 3(a) shows the temperature dynamics at various initial conditions with and without the plug beam. The initial temperature T(0) and atom number N(0) are set by the microwave frequency, which is turned off during the hold time to exclude evaporation effects. The atom loss leads to heating of the system, which becomes more evident at low temperature and without the optical plug beam. These observations are reflected in the temperature dependence of the average loss energy, shown in Fig. 3(b). Without the plug potential, εm decreases as the temperature is reduced and saturates at 2.8(1) kBT. This can be explained by the Majorana heating process, which becomes the dominant heating source for a cold atomic sample (∝ T⁻²) and is expected to give a mean loss energy of 2.5 kBT [35]. The increase of εm at higher temperature can be attributed to residual heating mechanisms, such as current noise in the magnetic trap and inelastic collisional events, rather than to the Majorana loss. We speculate that polarization imperfections in the pumping beam induce inelastic collisional loss at the beginning of the microwave evaporation. At sufficiently low temperature (kBT ≪ Up), the spin-flip atom loss in the optically plugged quadrupole trap can cool the gases, since the atoms have to climb the plug hill (εm > ε). However, we observe that εm stays around 3.9(1) kBT without noticeable temperature dependence [Fig. 3(b)].

C. Crossed optical dipole trap

As a final step to produce BECs, we load the cold atoms into a crossed optical dipole trap. The optical trap consists of two laser beams with 1064 and 1070 nm wavelengths, propagating in the horizontal plane at a folding angle θ ≃ 90°.
The laser beams cross 300 µm away from the quadrupole trap center, so that the optical trap has a negligible influence on the optical suppression of the Majorana atom loss. At the crossing point, the beam radii are 156 µm for the 1064 nm light and 205 µm for the 1070 nm laser. After the microwave evaporation in the magnetic trap, the dipole potential is gradually turned on, producing U0 = 42 µK of potential depth in 300 ms, and the field gradient is ramped down to zero in 600 ms. To maximize the loading efficiency, we evaporate the atoms by applying a linear microwave frequency sweep from 811 to 804 MHz in 600 ms. About 5 × 10^7 atoms at 9 µK are transferred into the crossed trap. The atoms are then prepared in the |1, 1⟩ state by a Landau-Zener sweep.

By turning on a uniform bias field of 700 G along the z direction, the scattering length is set to about 100 aB. After the Feshbach field ramp, the magnetic field gradient B′q is turned on to 12 G/cm. This produces a linear potential along the z axis that lowers the potential depth and cools the atoms without losing the trap confinement [25]. By linearly increasing the field gradient to 30 G/cm in 300 ms, we achieve rapid evaporation in the optical potential [Fig. 4(a)]. The trap beam intensity is simultaneously lowered to maintain the peak density, n ≃ 3 × 10^13 cm⁻³. The truncation parameter η = U0/kBT is increased from 5.5 to 7.5. After 180 ms of evaporation, Bose-Einstein condensation is observed from the bimodal density distribution in a time-of-flight image [Fig. 5(a) inset]. The BEC transition temperature is Tc = 2.4 µK (calculated Tc = 2.9 µK) with a critical atom number Nc = 10^7. After an additional 120 ms of evaporation, a pure condensate of 2.7 × 10^6 atoms is obtained. The trapping frequencies at the end of the evaporation are (ωx, ωy, ωz) = 2π × (165, 280, 324) Hz. Figure 5 shows the evaporation trajectory in both the magnetic and the optical dipole trap.
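The way a magnetic gradient lowers the depth of the optical trap without weakening its curvature can be sketched in one dimension. The sketch below models the trap along the tilt axis as a Gaussian of depth U0 = 42 µK with an assumed 1/e² waist of 180 µm (between the two quoted beam radii) and assumes a tilt force of one Bohr magneton per unit field gradient for the |1, 1⟩ state near 700 G; gravity is omitted for simplicity. All derived numbers are illustrative only.

```python
import math

kB = 1.380649e-23       # J/K
muB = 9.2740100783e-24  # Bohr magneton, J/T

U0 = kB * 42e-6         # optical trap depth, J
w = 180e-6              # assumed waist along the tilt axis, m

def depth_uK(Bq_G_per_cm):
    """Effective depth (uK) of a 1D Gaussian trap tilted by a field gradient."""
    F = muB * Bq_G_per_cm * 1e-2    # tilt force, J/m (1 G/cm = 1e-2 T/m)
    zs = [i * 1e-6 for i in range(-400, 401)]                 # -400 ... 400 um
    U = [-U0 * math.exp(-2 * z**2 / w**2) - F * z for z in zs]
    # trap minimum: search within one waist of the beam center
    inner = [i for i, z in enumerate(zs) if abs(z) <= w]
    i_min = min(inner, key=lambda i: U[i])
    barrier = max(U[i_min:])        # saddle point on the downhill side
    return (barrier - U[i_min]) / kB * 1e6

for Bq in (0, 12, 30):
    print(f"B'q = {Bq:2d} G/cm -> effective depth ~ {depth_uK(Bq):.1f} uK")
```

Under these assumptions, ramping the gradient from 0 to 30 G/cm cuts the effective depth from ∼42 µK to a few µK while the Gaussian confinement around the minimum is largely preserved, which is the essence of the trap-tilt evaporation scheme.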
The peak phase-space density D = nλ³_dB (where λ_dB = h/√(2πm k_B T) is the thermal de Broglie wavelength) is increased by five orders of magnitude by the evaporative cooling. The evaporation efficiency, Γ = −d(ln D)/d(ln N), in each potential is 2.2 and 1.5, respectively. Since we are able to capture only 10^6 atoms in the dipole trap at 100 µK right after the molasses and the dark-state pumping, the evaporative cooling in the quadrupole trap, which increases the peak phase-space density by a factor of 10^4, is essential for obtaining large condensates. Cooling atoms with a trap tilt can be a new technique for studying non-equilibrium phenomena in the BEC phase transition of atomic gases [36-38]. We search for the possibility of thermal quenching by increasing the field gradient to 60 G/cm in 65 ms. Here, optical decompression is not applied so that we can ignore the sloshing motions generated by a sudden change in the trapping frequency during the rapid intensity ramp of evaporation. With this scheme, we can still make a pure condensate with 10^6 atoms, but with a vortex, as shown in Fig. 4(c). The vortices appear more frequently as we increase the evaporation speed, suggesting that the defects are nucleated by the Kibble-Zurek mechanism [26,27]. A detailed study of the vortex nucleation process and of the vortex number scaling with quench time is worthy of future investigation.

IV. CONCLUSION

We have described an experiment that rapidly produces large 7Li condensates in an optical dipole trap. Our method relies on the combination of gray-molasses cooling on the D_1 transition line and two-stage evaporative cooling in a conservative trapping potential. The sub-Doppler cooling lowers the temperature of the atoms in the compressed MOT to 25 µK, which might allow the all-optical production of 7Li BECs after directly loading the atoms into a deep optical potential [21,39].
Runaway evaporative cooling is achieved in an optically plugged magnetic quadrupole trap, where the Majorana atom loss is highly suppressed by the plug beam. After subsequent evaporation in a crossed optical trap, we obtain a pure condensate of ∼3 × 10^6 atoms. We adopt a trap-tilting scheme for rapid evaporation in the optical trap, which can be useful for studying the Kibble-Zurek mechanism [36-38] and universal dynamics in a far out-of-equilibrium state [40,41] in an optical potential.

FIG. 1. The level structure of 7Li atoms and laser cooling. (a) We use the D2 transition line for the MOT [red (dark gray) arrow] and the D1 transition line for the sub-Doppler cooling [blue (light gray) arrows]. (b) Temperature of the atoms during the molasses.

FIG. 2. Evaporative cooling in the magnetic quadrupole trap. (a) Magnetic field gradient, B′_q, during the evaporation process. Time evolution of (b) atom number, temperature, (c) peak density, and collision rate. (b) The dash-dotted line represents the estimated temperature solely from the adiabatic decompression without evaporative cooling. The atom number and temperature are determined from absorption images after t_e ≥ 13 ms of time of flight. (c) We open the trap (gray zone) after 2 s of evaporation to keep the central peak density below 3 × 10^13 cm^−3. The data points consist of eight independent measurements, and the error bars are smaller than the point size.

FIG. 3. Majorana heating and atom loss in the magnetic quadrupole trap. (a) Evolution of temperature by the atom loss with (closed symbols) and without (open symbols) the optical plug potential for various initial temperatures, T(0) = 230 µK (red circles), T(0) = 150 µK (yellow squares), T(0) = 120 µK (green triangles), and T(0) = 84 µK (blue diamonds). The solid (dashed) lines represent the power-law fit with (without) the plug beam. (b) The average energy loss per lost atom vs initial temperature with (closed circles) and without (open circles) the plug beam. We take 3-5 measurements for each data point, and the error bars represent one standard deviation of the mean and the fit uncertainty.

FIG. 4. Evaporative cooling for Bose-Einstein condensation in the crossed dipole trap. (a) Atom number, temperature, (b) peak density, and collision rate as a function of evaporation time. Data points represent mean values of five individual realizations, and the error bars denote the standard deviation of the mean. (c) Absorption images of a BEC with a vortex. The condensates are obtained after a rapid thermal quench (details described in the main text), and the vortex appears as a density-depleted region in the condensates after 18 ms of expansion time.

FIG. 5. Experimental path from stage (i) to (iii) that produces large 7Li BECs. (i) Microwave evaporation in an optically plugged magnetic trap, (ii) transfer into a crossed dipole trap, and (iii) evaporation by trap tilting in the crossed optical trap. (a) Temperature T vs atom number N. Inset: absorption image after 8 ms of expansion, indicating the onset of BEC. (b) Peak phase-space density D as a function of the atom number N.

ACKNOWLEDGMENTS

The authors thank I. Dimitrova, R. Senaratne, and C. Gross for discussions about the experimental setup and H. Jeong and W. Noh for experimental help. This work was supported by National Research Foundation of Korea Grant No. 2017R1E1A1A01074161.

[1] I. Bloch, J. Dalibard, and S. Nascimbène, Nat. Phys. 8, 267 (2012).
[2] C. Gross and I. Bloch, Science 357, 995 (2017).
[3] C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. 82, 1225 (2010).
[4] F. Chevy and C. Salomon, J. Phys. B: At. Mol. Opt. Phys. 49, 192001 (2016).
[5] G. D. McDonald, C. C. N. Kuhn, K. S. Hardman, S. Bennetts, P. J. Everitt, P. A. Altin, J. E. Debs, J. D. Close, and N. P. Robins, Phys. Rev. Lett. 113, 013002 (2014).
[6] I. Ferrier-Barbut, M. Delehaye, S. Laurent, A. T. Grier, M. Pierce, B. S. Rem, F. Chevy, and C. Salomon, Science 345, 1035 (2014).
[7] A. B. Kuklov and B. V. Svistunov, Phys. Rev. Lett. 90, 100401 (2003).
[8] L.-M. Duan, E. Demler, and M. D. Lukin, Phys. Rev. Lett. 91, 090402 (2003).
[9] M. Lewenstein, L. Santos, M. A. Baranov, and H. Fehrmann, Phys. Rev. Lett. 92, 050401 (2004).
[10] C. C. Bradley, C. A. Sackett, and R. G. Hulet, Phys. Rev. Lett. 78, 985 (1997).
[11] N. Gross and L. Khaykovich, Phys. Rev. A 77, 023604 (2008).
[12] F. Schreck, L. Khaykovich, K. L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles, and C. Salomon, Phys. Rev. Lett. 87, 080403 (2001).
[13] T. Ikemachi, A. Ito, Y. Aratake, Y. Chen, M. Koashi, M. Kuwata-Gonokami, and M. Horikoshi, J. Phys. B: At. Mol. Opt. Phys. 50, 01LT01 (2017).
[14] G. Grynberg and J.-Y. Courtois, EPL (Europhys. Lett.) 27, 41 (1994).
[15] D. Boiron, C. Triché, D. R. Meacher, P. Verkerk, and G. Grynberg, Phys. Rev. A 52, R3425 (1995).
[16] T. Esslinger, F. Sander, A. Hemmerich, T. W. Hänsch, H. Ritsch, and M. Weidemüller, Opt. Lett. 21, 991 (1996).
[17] D. Boiron, A. Michaud, P. Lemonde, Y. Castin, C. Salomon, S. Weyers, K. Szymaniec, L. Cognet, and A. Clairon, Phys. Rev. A 53, R3734 (1996).
[18] R. D. Fernandes, F. Sievers, N. Kretzschmar, S. Wu, C. Salomon, and F. Chevy, EPL (Europhys. Lett.) 100, 63001 (2012).
[19] G. Salomon, L. Fouché, P. Wang, A. Aspect, P. Bouyer, and T. Bourdel, EPL (Europhys. Lett.) 104, 63002 (2013).
[20] A. T. Grier, I. Ferrier-Barbut, B. S. Rem, M. Delehaye, L. Khaykovich, F. Chevy, and C. Salomon, Phys. Rev. A 87, 063411 (2013).
[21] A. Burchianti, G. Valtolina, J. Seman, E. Pace, D. M. Pas, M. Inguscio, M. Zaccanti, and G. Roati, Phys. Rev. A 90, 043408 (2014).
[22] G. Colzi, G. Durastante, E. Fava, S. Serafini, G. Lamporesi, and G. Ferrari, Phys. Rev. A 93, 023421 (2016).
[23] I. Dimitrova, W. Lunden, J. Amato-Grill, N. Jepsen, Y. Yu, M. Messer, T. Rigaldo, G. Puentes, D. Weld, and W. Ketterle, Phys. Rev. A 96, 051603 (2017).
[24] Z. A. Geiger, K. M. Fujiwara, K. Singh, R. Senaratne, S. V. Rajagopal, M. Lipatov, T. Shimasaki, R. Driben, V. V. Konotop, T. Meier, and D. M. Weld, Phys. Rev. Lett. 120, 213201 (2018).
[25] C.-L. Hung, X. Zhang, N. Gemelke, and C. Chin, Phys. Rev. A 78, 011604 (2008).
[26] T. W. B. Kibble, J. Phys. A 9, 1387 (1976).
[27] W. Zurek, Nature 317, 505 (1985).
[28] K. B. Davis, M. O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle, Phys. Rev. Lett. 75, 3969 (1995).
[29] A. Aspect, E. Arimondo, R. Kaiser, N. Vansteenkiste, and C. Cohen-Tannoudji, Phys. Rev. Lett. 61, 826 (1988).
[30] A. G. Truscott, K. E. Strecker, W. I. McAlexander, G. B. Partridge, and R. G. Hulet, Science 291, 2570 (2001).
[31] E. R. I. Abraham, W. I. McAlexander, J. M. Gerton, R. G. Hulet, R. Côté, and A. Dalgarno, Phys. Rev. A 55, R3299 (1997).
[32] J. M. Gerton, C. A. Sackett, B. J. Frew, and R. G. Hulet, Phys. Rev. A 59, 1514 (1999).
[33] W. Petrich, M. H. Anderson, J. R. Ensher, and E. A. Cornell, Phys. Rev. Lett. 74, 3352 (1995).
[34] M.-S. Heo, J.-y. Choi, and Y.-i. Shin, Phys. Rev. A 83, 013622 (2011).
[35] R. Dubessy, K. Merloti, L. Longchambon, P.-E. Pottie, T. Liennard, A. Perrin, V. Lorent, and H. Perrin, Phys. Rev. A 85, 013643 (2012).
[36] C. N. Weiler, T. W. Neely, D. R. Scherer, A. S. Bradley, M. J. Davis, and B. P. Anderson, Nature 455, 948 (2008).
[37] G. Lamporesi, S. Donadello, S. Serafini, F. Dalfovo, and G. Ferrari, Nat. Phys. 9, 656 (2013).
[38] N. Navon, A. L. Gaunt, R. P. Smith, and Z. Hadzibabic, Science 347, 167 (2015).
[39] G. Salomon, L. Fouché, S. Lepoutre, A. Aspect, and T. Bourdel, Phys. Rev. A 90, 033405 (2014).
[40] J. Berges, K. Boguslavski, S. Schlichting, and R. Venugopalan, Phys. Rev. Lett. 114, 061601 (2015).
[41] S. Erne, R. Bücker, T. Gasenzer, J. Berges, and J. Schmiedmayer, Nature 563, 225 (2018).
Study of first-order interface localization-delocalization transition in thin Ising films using Wang-Landau sampling

B. J. Schulz, K. Binder, and M. Müller

Institut für Physik, Johannes Gutenberg Universität, Staudinger Weg 7, WA331, D-55099 Mainz, Germany
Department of Physics, University of Wisconsin-Madison, 1150 University Avenue, Madison, WI 53706-1390
Using extensive Monte Carlo simulations, we study the interface localization-delocalization transition of a thin Ising film with antisymmetric competing walls for a set of parameters where the transition is strongly first order. This is achieved by estimating the density of states (DOS) of the model by means of Wang-Landau sampling (WLS) in the space of energy, using both single-spin-flip and N-fold-way updates. From the DOS we calculate canonical averages related to the configurational energy, like the internal energy and the specific heat, as well as the free energy and the entropy. By sampling microcanonical averages during the simulations we also compute thermodynamic quantities related to the magnetization, like the reduced fourth-order cumulant of the order parameter. We estimate the triple temperatures of infinitely large systems for three different film thicknesses via finite-size scaling of the positions of the maxima of the specific heat, the minima of the cumulant, and the equal-weight criterion for the energy probability distribution. The wetting temperature of the semi-infinite system is computed with the help of the Young equation. In the limit of large film thicknesses the triple temperatures are seen to converge towards the wetting temperature of the corresponding semi-infinite Ising model, in accordance with standard capillary-wave theory. We discuss the slowing down of WLS in energy space as observed for the larger film thicknesses and lateral linear dimensions. In case of WLS in the space of total magnetization we find evidence that the slowing down is reduced and can be attributed to persisting free-energy barriers due to shape transitions.
DOI: 10.1103/PhysRevE.71.046705; arXiv: cond-mat/0410260 (https://arxiv.org/pdf/cond-mat/0410260v1.pdf)
11 Oct 2004 (Dated: August 24, 2018)
* Electronic address: [email protected]

I. INTRODUCTION

The restriction of the geometry of a condensed-matter system has a fundamental impact on a phase transition. In a finite system, sharp phase transitions can no longer occur, since the free energy is then an analytic function of its independent variables, and the transition is rounded off and shifted. A particular realization of a confined geometry in d = 3 dimensions, playing a pivotal role due to its fundamental importance in materials science and technology, are thin films: infinitely extended in two directions but of finite thickness D, where the transition is now not only shifted away from its bulk value, corresponding to D → ∞, but also changes its character from three-dimensional to two-dimensional. As an example we may consider here a fluid near gas-liquid coexistence in the bulk or, similarly, an (A,B) binary mixture or alloy near phase coexistence, confined between two parallel walls. Of particular interest is the case where the two walls of the system prefer different phases, i.e., one wall favors the high-density liquid (or A-particles) while the other one prefers the low-density gas (or B-particles), which is commonly termed "competing walls". A generic model for such systems is the nearest-neighbor Ising model in a thin-film geometry, where one now has two surfaces a distance D apart, on which magnetic surface fields H_1 = −H_D of opposite sign but equal magnitude act in order to mimic the competing walls (cf. Fig. 1). In addition, one allows for a different interaction J_s > 0 between nearest neighbors located in the surfaces, while nearest neighbors in the bulk interact with a coupling J > 0.
The meaning of the magnetic surface fields becomes apparent when reinterpreting the Ising Hamiltonian as a lattice gas for a fluid or a mixture, where Ising spins S_i = −1 or S_i = +1 now correspond to lattice site i being empty or occupied, or being taken by an A-particle or a B-particle, respectively. Then, the surface magnetic fields translate into chemical potentials, i.e., binding energies to the walls. Remarkably, the transition that one encounters in the Ising film differs from the transition in the bulk system at T_cb [1,2,3,4,5,6,7,8]: For all finite thicknesses D of the film, the transition at T_cb is completely rounded off and no singular behavior shows up, despite the fact that the system is infinite in the other directions. Instead, one observes a transition at a lower temperature T_0(D) < T_cb, at which the system changes from a state with a delocalized interface running parallel to the walls in the center of the film (T_0(D) < T < T_cb) to a twofold-degenerate state (T < T_0(D)), where the interface is now localized near one of the two walls (cf. Fig. 1). Most interestingly, for D → ∞, the transition temperature T_0(D) of the interface localization-delocalization transition does not converge towards the bulk critical temperature T_cb, but towards the wetting temperature T_w(H_1) at which a macroscopically thick liquid layer (spins pointing upwards) wets the surface in the corresponding semi-infinite system. Thus, the nature of the transition at finite D is seen to depend on the nature of the wetting transition in the underlying semi-infinite system. Upon enhancing the interaction J_s of spins in the surfaces with respect to the bulk interaction J, one can tune the wetting transition, and thus the interface transition for finite film thicknesses D, to be of first order [8]; i.e., T_0(D) ≡ T_tr(D) is now a triple point where the three phases shown in Fig. 1(b)-(c) coexist.
By reducing the film thickness one may then pass through a tricritical point where the order of the transition changes from first to second order [2,8,9].

Our paper is arranged as follows: First, we briefly introduce the thin-film Hamiltonian and give a description of the employed Wang-Landau sampling (WLS), which aims at sampling the density of states (DOS) directly. The slowing down of WLS for our model, encountered especially for large system sizes, is discussed. With regard to these difficulties we then propose to split the DOS into a branch contributing to the ordered phase and one contributing to the disordered phase, which we normalize separately. We then present the thermodynamic quantities calculated from the DOS and compute the infinite-lattice triple temperatures from the various finite-size data. Finally, the wetting temperature of the semi-infinite system is determined via the Young equation, and the convergence of the triple temperatures towards the wetting temperature for increasing film thickness is examined. We close with a brief discussion of our results.

II. MODEL AND SIMULATION METHOD

We consider the Ising Hamiltonian on a cubic lattice in an L × L × D geometry (cf. Fig. 1(a)), where N = L²D is the total number of spins S_i:

H = −J Σ_{⟨i,j⟩_b} S_i S_j − J_s Σ_{⟨i,j⟩_s} S_i S_j − H Σ_i S_i − H_1 Σ_{i∈surface 1} S_i − H_D Σ_{i∈surface D} S_i.   (1)

Here, the sum ⟨i,j⟩_b runs over all pairs of nearest neighbors where at least one spin is not located in one of the surfaces, and the sum ⟨i,j⟩_s runs over all pairs of nearest neighbors with both spins located in one of the two surfaces. In this paper we study three different film thicknesses, D = 6, 8, 12, and linear lateral dimensions ranging from L = 16 to L = 128 (for the two largest choices of D the minimal L is L = 32 and L = 48, respectively). We restrict ourselves here to antisymmetric surface fields, H_1 = −H_D, and vanishing bulk field, H = 0.
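As a concrete reading of Hamiltonian (1), the following minimal Python sketch evaluates the energy of a spin configuration of the film. It assumes periodic boundary conditions in the two lateral directions and open boundaries across the film (the standard choice for this geometry, though the text does not state the lateral boundary conditions explicitly), and it is valid for D > 2, where every vertical bond has at least one non-surface spin and therefore carries the bulk coupling J.

```python
import itertools

def film_energy(S, J=1.0, Js=1.5, H=0.0, H1=0.25):
    """Energy of Hamiltonian (1) for spins S[x][y][z] = +/-1 on an
    L x L x D lattice: periodic in x, y; open in z.  In-plane bonds
    with both spins in a surface (z = 0 or z = D-1) carry Js, all
    other bonds carry J; surface fields are +H1 on z = 0 and -H1 on
    z = D-1 (antisymmetric).  Valid for D > 2."""
    L, D = len(S), len(S[0][0])
    E = 0.0
    for x, y, z in itertools.product(range(L), range(L), range(D)):
        s = S[x][y][z]
        # in-plane bonds, each counted once via the +x and +y neighbors
        coupling = Js if z in (0, D - 1) else J
        E -= coupling * s * (S[(x + 1) % L][y][z] + S[x][(y + 1) % L][z])
        # vertical bond: at least one spin is off-surface for D > 2 -> J
        if z + 1 < D:
            E -= J * s * S[x][y][z + 1]
        # bulk and surface fields
        E -= H * s
        if z == 0:
            E -= H1 * s
        elif z == D - 1:
            E -= -H1 * s
    return E
```

A quick consistency check: for a uniform configuration the antisymmetric surface fields cancel, and the energy per spin reduces to −[(3D − 5)J + 4J_s]/D in units where the lattice has L² sites per layer, matching the ground-state expression quoted with Eq. (9) below.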
By virtue of the symmetry there is an exact degeneracy of the phases where the interface is bound to either of the surfaces, and the triple point and the phase coexistence below T_0(D) occur at H = 0. We do not study prewetting-like phase coexistence for T > T_0(D) and H ≠ 0. Specifically, we choose H_1/J = 0.25 and J_s/J = 1.5. For these parameters the interface localization-delocalization transition is clearly first order for all thicknesses D. Already for a smaller surface-to-bulk coupling ratio, J_s/J = 1.45, the transition turned out to be so strongly first order according to the study of Ferrenberg et al. [8] that lattices with D = 8 and L > 32 could not be equilibrated using a standard canonical heat-bath algorithm. The reason for such difficulties can be seen directly from the canonical probability distribution of the energy, P_{L,D}(E), which develops two pronounced peaks at the transition point, corresponding to the coexisting ordered (−) and disordered (+) phases, separated by a deep minimum P^min_{L,D}(E) corresponding to the mixed-phase configurations (cf. Fig. 2). Here, one has additional interfaces in the system which cost an extra free energy ΔF_{L,D} = γDL, where γ is of the order of the interface tension between the two oppositely oriented domains of spins. This yields P^min_{L,D}(E) ∝ exp(−βΔF_{L,D}), where β = 1/k_B T denotes the inverse temperature. Hence, any simulation technique which aims at sampling a canonical energy probability distribution ∝ g(E) exp(−E/k_B T) directly will become trapped in the phase in which the system was initially prepared and may practically never escape from there, even in the case of relatively small systems. In order to give an example of the strong metastability, Fig. 3 shows hysteresis loops of the internal energy per spin, e ≡ ⟨E⟩/N, which were recorded using a conventional Metropolis Monte Carlo algorithm for a system of size D = 12 and L = 48. The simulations were started in the disordered phase.
In case the cooling is performed too fast (open circles in Fig. 3), one reaches the roughening temperature T_R while still being in the disordered soft-mode phase; i.e., the interface becomes flat in the center of the film and it becomes impossible to reach the ordered phase upon further cooling. Using a much larger simulational effort (∼10^7 MCS) one obtains a closed loop, although the observed hysteresis is still huge, which clearly indicates a phase transition in the range 0.244 < Jβ_tr < 0.341. Locating the exact transition point in this way would, however, require an enormous simulational effort even for the moderate system size at hand. An improvement results from thermodynamic integration of the low- and high-temperature branches of the internal energy, which yields the free energy per site f [11,12]:

βf(β) = β_ref f(β_ref) + ∫_{β_ref}^{β} e(β′) dβ′.   (2)

For the reference values we have regarded the spins as noninteracting at Jβ_ref = 0.00005, i.e., f(β_ref) = −β_ref^{−1} ln 2, while on the low-temperature side the free energy was matched with a series expansion based on the first two excited states at Jβ_ref = 1.10005. The crossing point of both branches of the free energy then yields the transition point, which can be determined with an accuracy of 0.4%. The result that the correct location of the first-order transition is not in the middle of the hysteresis loop but very close to its end on the high-temperature side (dashed curve in Fig. 3) is very surprising at first sight. It should be noted, however, that the hysteresis observed in Monte Carlo simulations has nothing to do with the "Maxwell equal-area rule" of mean-field theories, but is of kinetic origin. The almost free interface in the center of the film relaxes very slowly and feels only a very weak potential from the walls, and thus is much more metastable than the state where the interface is tightly bound to one of the walls.
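The thermodynamic integration of Eq. (2) can be sketched as a cumulative trapezoidal quadrature over a grid of inverse temperatures. The single-free-spin test model below is our own choice, used only because its free energy is known in closed form; it is not part of the paper's analysis.

```python
import math

def beta_f(betas, e_vals, beta_f_ref):
    """Thermodynamic integration, Eq. (2):
    beta f(beta) = beta_ref f(beta_ref) + integral_{beta_ref}^{beta} e(b') db',
    evaluated cumulatively with the trapezoidal rule on a grid of betas."""
    out = [beta_f_ref]
    for i in range(1, len(betas)):
        db = betas[i] - betas[i - 1]
        out.append(out[-1] + 0.5 * (e_vals[i] + e_vals[i - 1]) * db)
    return out

# Toy check: one free spin in a field h, where the result is known exactly,
# f(beta) = -(1/beta) ln[2 cosh(beta h)] and e(beta) = -h tanh(beta h).
h = 1.0
betas = [1e-5 + i * (1.0 - 1e-5) / 2000.0 for i in range(2001)]
e_vals = [-h * math.tanh(b * h) for b in betas]
bf = beta_f(betas, e_vals, -math.log(2.0))  # beta_ref f(beta_ref) = -ln 2
```

Starting from the noninteracting reference βf(β_ref) = −ln 2, as in the text, the integrated curve reproduces the exact −ln[2 cosh(βh)] to high accuracy on this grid.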
A. Wang-Landau Sampling

In order to avoid problems due to metastability and to further increase the accuracy, we have decided to use Wang-Landau sampling (WLS) [13,14,15,16] to compute thermodynamic quantities of the systems via estimating the density of states (DOS) of Hamiltonian (1). In WLS one accepts trial configurations with probability min[1, g(E)/g(E′)], where g(E) is the DOS and E and E′ are the energies of the current and the proposed configuration, respectively. At each spin-flip trial the DOS is modified, g(E) → g(E)·f_i, by means of a modification factor f_i, which is reduced according to f_{i+1} = √f_i in case the recorded energy histogram H(E) is flat within some percentage ε of the average energy histogram, i.e., H(E) ≥ ε⟨H(E′)⟩_{E′} for all E. H(E) is then reset to zero and the procedure is repeated until a flat H(E) is achieved using a final modification factor f_final. In practice one samples the logarithm of the DOS, i.e., log₁₀ g(E), since g(E) may become very large, and modifying the DOS then corresponds to adding a small modification increment Δs_i = log₁₀ f_i. The implementation of the single-spin-flip WLS is straightforward and we refer the reader to Refs. [13,14,15] for details. When considering systems with a large number of distinct energy levels it is useful to partition the entire energy range into adjacent subintervals in order to sample the DOS in a parallel fashion. For energy intervals that contain states with low degeneracy, e.g., the ground state, one can further accelerate WLS by combining it with the rejection-free N-fold way of Bortz et al. [16,17]. Here, the underlying idea is to partition all spins S_ν, ν ∈ {1, ..., N}, into M classes c_ν ∈ {0, ..., M − 1} according to the change in energy ΔE_{c_ν} caused by flipping the spin S_ν at site ν.
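The single-spin-flip WLS loop described above can be sketched compactly. The code below is not the authors' implementation: it runs on a 1D Ising ring (a minimal stand-in for the thin-film model, chosen so that the exact DOS is known for checking), and the batch size, flatness parameter, and final modification increment are illustrative choices.

```python
import math, random

def wang_landau_ring(N=8, flatness=0.8, ln_f_final=1e-4, seed=7):
    """Single-spin-flip Wang-Landau sampling for a 1D Ising ring:
    accept a flip with min[1, g(E)/g(E')], add ln f to ln g(E) after
    every trial, and halve ln f (i.e., f -> sqrt(f)) whenever the
    energy histogram is flat.  Returns {E: ln g(E)} up to an additive
    constant."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(N)]
    E = -sum(s[i] * s[(i + 1) % N] for i in range(N))
    levels = range(-N, N + 1, 4)            # allowed ring energies (N even)
    ln_g = {e: 0.0 for e in levels}
    hist = {e: 0 for e in levels}
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(2000):               # one batch of spin-flip trials
            i = rng.randrange(N)
            dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
            if rng.random() < math.exp(min(0.0, ln_g[E] - ln_g[E + dE])):
                s[i] = -s[i]
                E += dE
            ln_g[E] += ln_f
            hist[E] += 1
        if min(hist.values()) >= flatness * (sum(hist.values()) / len(hist)):
            ln_f *= 0.5                     # f_{i+1} = sqrt(f_i)
            hist = dict.fromkeys(hist, 0)
    return ln_g
```

For N = 8 the exact degeneracies are g(±8) = 2, g(±4) = 56, and g(0) = 140, so the sampled ln g can be checked against known ratios such as g(0)/g(−8) = 70.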
Making explicit use of H_1 = −H_D and H = 0 in the Ising Hamiltonian (1), we can evaluate ΔE_{c_ν} as follows:

ΔE_{c_ν} = 2(Ju_ν + J_s v_ν + H_1)S_ν   if ν ∈ surface 1,
ΔE_{c_ν} = 2(Ju_ν + J_s v_ν − H_1)S_ν   if ν ∈ surface D,
ΔE_{c_ν} = 2Ju_ν S_ν                    otherwise,   (3)

where S_ν is the spin value before it is overturned, and u_ν and v_ν denote sums over the nearest-neighbor sites µ(ν) of site ν:

u_ν = Σ_{µ(ν)} S_{µ(ν)}            if ν ∉ surface,
u_ν = Σ_{µ(ν)∉surface} S_{µ(ν)}    if ν ∈ surface,   (4)

and

v_ν = 0                            if ν ∉ surface,
v_ν = Σ_{µ(ν)∈surface} S_{µ(ν)}    if ν ∈ surface.   (5)

This results in a number of M = 27 different classes. Within the context of N-fold-way WLS, the probability of any spin of a class i being overturned is then given by

P(ΔE_i) = [n(C, ΔE_i)/N] p_{C→C′},   i = 1, ..., M,   (6)

where n(C, ΔE_i) denotes the number of spins of configuration C which belong to class i, and p_{C→C′} is given by

p_{C→C′} = min[1, g(E_C)/g(E_{C′})]   if E_{C′} ∈ I_sub,
p_{C→C′} = 0                          if E_{C′} ∉ I_sub,   (7)

where I_sub denotes the considered energy subinterval over which the DOS is sampled and E_{C′} = E_C + ΔE_i. Classes are now chosen as follows. First, one computes the integrated probabilities for a spin flip within the first m classes:

Q_m = Σ_{i≤m} P(ΔE_i),   m = 1, ..., M,   Q_0 = 0.   (8)

By generating a random number 0 < r < Q_M one then finds the class m from which to flip a spin via the condition Q_{m−1} < r < Q_m. The spin to be overturned is chosen from this class with equal probability, whereby log₁₀ g(E) and the energy histogram are now updated by means of the average lifetime τ = 1/Q_M. A detailed description of the algorithm was given in Ref. [16].
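The class-selection step of Eqs. (6)-(8) can be sketched as follows. This is an illustrative fragment, not the authors' code: the class list, the dictionary-based ln g, and the two-class test data below are ours, and in the actual algorithm the chosen class would then be followed by picking one of its spins at random and updating ln g and H(E) with weight τ.

```python
import bisect, math, random

def choose_class(counts, ln_g, E, dE_of_class, N, I_sub, rng):
    """Class selection of Eqs. (6)-(8): P(dE_i) = n(C, dE_i)/N * p_{C->C'},
    with p_{C->C'} = min[1, g(E_C)/g(E_C')] if E_C' lies in the sampled
    window I_sub, and 0 otherwise.  Returns the chosen class index and
    the average lifetime tau = 1/Q_M used to weight the DOS/histogram
    update.  At least one class must have nonzero flip probability."""
    Q, total = [], 0.0
    for n_i, dE in zip(counts, dE_of_class):
        E_new = E + dE
        p = 0.0
        if I_sub[0] <= E_new <= I_sub[1]:
            p = math.exp(min(0.0, ln_g[E] - ln_g[E_new]))
        total += (n_i / N) * p              # P(dE_i)
        Q.append(total)                     # Q_m = sum_{i<=m} P(dE_i)
    r = rng.random() * Q[-1]                # 0 < r < Q_M
    return bisect.bisect_right(Q, r), 1.0 / Q[-1]
```

For example, with two classes of four spins each (N = 8), a flat ln g gives equal class probabilities and τ = 1, while suppressing the uphill move by a DOS ratio of 4 gives Q_M = 0.625 and τ = 1.6.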
B. Normalization of the DOS

In order to estimate the DOS using WLS, the considered energy range,

E/JN ∈ I = [E_ground/JN, 0.2],   (9)

where E_ground/JN = −[(3D − 5)J + 4J_s]/JD is the twofold-degenerate ground-state energy, was partitioned into several adjacent subintervals, each containing of the order of 10² to 10³ distinct energy levels, which were sampled on a Cray T3E in a parallel fashion, using mostly 64 processors at a time. The DOS obtained from these simulations was then matched at the edges and suitably normalized, which we describe in detail below. For the system thicknesses D = 8 and D = 12, as well as for the largest choices of L in the case of D = 6 (L = 96, 128), only one run was performed over the entire energy range (9), denoted as the basis run, whereas all further runs were restricted to a smaller energy range,

E/JN ∈ I_center = [E_1/JN, E_2/JN],   (10)

covering the mixed-phase region in between the peaks of the doubly peaked energy distribution. As an illustration, I_center is marked in Fig. 8 by small arrows on the energy axis. Thus, the entire energy range (9) is decomposed as I = I_left ∪ I_center ∪ I_right, where I_left = [E_ground/JN, E_1/JN] and I_right = [E_2/JN, 0.2]. Correspondingly, one obtains the density of states g(E) by joining the g(E) estimated for the intervals I_left (taken from the basis run), I_center, and I_right (again taken from the basis run). The single-spin-flip algorithm is more efficient in the regions covered by I_center, which is due to the added expense of the N-fold-way algorithm concerning the bookkeeping of classes. This was affirmed by a rough comparison between both implementations for L = 128 and D = 12. The flatness parameter ε varied between 0.8 and 0.95, and the final modification increment was usually of order Δs_final ∼ 10^−9, which yielded an overall simulational effort of order 10^6 to 10^7 MCS for estimating the DOS over the range (9).
As is clear from the algorithm, WLS only yields a relative density of states, hence available reference values must be employed in order to get the absolute DOS g(E). Normalizing the simulational outcome firstly with respect to the twofold degeneracy of the ground state, i.e., such that the free energy f is exact for β → ∞, it is instructive to examine how this accuracy for low temperatures carries over to infinite temperature (β → 0), where the partition function Z is dominated by the density of states around E = 0, and one has lim_{β→0} βF(β)/N = −(1/N) ln Z(β = 0) = −ln 2. Table I shows the latter quantity for all considered system sizes. As can be seen from the table, there is an increasing deviation from the exact value with increasing width of the film D. While the results for D = 6 and D = 8 (the latter for small sizes L) agree with the expected value, a deviation for the larger system sizes, especially L = 128 and D = 12, becomes apparent. This is related to a slowing down of the equilibration process in the multicanonical (Wang-Landau) ensemble for decreasing modification increment Δs_i, as illustrated in Fig. 4, which shows the visited states (E/JN, M/N) and the energy histogram H(E) recorded during Wang-Landau sampling for different stages i of the simulation, where the modification increment Δs_i is used to modify the density of states. In case one has a small number of tunneling events during a certain simulation stage i, H(E) exhibits a kink at the barrier, since the stage is completed once the flatness criterion is fulfilled. Correspondingly, g(E) will suffer from large errors in the ordered phase in case it is normalized using a reference in the disordered phase, and vice versa, errors will be enhanced in the disordered phase when using a reference in the ordered phase (ground state). For D = 8 (excluding L = 32) and D = 12 we therefore employed the following approach.
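The −ln 2 consistency check can be made concrete on a system whose DOS is known exactly. The following sketch (our own illustration, not from the paper) uses the 2×2 periodic-boundary Ising model, whose exact DOS is g(−8J) = 2, g(0) = 12, g(8J) = 2: pinning a relative log-DOS to the twofold ground-state degeneracy and then evaluating −(1/N) ln Z(β = 0) recovers −ln 2.

```python
import math

# Exact DOS of the 2x2 periodic-boundary Ising model (N = 4 spins):
# two ground states at E = -8J, twelve states at E = 0, two at E = +8J.
# Here we pretend only an arbitrarily shifted (relative) log g is known.
log_g_rel = {-8.0: 3.0, 0.0: 3.0 + math.log(6.0), 8.0: 3.0}

# Normalize with respect to the twofold ground-state degeneracy ...
E_ground = min(log_g_rel)
shift = math.log(2.0) - log_g_rel[E_ground]
log_g = {E: lg + shift for E, lg in log_g_rel.items()}

# ... and check -(1/N) ln Z(beta=0) against the exact -ln 2.
N = 4
lnZ0 = math.log(sum(math.exp(lg) for lg in log_g.values()))
print(-lnZ0 / N, -math.log(2.0))  # the two values agree
```

With an imperfect simulated DOS the same comparison quantifies the accumulated error at β = 0, which is exactly the diagnostic reported in Table I.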
Utilizing the fact that one has at least random walk behavior for small modification factor in each of the phases alone, we normalize the branch of the density of states g−(E) contributing to the low-energy ordered phase and the branch g+(E) contributing to the high-energy disordered phase separately, i.e., one has

g(E) = g−(E) for E ≤ E_cut,  g(E) = g+(E) for E > E_cut,  (11)

where one obtains g−(E) by normalizing the simulational outcome g(E) with respect to the ground state, g(E_ground) = 2, and g+(E) is obtained by normalizing g(E) with respect to the total number of states,

Σ_E g(E) = 2^{L²D}.  (12)

In Eq. (11), E_cut is taken to be the energy for which the energy probability distribution, estimated directly from the simulational outcome g(E), takes its minimum in between the peaks at equal weight. Note, that in the sum Σ_E g(E) = Σ_{E≤E_cut} g−(E) + Σ_{E>E_cut} g+(E) the term Σ_{E≤E_cut} g−(E) is negligible. The additional error which is introduced by this normalization procedure then depends on the contribution of the mixed phase configurations to the energy distribution (and the choice of E_cut). However, since these mixed phase configurations are exponentially suppressed at the transition point, the error is expected to be of the same order, and correspondingly the error due to the choice of E_cut as well. Note, that already for L = 32 and D = 8 the double Gaussian approximation to the energy probability distribution, which neglects any mixed phase contribution, provides a reasonably good approximation to the measured distribution, apart from small deviations in the tails (cf. Fig. 8).

C. Shape transitions

In Ref. [18], Neuhaus and Hager addressed the severe problem of slowing down in simulations of first-order transitions in the multicanonical ensemble [19].
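The two-branch normalization of Eqs. (11)-(12) can be sketched as follows; the function name and dictionary representation are our own, and the log-sum is done in a numerically stable way since g(E) spans hundreds of orders of magnitude in practice.

```python
import math

def normalize_two_branch(log_g, E_cut, N, ground_degeneracy=2):
    """Normalize a relative log-DOS separately in the two phases, Eq. (11).

    Below E_cut the branch g- is pinned to g(E_ground) = ground_degeneracy;
    above E_cut the branch g+ is pinned via sum_E g(E) = 2**N, Eq. (12),
    the contribution of g- to that sum being negligible for large systems.
    """
    E_ground = min(log_g)
    shift_minus = math.log(ground_degeneracy) - log_g[E_ground]
    # Stable log of the sum over the high-energy branch (log-sum-exp).
    highs = [lg for E, lg in log_g.items() if E > E_cut]
    m = max(highs)
    log_sum = m + math.log(sum(math.exp(lg - m) for lg in highs))
    shift_plus = N * math.log(2.0) - log_sum
    return {E: lg + (shift_minus if E <= E_cut else shift_plus)
            for E, lg in log_g.items()}
```

Each branch thus carries the error accumulated in its own phase only, instead of importing the (possibly large) error made while tunneling through the mixed-phase region.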
This was exemplified by studying the two-dimensional Ising model on L × L square lattices (periodic boundary conditions) below the critical bulk temperature on the whole magnetization interval [−L², L²], whereby the sampling of configurations with magnetization M = Σ_i S_i was biased with the inverse probability distribution of the magnetization g^{−1}(M). Specifically, it was found in Ref. [18] that these simulations suffered from a slowing down due to a discontinuous droplet-to-strip transition [20], i.e., τ ∝ exp(2RσL), where τ is the tunneling-time between droplet and strip configurations, σ is the interfacial tension, and R was measured to be R = 0.121(14). Note, that one has R ≈ 1 for non-multicanonical simulations. Of course, one needs a fairly good approximation to g(M) in order to sample the considered Hamiltonian in the multicanonical ensemble. Within the framework of WLS one may therefore simulate the system at a certain inverse temperature β of interest by employing the flipping probability (single-spin-flip Metropolis)

p_{C→C′} = min[1, g(M_C)/g(M_{C′}) exp(−β [E_{C′} − E_C])],  (13)

for the transition from the state C to the state C′. Each time a state with magnetization M is visited, one updates g(M) according to g(M) → g(M)·f_i, in complete analogy to the case where g(E) is used. Once this procedure has rendered g(M) accurate enough, one makes a production run, where g(M) is not altered anymore. Thermodynamic quantities can then be obtained by reweighting to the canonical ensemble. For the first order interface transitions in thin Ising-films as studied here, we have found evidence that geometrical transitions in the ensemble realized by Wang-Landau sampling in the space of magnetization indeed hamper the simulations. While this poses no problem for the smaller systems (cf. Fig. 8), we observe pronounced effects for the largest considered system size. This is shown in Fig. 5(b), where part of a time series is depicted which was recorded for D = 12 and L = 128 during WLS sampling in the space of total magnetization.
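A single move of WLS in magnetization space, Eq. (13), can be sketched as below. This is an illustrative stand-in, not the authors' code: for brevity a 1D periodic Ising chain replaces the thin-film Hamiltonian (1), and log g(M) is kept in a dictionary.

```python
import math
import random

def wls_magnetization_step(spins, log_g, beta, J, rng=random):
    """One single-spin-flip move of WLS in magnetization space, Eq. (13).

    Acceptance: min(1, g(M_C)/g(M_C') * exp(-beta * dE)), with g taken
    from the current log_g estimate (flat where unvisited).  A 1D periodic
    Ising chain stands in for the thin-film Hamiltonian of the paper.
    """
    L = len(spins)
    i = rng.randrange(L)
    # Energy change of flipping spin i in a 1D chain, periodic boundaries.
    dE = 2.0 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % L])
    M_old = sum(spins)
    M_new = M_old - 2 * spins[i]
    log_acc = log_g.get(M_old, 0.0) - log_g.get(M_new, 0.0) - beta * dE
    if log_acc >= 0 or rng.random() < math.exp(log_acc):
        spins[i] *= -1
    return spins
```

During the learning phase one would additionally update log_g[M] and the magnetization histogram after every move; in the production run log_g is frozen, exactly as described in the text.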
(Note also that the slowing down is more severe in case WLS using g(E) is employed for the same system size, as is obvious from Fig. 4.) In case Wang-Landau sampling is performed in energy space, the governing mechanism of the slowing down has not yet been determined. From the joint energy-order parameter distribution (Fig. 6), however, recorded for Wang-Landau sampling in the space of energy, one can at least conclude that one suffers from the fact that the ordered and the disordered phase are not distinctly separated in energy, as can be seen by inspecting the distribution of magnetization (along lines of constant energy), which shows a noticeable three-peak structure for a range of energies e/J. Further studies are clearly necessary in order to clarify whether there are connections to droplet related phenomena.

III. SIMULATION RESULTS

A. Thermodynamic Quantities

From the simulated DOS, as depicted in Fig. 7, we have calculated the first and second moment of the energy per spin,

⟨e^n⟩ = (1/N) (1/Z(β)) Σ_E E^n g(E) exp(−βE),  (14)

and the specific heat,

c = [N/(k_B T²)] [⟨e²⟩ − ⟨e⟩²].  (15)

Furthermore, important quantities like the free energy per spin can be directly computed,

f = −(1/Nβ) ln Z(β) = −(1/Nβ) ln Σ_E g(E) e^{−βE},  (16)

and the entropy per spin can be obtained from the internal energy (14) and the free energy (16),

s = (⟨e⟩ − f)/T.  (17)

By measuring microcanonical averages ⟨·⟩_E during the last stage of a one-dimensional random walk in energy space, where g(E) is updated with the smallest increment Δs_final, we can also compute canonical averages of the order parameter (and higher moments),

|m| = (1/N)|M| = (1/N)|Σ_{i=1}^N S_i|,  (18)

i.e.,

⟨|m|^n⟩ = Σ_E ⟨|m|^n⟩_E g(E) e^{−βE} / Σ_E g(E) e^{−βE}.  (19)

Thus quantities like the finite lattice susceptibility χ,

χ = [N/(k_B T)] [⟨m²⟩ − ⟨|m|⟩²],  (20)

as well as the fourth order cumulant U_4, on which we concentrate in the following and which is defined as

U_4 = 1 − ⟨m⁴⟩ / (3⟨m²⟩²),  (21)

become accessible.
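The reweighting of Eqs. (14)-(15) from a simulated DOS can be sketched as follows; this is our own minimal illustration (checked against the exact 2×2 Ising DOS), using log-sum-exp to avoid overflow since β E and log g(E) are large.

```python
import math

def canonical_moments(log_g, beta, N):
    """<e> and specific heat per spin from the DOS, Eqs. (14)-(15), k_B = 1."""
    # Log-sum-exp trick: factor out the largest Boltzmann exponent.
    terms = [(E, lg - beta * E) for E, lg in log_g.items()]
    m = max(t for _, t in terms)
    Z = sum(math.exp(t - m) for _, t in terms)
    e1 = sum((E / N) * math.exp(t - m) for E, t in terms) / Z
    e2 = sum((E / N) ** 2 * math.exp(t - m) for E, t in terms) / Z
    c = N * beta ** 2 * (e2 - e1 ** 2)   # fluctuation relation, Eq. (15)
    return e1, c

# Check against the exact 2x2 Ising DOS: g(-8J) = 2, g(0) = 12, g(8J) = 2.
log_g = {-8.0: math.log(2.0), 0.0: math.log(12.0), 8.0: math.log(2.0)}
e_hot, c_hot = canonical_moments(log_g, beta=0.0, N=4)    # <e> -> 0
e_cold, _ = canonical_moments(log_g, beta=10.0, N=4)      # <e> -> -2J
```

The same loop, with microcanonical averages ⟨|m|^n⟩_E inserted into the numerator, implements the order-parameter reweighting of Eq. (19).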
The distinctive features of first-order phase transitions are phase coexistence and metastability. For the interface localization-delocalization transition considered here, this is reflected by jump discontinuities in the internal energy e as well as the (absolute) magnetization |m| per site, which are depicted in Figs. 9 and 10, respectively, and also by hysteresis effects encountered when heating and cooling the system, as exemplified in Fig. 3. Considering the internal energy (Fig. 9) for fixed D and varying linear dimension L, one can clearly see that one actually does not observe discontinuous jumps of the quantities in question but a continuous behavior that sharpens to the asserted step-like behavior with increasing linear system size L. This rounding is related to the fact that a true phase transition can only occur in the thermodynamic limit, where in equilibrium, approaching the transition temperature from above, the energy of the system discontinuously jumps from e+ (interface in the center of the film) to e− (interface tightly bound to the wall), while for a finite volume the system may jump back and forth between the latter states and the observed equilibrium behavior is thus continuous in temperature. (Continuation of the Fig. 9 caption: In (b) the data obtained from WLS using g(M) (D = 8 and L = 32) is plotted for comparison. Here, g(E) for D = 8 and L = 32 was normalized solely with respect to the ground state. Within the inverse temperature range displayed in the inset of (a), the average relative errors in e for D = 6 amount to 0.17%, 0.54%, 0.55%, 0.42%, and 0.45% for L = 16, ..., 128, respectively. For D = 8 and L = 32 the average error amounts to 0.18% in the range 0.2455 ≤ J/k_BT ≤ 0.2485 when using g(M), while it is 1.0% within the same range when using g(E). Note, that for D = 8 and L = 96, 128, as well as for D = 12 and L = 64, the DOS was determined only once, hence no error bars are displayed.)
The rounding of the transition in finite systems can also be observed for the specific heat c depicted in Fig. 11, which exhibits narrow peaks that are remnants of the δ-function singularities one would get when differentiating the discontinuous energy in the infinite volume limit. Apart from the finite size rounding, one can see that the positions of the maxima of c and the minimum of the fourth order cumulant U_4 (Fig. 12) are systematically shifted towards higher β-values for increasing linear dimension L. From the crossings of the energy curves for different linear dimensions L, one can get a first idea about the achieved accuracy for the different film thicknesses D, because they should cross to a very good approximation in the point [21] (β_tr(D), (e+ + 2e−)/3), where β_tr(D) is the infinite system transition point. Hence, the crossing points for different L,

e(β_cross, L, D) = e(β_cross, L′, D),

actually provide an estimator for the infinite system transition temperature, which is expected to deviate from β_tr(D) only by an amount exponentially small in system size [21]. As can be seen from the inset of Fig. 9(a) in case of D = 6, the various crossings are indeed scattered in a narrow region around the extrapolated infinite volume transition point for L ≥ 32. For smaller values of L, exponential corrections still make a noticeable contribution. For the larger thicknesses D ≥ 8 the region where the energy curves cross is noticeably larger. In particular, one finds that the errors resulting from averaging over different runs are too small to fully account for the deviations (excluding L = 32, for which g(M) was employed). (Continuation of the Fig. 10 caption: Within the inverse temperature range displayed in the inset of (a), the average relative errors in |m| for D = 6 amount to 1.0%, 5.2%, 6.3%, 6.1%, and 9.3% for L = 16, ..., 128, respectively.)
This is related to the fact that for the thicknesses D = 8 and D = 12 only a single run was performed over the entire energy range (9) while further runs were restricted to the mixed phase region in between the peaks, because the slowing down, as described in the preceding subsection, was not foreseen. When one uses the normalization condition (11), the proper strategy would certainly be to enhance the simulational effort in the pure phases, down to the ground state and up to E = 0, since the reference density of states is known for T = 0 and β = 0. This is necessary, in order to minimize the accumulation of errors in the density of states, since the Wang-Landau method and similar adaptive algorithms, do in general not exhibit an error distribution that is flat in energy [36]. Hence, for D = 8 (excluding the simulation using g(M )) and D = 12, we believe the true errors to be larger than the error bars displayed in Figs. 9(b)-(c), 12(b), 11(b)-(c) and when quantitatively referring to errors of the thermodynamic quantities, we thus restrict ourselves here to D = 6, where we have reliable error estimates. Exponential corrections to the crossing points are presumably much smaller than the scatter in the energy crossings for D ≥ 8 and one may therefore conclude that the deviations in the crossings for D ≥ 8 are not due to corrections to scaling, but reveal the actual error in the density of states for this region. This is also the case for the other quantities like the specific heat for example ( Fig. 11(b)-(c)). Thus, the analysis of the systems with larger thicknesses D = 8 and D = 12 is certainly more difficult and less accurate. One can however roughly estimate the order of magnitude of the latter uncontrolled error, which also serves to support the above picture. 
For example, from the density of states of the largest system (D = 12 and L = 128), one can estimate that a relative error in the density of states g(E) of the order ∼ 10^−1 (referring to the results for the 50 × 50 2D Ising model in Ref. [16], this seems to be a reasonable assumption), in the narrow region corresponding to the peak of the ordered phase of the energy probability distribution, can result in a displacement Δβ of the peak position β_c^max of the specific heat, and also of the step location of the internal energy, which is approximately of the order Δβ/β_c^max ∼ 10^−4. In case of D = 12 and L = 48, a relative deviation of this order could already be caused by a relative error in g(E) which is of the order ∼ 10^−2 in the above region. These considerations comply well with the observed scatter.

B. Finite size scaling

When one deals with second order phase transitions, the characteristic feature is a divergent spatial correlation length ξ at the transition point β_c (where one observes fluctuations on all length scales), implying power-law singularities in thermodynamic functions such as the correlation length, magnetization, specific heat and susceptibility. This is in sharp contrast to a first order transition, where the correlation length in the coexisting pure phases remains finite and, concerning finite size scaling, the volume of the system turns out to be the relevant quantity. For a thin film geometry where one has fixed D and varying linear dimension L, finite size scaling will thus involve the quantity L². This can be shown by approximating the energy distribution P_{L,D}(e) of the pure phases by a Gaussian [21,22,23], Eq. (24) below, centered around the infinite-lattice energy per spin ⟨e⟩, where c denotes the infinite-lattice specific heat.
Since one has phase coexistence at a first-order transition, the probability distribution of the energy will be double peaked at the transition point β_tr(D) = 1/k_B T_tr(D), where e jumps from e− (low energy phases, interface at one of the two walls) to e+ (single high energy phase, interface centered in the middle of the film), i.e., the free energy branches f± intersect at a finite angle in the infinite system, as can be seen from Fig. 13(a) when inspecting the curves around the transition point (cf. also Fig. 3(b)). It is essentially this non-analyticity in the free energy which gives rise to the discontinuous behavior of the internal energy. In a finite system, however, the free energy remains differentiable and the intersection is rounded.

P_{L,D}(e) = [L²D / (2π k_B T² c)]^{1/2} exp[−(e − ⟨e⟩)² L²D / (2 k_B T² c)],  (24)

Hence, at the transition point, P_{L,D}(e) is a superposition of two Gaussians (24) centered at e = e±, while slightly away from the transition, at T = T_tr + ΔT, they are centered at energies e± + c±ΔT, where c± are the specific heats in the disordered (+) and ordered (−) phases, which are assumed to be constant in the vicinity of the transition, i.e., for sufficiently small ΔT. Each of the Gaussians is then weighted by Boltzmann factors of the corresponding free energies f±, and one thus arrives at

P_{L,D}(e) = A { (a+/√c+) exp[−(e − (e+ + c+ΔT))² L²D / (2 k_B T² c+)] + (a−/√c−) exp[−(e − (e− + c−ΔT))² L²D / (2 k_B T² c−)] },  (25)

where the weights a± are given by

a± = q± exp[∓ (f+ − f−) L²D / (2 k_B T)],  (26)

and A reads

A = exp[−(f+ + f−) L²D / (2 k_B T)] [L²D / (2π k_B T²)]^{1/2}.  (27)

Since we have a single high energy phase and two low energy ordered phases, we set q+ = 1 and q− ≡ q = 2 in the following. At the transition all phases have equal weight [21,24], such that the area under the peak at e− is q times the area under the peak at e+, which is satisfied by Eq. (25).
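The double Gaussian Ansatz of Eqs. (25)-(26) is easy to evaluate numerically, and its equal-weight property (area ratio q between the two peaks at the transition) can be verified directly. The following sketch is our own illustration with arbitrary test parameters, not fitted values from the paper.

```python
import math

def double_gaussian(e, dT, L, D, T, e_minus, e_plus, c_minus, c_plus,
                    f_minus, f_plus, q=2, kB=1.0):
    """Energy distribution P_{L,D}(e) in the double Gaussian approximation,
    Eq. (25), with weights a+- from Eq. (26) (q+ = 1, q- = q = 2).
    The overall prefactor A is omitted; normalize numerically if needed."""
    V = L * L * D
    a_plus = math.exp(-(f_plus - f_minus) * V / (2 * kB * T))
    a_minus = q * math.exp(+(f_plus - f_minus) * V / (2 * kB * T))
    out = 0.0
    for a, e0, c in ((a_plus, e_plus, c_plus), (a_minus, e_minus, c_minus)):
        var = kB * T * T * c / V          # peak width^2 shrinks as 1/(L^2 D)
        out += a / math.sqrt(c) * math.exp(-(e - (e0 + c * dT)) ** 2 / (2 * var))
    return out
```

At the transition (f+ = f−, ΔT = 0) the area of each Gaussian is a± √(2π k_B T²/V), independent of c±, so the low-energy peak carries exactly q times the weight of the high-energy one.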
Within the framework of the Ansatz (25), one then proceeds by calculating the energy moments as usual via [22]

⟨e^n⟩ = ∫ de′ e′^n P_{L,D}(e′) / ∫ de′ P_{L,D}(e′).  (28)

Computing then ⟨e⟩ at the transition point by means of Eq. (28), we obtain

⟨e⟩ = (e+ + q e−)/(1 + q),  (29)

(horizontal lines in Fig. 9), which is exact apart from exponential corrections due to mixed phase contributions, which are neglected in the double Gaussian approximation. Upon using the fluctuation relation (15), or c = d⟨e⟩/dT, in conjunction with Eq. (28), one can calculate the specific heat to leading order,

c = (a+c+ + a−c−)/(a+ + a−) + [e+ − e− + (c+ − c−)ΔT]² a+a− / [(a+ + a−)² k_B T²] · L²D,  (30)

which is seen to take its maximum for a+ = a− in Eq. (25). The position of the latter is thereby shifted away from the infinite lattice transition temperature by an amount of

ΔT = T_c^max(D, L) − T_tr(D) = [k_B T_tr² ln q / (Δe D)] · 1/L²,  (31)

and the height of the peak is found to be

c_max = (c+ + c−)/2 + [Δe² D / (4 k_B T_tr²)] L²,  (32)

where Δe ≡ e+ − e− is the latent heat. For convenience we may re-express Eq. (31) in terms of the inverse temperature β = 1/k_B T, which yields

β_c^max(D, L) = β_tr(D) − [ln 2 / (Δe D)] · 1/L².  (33)

Thus, the inverse temperature β_c^max(D) at which the specific heat peaks provides a definition for a finite-lattice (pseudo) transition temperature, from which the infinite-lattice transition temperature can be estimated via finite size scaling, i.e., by extrapolating L → ∞. A similar argumentation applies to the distribution of the order parameter P_{L,D}(m) [25], yielding the same scaling behavior for the susceptibility χ, i.e.,

β_χ^max(D, L) − β_tr(D) ∝ (L²D)^{−1},  (34)

χ_max ∝ L²D,  (35)

and one can show that the fourth order cumulant U_4 (Fig.
12) takes a minimum value at an inverse temperature β_{U_4^min}(D, L), which is again shifted,

β_{U_4^min}(D, L) − β_tr(D) ∝ (L²D)^{−1},  (36)

while the minimum U_4^min obeys

U_4^min ∝ −L²D.  (37)

Furthermore it was shown [25] that the shift in the crossing points of the cumulants for different system sizes is proportional to N^{−2}, which is negligibly small on the scale of N = L²D. Fig. 14(b) now shows the maximum values of the response functions c_max, χ_max, and the minimum U_4^min of the cumulant as function of 1/L² for the three different thicknesses D = 6, 8, 12. As can be seen from the plots, the data comply well with the behavior predicted by expressions (32), (35) and (37). Considering the fourth order cumulant U_4 in case of D = 6, one observes that sub-leading corrections to scaling are still present for the smaller linear dimensions L, but the expected linear behavior in L² is borne out for the three largest choices of L. The definitions for the finite lattice transition temperature considered so far, e.g. Eq. (33), involve leading order corrections of 1/L². An alternative definition of the transition temperature, which has the additional benefit that the latter corrections are absent, was given in Ref. [26]. Here, it is utilized that at the infinite-lattice transition point β_tr(D) all phases coexist, which implies that the sum of the weights of the q ordered phases equals q times the weight of the disordered phase, i.e.,

R(β_ew, L, D) ≡ Σ_{e≤e_cut} P_{L,D}(e, β_ew) / Σ_{e>e_cut} P_{L,D}(e, β_ew) = q,  (38)

where P_{L,D}(e) is the (finite-size) energy probability distribution, and β_ew(D, L) differs from β_tr(D) only by corrections exponentially small in system size. The energy e_cut appearing in Eq. (38) is taken to be the internal energy at the temperature where the specific heat is maximal [26].

C. Transition temperatures

Now, we can extract the infinite volume transition point β_tr(D) from the finite size data, i.e., as Eqs.
(33) and (36) suggest, by fitting the peak positions for fixed D to

β_max,min(D, L) = β_max,min(D, ∞) + a/L².  (39)

The individual results for the infinite system transition points are summarized in Table II. In the last column of Table II we state our final estimate of the infinite system transition point β_tr(D), based on weighted averages over the estimates listed in columns 2-4. Concerning the error in our final estimate of β_tr(D), we have also accounted for the scatter in the crossings of the energy curves as depicted in Fig. 9 and the crossings in the fourth order cumulant U_4, see Fig. 12. While we find that the order of magnitude of the error as determined from the various finite lattice estimators considered above complies well with all the data for D = 6, especially the latter crossing points, we may have uncontrolled errors in case of the larger thicknesses D = 8 and D = 12, due to the aforementioned lack of statistics deep in the pure phases. In these cases we consider here, as a conservative error estimate, the extremal crossing points as an upper bound to the transition point, which results in the error of β_tr(D ≥ 8) as given in the last column of Table II. Fitting the locations of the maxima of the specific heat to Eq. (39), as depicted in Fig. 14(a), one can also determine the latent heat Δe, which is however less accurate than computing Δe from the distribution P_{L,D}(E) via [27]

Δe(L, D) = Δe(D) + const × L^{−2},  (40)

which yields the values stated in column 2 of Table II. Concerning the extrapolation (39) of the positions of the minima U_4^min and the maxima c_max, we have used only data for L > 32 in case of D = 6. For these lattices, exponential corrections to β_ew(D, L) cannot be resolved within the achieved accuracy. This is also the case for the larger film thicknesses D and all choices of L. Hence, the values listed in Table II for β_ew(D) are simply averages over the various lateral system sizes L (L > 32 in case of D = 6).
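The extrapolation of Eq. (39) is an ordinary least-squares fit that is linear in 1/L². A minimal sketch (our own helper, not the authors' analysis code):

```python
def extrapolate_beta(Ls, beta_peaks):
    """Least-squares fit of beta_max(D, L) = beta_tr + a / L^2, Eq. (39).

    Ls: lateral system sizes L; beta_peaks: corresponding pseudo-transition
    inverse temperatures (e.g. specific-heat peak positions).
    Returns (beta_tr, a): the L -> infinity intercept and the slope.
    """
    xs = [1.0 / (L * L) for L in Ls]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(beta_peaks) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, beta_peaks)) / \
        sum((x - mx) ** 2 for x in xs)
    beta_tr = my - a * mx
    return beta_tr, a
```

Per Eq. (33), the fitted slope for the specific-heat peaks also yields a latent-heat estimate via a = −ln 2/(Δe D), although, as stated above, extracting Δe from P_{L,D}(E) directly is more accurate.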
D. Wetting Temperature of the semi-infinite system

In order to determine the wetting temperature β_w = lim_{D→∞} β_tr(D) of the semi-infinite system, we have studied Hamiltonian (1) with D = 12 and L = 48 along the branch of positive bulk-magnetization at the inverse temperature β = 0.251, near the expected location of the wetting temperature β_w(H_1). We have performed simulations for five different sets of surface fields (symmetric, i.e., H_1 = H_D), namely H_1/J = −0.25, −0.125, 0, 0.125, and 0.25, utilizing a conventional Metropolis algorithm in order to measure the surface magnetization m_1 = Σ_{i∈surface 1} S_i/N, using up to 10^7 MCS for averaging. This selection of surface fields allows one to reweight to all fields in the range [−0.25J, 0.25J] for a range of inverse temperatures Jβ ∈ [0.249, 0.253]. Note, that the metastability is strong enough (cf. Fig. 3) that the system remains in the ordered phase (initially all spins up) even for H_1/J = −0.25. According to the Young equation [29], the walls are wet by spin down if the difference Δσ_w between the surface free energy of the wall with respect to a positively magnetized bulk, σ_w+, and the surface free energy against a negatively magnetized bulk, σ_w−, exceeds the interfacial tension σ of the 3D Ising-model [28] at an infinite distance from the wall,

Δσ_w = σ_w+ − σ_w− > σ.  (41)

We compute Δσ_w by thermodynamic integration [30],

Δσ_w = σ_w+(−H_1) − σ_w+(H_1) = ∫_{−H_1}^{H_1} dH_1′ ⟨m_1(H_1′)⟩_β,  H_1 = 0.25J,  (42)

and determine the wetting temperature β_w(H_1) by the condition Δσ_w = σ, which yields Jβ_w(H_1) = 0.25212(5), as depicted in Fig. 15(a). Describing the semi-infinite system by means of the wetting film thickness l leads to the effective interface potential [5]

V_eff(l) = a exp(−κl) − b exp(−2κl) + c exp(−3κl),  (43)

which has the meaning of a free energy cost when placing a (flat) interface at distance l from the wall. Upon minimizing V_eff(l) with respect to l, one finds the equilibrium position of the interface.
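The thermodynamic integration of Eq. (42) reduces, in practice, to a numerical quadrature of the measured (and reweighted) surface magnetization over the field. A minimal trapezoidal sketch (our own helper function, not the authors' code):

```python
def surface_tension_difference(H_grid, m1_grid):
    """Trapezoidal estimate of Eq. (42):
    delta_sigma_w = int_{-H1}^{+H1} dH' <m_1(H')>_beta,
    with H_grid running from -H1 to +H1 and m1_grid the measured
    surface magnetizations at those fields."""
    s = 0.0
    for k in range(len(H_grid) - 1):
        s += 0.5 * (m1_grid[k] + m1_grid[k + 1]) * (H_grid[k + 1] - H_grid[k])
    return s
```

With the five simulated fields plus histogram reweighting in between, the grid can be made dense enough that the quadrature error is negligible compared to the statistical error of ⟨m_1⟩_β.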
(43) includes only the lowest powers of exp(−κl) which are necessary to describe a first order wetting transition in the semi-infinite system. The coefficient a explicitly depends on temperature, while the temperature dependence of b and c is neglected (c > 0 in the following) [37]. All coefficients have the same magnitude as the interfacial tension between bulk phases and one finds a first order wetting transition for b > 0 at a w = b 2 /4c, where the interface jumps discontinuously into the bulk [9,31]. Now, for a film one has an additional contribution from the second wall and the effective potential reads [32] ∆V eff,Film (l) = V eff (l) + V eff (D − l) − 2V eff (D/2) = c m 2 (m 2 − r) 2 + tm 2 , In the film, r > 0 gives rise to first order interface localization-delocalization transitions and t = 0 then denotes the triple temperature. Hence, for large D we have from Eq. (46) a tr = a wet + b exp(−κD/2), i.e., the triple temperature differs from the wetting temperature only by a term exponentially small in κD/2 and is larger than the wetting temperature (b > 0). Within mean field theory κ would have to be identified with the inverse bulk correlation length ξ b [5]. However, from the two-field Hamiltonian approach developed in Ref. [33] we know that κ/2 has to be replaced by κ 2 = 1 2ξ b θ , θ = 1 + ω eff /2,(49) where ω eff is the effective wetting parameter which becomes lim T →T + w ω eff = k B T /4πσξ 2 b upon lowering the temperature T towards the wetting temperature T w [34]. From a simple exponential fit of the form (48) we get κ/2 = 0.430 (8). (Note, that this has to be regarded as an effective value since we neglect any temperature dependence of κ within our range of triple temperatures β tr (D)). Evaluating now θ at T w /T cb = 0.88 where we employ ξ b ∼ 0.88 [28], yields θ ∼ 1.3, which is compatible with the values extracted for θ by Parry et al. [34] and clearly differs from the value θ = 1 expected from mean-field theory. 
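The first-order wetting scenario encoded in Eq. (43) can be checked explicitly: writing x = exp(−κl), one has V = x(a − bx + cx²), and at a_w = b²/(4c) this factorizes as V = c x (x − b/2c)², so the bound minimum is exactly degenerate with the delocalized state l → ∞. A small sketch with arbitrary illustrative coefficients (b, c, κ below are not fitted values from the paper):

```python
import math

def V_eff(l, a, b, c, kappa):
    """Effective interface potential of Eq. (43)."""
    x = math.exp(-kappa * l)
    return a * x - b * x * x + c * x ** 3

# At the first-order wetting point a_w = b^2/(4c) the potential becomes
# V = c x (x - b/2c)^2 with x = exp(-kappa l): the bound minimum at
# x = b/(2c) is degenerate with the delocalized state x -> 0.
b, c, kappa = 1.0, 1.0, 0.85          # illustrative values only
a_w = b * b / (4 * c)
l_bound = -math.log(b / (2 * c)) / kappa   # position of the bound minimum
```

Raising a slightly above a_w makes the bound minimum positive, so the interface jumps discontinuously into the bulk, which is the mechanism behind the first-order transitions discussed above.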
Of course, making more quantitative statements would require data from additional film thicknesses D, but the above considerations clearly indicate that our data nicely supports the asserted functional dependence of β_tr(D) on D, i.e., Eq. (48).

IV. CONCLUSION

We have studied the interface localization-delocalization transition in a thin Ising-film (1) for a choice of parameters where the transition is pronounced first order for all studied thicknesses D = 6, 8, and 12. Checking for the correct behavior of the logarithm of the partition function ln Z, which should converge to N ln 2 as β → 0, we find reasonable agreement for D = 6 within error bars (cf. Table I). In contrast, for D > 6 we see rather clear deviations from the expected value, with relative deviations up to 10^−3. We attribute this behavior to a slowing down encountered in the flat energy-histogram ensemble. Difficulties also arise when one considers sampling a flat magnetization distribution, although simulation results suggest that the slowing down is less severe. Here, we find evidence for a discontinuous shape transition, as studied by Neuhaus and Hager [18]. For the larger thicknesses (D > 6) we therefore suggest employing an additional reference for the disordered phase (total number of states), in order to get the proper relative weight between the coexisting phases, thus correcting for the lack of tunneling events in the late stages of the algorithm. The triple temperatures β_tr(D) of the interface localization-delocalization transition can then be determined with a relative accuracy of the order 10^−4, while the relative error in the latent heats is of the order 10^−2. The triple temperatures are seen to differ from the wetting temperature of the semi-infinite system by a term exponentially small in film thickness D, as predicted by the sharp-kink approximation to the capillary wave Hamiltonian, provided the length scale κ is identified with the results of Parry and co-workers, i.e., Eq.
(49). When one compares the present results based on Wang-Landau sampling [13,14,15,16] to the first study of first order interface localization-delocalization transitions [8], where simple Metropolis and heatbath Monte Carlo algorithms were used, a major improvement of accuracy is clearly seen. On the other hand, the systematic problems due to entropic barriers described in our work show that it would be problematic to apply the Wang-Landau algorithm to larger systems than used here. Note, that the largest sizes used by us, 128 × 128 × 12 ∼ 1.97 · 10^5 Ising spins, distinctly exceed the sizes analyzed in most previous applications of this algorithm [13,14,15,16].

FIG. 1: (a) Thin film geometry with two free surfaces at n = 1 and n = D (shaded gray) on which magnetic surface fields H1 and HD act. Here, the surface at n = 1 favors spin up (+), while the surface at n = D favors spin down (−). Parallel to the L×L surfaces, periodic boundary conditions are imposed. (b) Delocalized interface. (c) Interface located at either of the two surfaces.

FIG. 2: Energy probability distributions P_{L,D} at equal weight. The peak positions e−(L, D) and e+(L, D) (indicated for D = 6 and L = 128) define the finite volume latent heats Δe(L, D) = e+(L, D) − e−(L, D). Arrows pointing on the energy axis indicate the interval I_center, Eq. (10), in case of L = 128 and D = 6.

FIG. 3: (a) Energy hysteresis curves. Cooling (heating) was performed at a rate of JΔβ/MCS = 4.3403 · 10^−6 (open circles; note that not all data points are plotted) and in steps of JΔβ = 0.0005, using 100 MCS for equilibration at each β and another 10^4 MCS for measuring the energy (solid line). The equilibrium curve obtained from WLS in the space of energy is also shown. The roughening temperature J/k_BT_R = 0.40758(1) [10] is indicated by an arrow. (b) Low and high temperature branch of the free energy per site f± as obtained from thermodynamic integration, which yields J/k_BT_tr(D = 12) = 0.2511(10).
The relative deviation |f_WLS − f±|/f_WLS between the thermodynamic integration and the WLS result is plotted in the inset.

TABLE I: Logarithm of the partition function −(1/N) ln Z_{β=0} of a thin Ising film for different linear dimensions L and D, in case the density of states is normalized with respect to the ground state. The value in brackets states the standard deviation. The exact value and the deviations from the latter are listed in the last two columns, respectively. For L = 32 and D = 8 the run showing the largest deviation from the exact value (−(1/N) ln Z_{β=0} = −0.692624) was excluded from data analysis. Then one has −(1/N) ln Z_{β=0} = −0.69307(9). Under # runs we have listed the number of independent simulations.

FIG. 4: Visited states (E/JN, M/N) (left hand side) and energy histogram H(E) (right hand side), recorded during WLS in the space of energy over the interval E/JN ∈ [−2.4414, −1.2207] for different stages i of the simulation. The simulation used 3.844 · 10^7 MCS in total, with a flatness criterion for the energy histogram of ε = 0.9 and a final modification increment of Δs_final ≈ 4.139 · 10^−7.

FIG. 5: (b) Selected part of a time-series of the total magnetization per spin m = M/N as produced by WLS (single-spin-flip) in the space of magnetization. (a) Droplet at surface n = D where the positive field H_D/J = 0.25 acts. (c) Percolated strip-like droplet. Note that in (a) and (c) only the positive spins are displayed as small spheres. Those spins closest to the shown L × L-surface are the lightest.
) The simulation was restricted to the interval m = M/N ∈ [−0.55949, −0.45776] after monitoring the time series of m over a much larger interval m ∈ [−0.91553, 0.10173], where Δs_i decreased from 5.0 · 10^−3 to 7.629 · 10^−8 over a simulation time of 1.632 · 10^7 MCS at J/k_B T = 0.249719. The distribution g(M) was then further iterated on the interval m ∈ [−0.55949, −0.45776], where Δs_i was refined from 1.0 · 10^−5 to 1.953 · 10^−8 within 7.27 · 10^6 MCS, and finally held fixed such that the depicted time series could be recorded. Configurations were thereby monitored along the estimated position of the barrier m ≈ −103000/N = −0.523885. Figs. 5(a) and (c) show snapshots of the two possible coexisting structures, which are the three-dimensional analogs (in the presence of a surface) of the droplet and strip shapes as studied in

FIG. 6: Joint energy-order parameter distribution as obtained from WLS in the space of energy for a system of size D = 6 and L = 16. The distribution was recorded using a fixed DOS g(E), which was taken from a usual adaptive WLS.

FIG. 7: Logarithm of the energy density of states log10(g(E)) for thicknesses D = 6, 8, 12 and linear dimensions L = 48, …, 128. Smaller choices of L (in the cases D = 6 and D = 8) are omitted in order to preserve clarity. Also indicated is the region where E_cut, appearing in Eq. (11), is typically located. Here both branches of the density of states, g− and g+, are joined (D = 8, 12). In the case of D = 6, g(E) was normalized solely with respect to the ground-state degeneracy.

FIG. 8: Panel (a) shows the double Gaussian approximation (25) to the energy probability distribution P_{L,D}(e) for the system of size L = 32 and D = 8 at the finite-volume transition point (β_tr(L,D) = 0.247255(10)), as obtained from WLS in the space of magnetization (single-spin-flip) by reweighting back to the canonical ensemble.
Panel (b) shows the corresponding full joint energy-order parameter distribution P_{L,D}(e, m), while (c) shows the projection onto the magnetization axis.

FIG. 9: Internal energy e for different linear dimensions L and film thicknesses D. Estimates for the inverse temperature Jβ_tr(D) = J/k_B T_tr(D) of the triple point are indicated by arrows. The horizontal solid lines mark the value (e+ + 2e−)/3.

FIG. 10: Average absolute magnetization per spin |m| of a thin Ising film for different linear dimensions L and film thicknesses D.

FIG. 11: Specific heat c of a thin Ising film for different linear dimensions L and film thicknesses D. In the interval 0.2400 ≤ J/k_B T ≤ 0.2415 the average relative error for D = 6 amounts to 3.9%, 9.6%, 12.0%, 5.4%, and 15.9% for L = 32, …, 128, respectively. For D = 8 and L = 32, Wang-Landau sampling in g(M) yields an average error of 2.6% within the range 0.2455 ≤ J/k_B T ≤ 0.2480. Note that we have no statistics for D = 8 and L = 96, 128 (b), as well as for D = 12 in the case of L = 64 (c).

FIG. 12: Reduced fourth-order cumulant of a thin Ising film for different linear dimensions L and film thicknesses D. The inset of panel (a) shows the region where the cumulants for the various linear sizes L cross (D = 6). In the vicinity of the minima positions, the relative errors in U_4 in the case of D = 6 amount to 11%, 30%, 17%, 11%, and 7% for L = 24, …, 128, respectively. WLS in the space of total magnetization M with fixed g(M) yields an error of 4% (D = 8, L = 12). Note that for the data corresponding to D = 8, as plotted in (b), we have no statistics for L = 96 and L = 128, i.e., for the latter sizes the DOS was estimated only once.

FIG. 13: (a) Free energy per spin f of a thin Ising film for different linear dimensions L and film thicknesses D. Note that in (a) only the data for L = 128 are plotted, while (b) shows f on a finer scale.
In the range 0.23 ≤ J/k_B T ≤ 0.27, the average relative error in the free energy (D = 6) is 0.0143%, 0.011%, and 0.00026% for L = 16, 32, and 128, respectively. (c) Entropy per spin s for D = 6. The error in s amounts to 0.64%, 0.64%, 0.47%, and 0.47% within the range depicted in the inset of panel (c).

FIG. 14: Panel (a): extrapolation of the peak positions β_max,min(D, L) of the specific-heat maximum c^max and the fourth-order cumulant minimum U_4^min for the different film thicknesses D. Panel (b): maxima of the specific heat c^max and the susceptibility χ^max, as well as the minimum of the fourth-order cumulant U_4^min, as a function of L^2.

where β_max,min(D, L) stands for the location β_{c^max}(D, L) of the maximum of the specific heat and the location β_{U_4^min}(D, L) of the minimum of the fourth-order cumulant at finite L, while β_max,min(D, ∞) denotes the infinite-volume limit (L → ∞) of the corresponding inverse temperatures, which is an estimate of the infinite-system transition point β_tr(D). Alternatively, we have also employed the finite-volume estimator β_ew(D, L) of the transition point, as defined by the condition (38).

TABLE: Estimates for the latent heats Δe(D) and the inverse transition temperatures of the first-order interface localization-delocalization transition for different film thicknesses D. β_{c^max}(D, ∞) and β_{U_4^min}(D, ∞) are the estimates of the transition point β_tr(D) originating from an extrapolation of peak positions as described in the text, while β_ew(D, ∞) denotes the estimate from the equal-weight rule (38). The final estimate of the inverse temperature β_tr(D) of the triple point is stated in the last column.

FIG. 15: By symmetry, σ_{w−}(−H_1) equals σ_{w+}(H_1), i.e., the free-energy cost of a wall favoring spin up with respect to a positively magnetized bulk.
Thus we can perform a

(a) Shown are the interfacial tension σ of the 3D Ising model (HP) taken from Ref. [28], fitted by an 8th-degree polynomial in order to interpolate smoothly between the data points, as well as the quantity Δσ_w appearing in the Young equation (41). The position of the crossing point yields the wetting temperature Jβ_w(H_1) = J/k_B T_w(H_1) = 0.25212(5).

In Eq. (44) we have utilized the auxiliary variable m̃ = 2 exp(−κD/2){cosh[κ(l − D/2)] − 1} = (exp(−κD/4) κ [l − D/2])^2 + higher orders of [l − D/2].

Acknowledgments

This work was supported in part by the Deutsche Forschungsgemeinschaft under grants No. Bi314/17 and Tr6/c4. Helpful and stimulating discussions with D. P. Landau and P. Virnau are gratefully acknowledged. We thank NIC Jülich and HLR Stuttgart for a grant of computer time on the CRAY-T3E supercomputer.

References

[1] A. O. Parry and R. Evans, Phys. Rev. Lett. 64, 439 (1990).
[2] M. R. Swift, A. L. Owczarek, and J. O. Indekeu, Europhys. Lett. 14, 475 (1991).
[3] J. O. Indekeu, A. L. Owczarek, and M. R. Swift, Phys. Rev. Lett. 66, 2174 (1991).
[4] A. O. Parry and R. Evans, Phys. Rev. Lett. 66, 2175 (1991).
[5] A. O. Parry and R. Evans, Phys. A 181, 250 (1992).
[6] K. Binder, D. P. Landau, and A. M. Ferrenberg, Phys. Rev. Lett. 74, 298 (1995).
[7] K. Binder, D. P. Landau, and A. M. Ferrenberg, Phys. Rev. E 51, 2823 (1995).
[8] A. M. Ferrenberg, D. P. Landau, and K. Binder, Phys. Rev. E 58, 3353 (1998).
[9] M. Müller, E. V. Albano, and K. Binder, Phys. Rev. E 62, 5281 (2000).
[10] M. Hasenbusch and K. Pinn, J. Phys. A 30, 63 (1997).
[11] K. Binder, Z. Phys. B 45, 61 (1981).
[12] R. Liebmann, Z. Phys. B 45, 243 (1982).
[13] F. Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001).
[14] F. Wang and D. P. Landau, Phys. Rev. E 64, 056101 (2001).
[15] B. J. Schulz, K. Binder, M. Müller, and D. P. Landau, Phys. Rev. E 67, 067102 (2003).
[16] B. J. Schulz, K. Binder, and M. Müller, Int. J. Mod. Phys. C 13, 477 (2002).
[17] A. B. Bortz, M. H. Kalos, and J. L. Lebowitz, J. Comput. Phys. 17, 10 (1975).
[18] T. Neuhaus and S. Hager, J. Stat. Phys. 113, 47 (2003).
[19] B. A. Berg and U. Hansmann, Phys. Rev. B 47, 497 (1993).
[20] K.-T. Leung and R. K. P. Zia, J. Phys. A 23, 4593 (1990).
[21] C. Borgs, R. Kotecky, and S. Miracle-Sole, J. Stat. Phys. 62, 529 (1991).
[22] M. M. S. Challa, D. P. Landau, and K. Binder, Phys. Rev. B 34, 1841 (1986).
[23] C. Borgs and K. Kotecky, J. Stat. Phys. 61, 79 (1990).
[24] C. Borgs and J. Z. Imbrie, Commun. Math. Phys. 123, 305 (1989).
[25] K. Vollmayer, J. D. Reger, M. Scheucher, and K. Binder, Z. Phys. B 91, 113 (1993).
[26] C. Borgs and W. Janke, Phys. Rev. Lett. 68, 1738 (1992).
[27] A. Billoire, T. Neuhaus, and B. Berg, Nucl. Phys. B 413, 795 (1994).
[28] M. Hasenbusch and K. Pinn, Phys. A 203, 189 (1994).
[29] T. Young, Philos. Trans. R. Soc. London 5, 65 (1805).
[30] M. Müller and K. Binder, Macromolecules 31, 8323 (1998).
[31] M. Müller, K. Binder, and E. V. Albano, Phys. A 279, 188 (2000).
[32] M. Müller and K. Binder, Phys. Rev. E 63, 021602 (2001).
[33] A. O. Parry and C. J. Boulter, Physica A 218, 109 (1995).
[34] A. O. Parry, C. J. Boulter, and P. S. Swain, Phys. Rev. E 52, 5768 (1995).
[35] S. Trebst, D. A. Huse, and M. Troyer, cond-mat/0401195.
[36] Recently, an adaptive algorithm was proposed [35] which aims at maximizing the number of round trips between both edges of an energy interval, which has the additional benefit of exhibiting a flat error distribution.
[37] The description of the interface in terms of the effective interface potential V_eff follows from the sharp-kink approximation to the capillary-wave Hamiltonian H_eff = ∫ dρ [(σ/2)(∇l)^2 + V_eff{l(ρ)}], where fluctuations of the local interface position are neglected.
[]
[ "Finite size effects in global quantum quenches: examples from free bosons in an harmonic trap and the one-dimensional Bose-Hubbard model", "Finite size effects in global quantum quenches: examples from free bosons in an harmonic trap and the one-dimensional Bose-Hubbard model" ]
[ "Guillaume Roux \nLPTMS\nUMR8626\nCNRS and Universite Paris-Sud\nBat. 10091405OrsayFrance\n" ]
[ "LPTMS\nUMR8626\nCNRS and Universite Paris-Sud\nBat. 10091405OrsayFrance" ]
[]
We investigate finite size effects in quantum quenches on the basis of simple energetic arguments. Distinguishing between the low-energy part of the excitation spectrum, below a microscopic energy-scale, and the high-energy regime enables one to define a crossover number of particles that is shown to diverge in the small quench limit. Another crossover number is proposed based on the fidelity between the initial and final groundstates. Both criteria can be computed using ground-state techniques that work for larger system sizes than full spectrum diagonalization. As examples, two models are studied: one with free bosons in an harmonic trap which frequency is quenched, and the one-dimensional Bose-Hubbard model, that is known to be non-integrable and for which recent studies have uncovered remarkable non-equilibrium behaviors. The diagonal weights of the time-averaged density-matrix are computed and observables obtained from this diagonal ensemble are compared with the ones from statistical ensembles. It is argued that the "thermalized" regime of the Bose-Hubbard model, previously observed in the small quench regime, experiences strong finite size effects that render difficult a thorough comparison with statistical ensembles. In addition, we show that the non-thermalized regime, emerging on finite size systems and for large interaction quenches, is not related to the existence of an equilibrium quantum critical point but to the high energy structure of the energy spectrum in the atomic limit. Its features are reminiscent of the quench from the non-interacting limit to the atomic limit.
10.1103/physreva.81.053604
[ "https://arxiv.org/pdf/0909.4620v3.pdf" ]
119,310,464
0909.4620
6fbe0a7aadfd1f1907533a038bf2128d7fe77ff5
Finite size effects in global quantum quenches: examples from free bosons in an harmonic trap and the one-dimensional Bose-Hubbard model

20 May 2010

Guillaume Roux
LPTMS, UMR 8626, CNRS and Universite Paris-Sud, Bat. 100, 91405 Orsay, France

(Dated: May 21, 2010) arXiv:0909.4620v3 [cond-mat.quant-gas]

We investigate finite size effects in quantum quenches on the basis of simple energetic arguments. Distinguishing between the low-energy part of the excitation spectrum, below a microscopic energy scale, and the high-energy regime enables one to define a crossover number of particles that is shown to diverge in the small quench limit. Another crossover number is proposed based on the fidelity between the initial and final ground-states. Both criteria can be computed using ground-state techniques that work for larger system sizes than full spectrum diagonalization. As examples, two models are studied: one with free bosons in an harmonic trap whose frequency is quenched, and the one-dimensional Bose-Hubbard model, which is known to be non-integrable and for which recent studies have uncovered remarkable non-equilibrium behaviors. The diagonal weights of the time-averaged density-matrix are computed and observables obtained from this diagonal ensemble are compared with the ones from statistical ensembles. It is argued that the "thermalized" regime of the Bose-Hubbard model, previously observed in the small quench regime, experiences strong finite size effects that render difficult a thorough comparison with statistical ensembles. In addition, we show that the non-thermalized regime, emerging on finite size systems and for large interaction quenches, is not related to the existence of an equilibrium quantum critical point but to the high-energy structure of the energy spectrum in the atomic limit.
Its features are reminiscent of the quench from the non-interacting limit to the atomic limit. The study of the non-equilibrium evolution of closed quantum many-body systems has been triggered by recent progress in cold-atom experiments, in which the atoms are hardly coupled to the environment [1,2]. Furthermore, microscopic parameters of the Hamiltonian governing the dynamics can be controlled at will and changed on microscopic timescales. In this context, the question of the unitary evolution of an isolated quantum system after a sudden change of one parameter, the so-called quantum quench, has attracted a lot of interest in both the experimental and theoretical communities [3]. Many different questions are raised by such a set-up, among which are the relaxation of observables [4-17], the question of thermalization [11,18-34], the existence of a subsystem steady-state [35-38], and the propagation of the entanglement [39-43]. Beyond these academic concerns, practical applications of quenches have been proposed through the engineering of metastable states [44,45] and of an out-of-equilibrium supersolid state in a cold-atom set-up [46]. This paper is dedicated to the thermalization issue, but restricted to specific examples and without claims on general results about the thermalization mechanism. In this context, a quench can be understood as a way to create an initial state that evolves through the dynamics of a given Hamiltonian. A common wisdom in classical mechanics is that the long-time evolution will forget about the initial state and will explore all the accessible phase-space, provided the dynamics are chaotic. Then, ergodicity allows for the use of statistical ensembles in place of time-averaging. For a closed quantum system, as the evolution is unitary and the spectrum discrete, long-time recurrences occur and the contribution of the eigenstates involved
in the dynamics is fixed by the initial state. For large enough systems, a quantum ergodic theorem was proposed [47], supporting the emergence of the microcanonical ensemble, which is the usual statistical ensemble for an isolated system. This approach aims at showing that the time-averaged density-matrix ρ̄ (see below for the definition) is macroscopically equivalent to the microcanonical ensemble. In a quantum quench, the initial state is not a "typical" state of a given energy but usually is the ground-state of the same Hamiltonian with different parameters. Consequently, the quench amplitude, i.e., how much the Hamiltonian is changed, is here another relevant quantity. Another way to regard a quench is as a perturbation of the initial state, and one may wonder whether the long-time response is sensitive to the initial state. Furthermore, numerical tools and experiments on closed systems cannot easily reach a large number of particles, so finite size effects can be important in the interpretation of the observed phenomena. This paper suggests possible approaches to the question of these finite size effects after a quantum quench, and a possible interpretation of the observations made on a particular model: the one-dimensional Bose-Hubbard model. The other model, consisting of free bosons in an harmonic trap, offers another example of finite size effects and remarkable behaviors. Some of the features of the two models are surprisingly connected. The central object governing the long-time physics after a quantum quench is the time-averaged density-matrix ρ̄, which predicts the time-averaged expectation values of any observable. This density-matrix also has connections to the heat or work done on a system [48-51].

* Electronic address: [email protected]
The weights of this "diagonal ensemble" are difficult to compute for large systems, as one needs to fully diagonalize the Hamiltonian, so one unfortunately has to work with small systems (Hilbert spaces). Other methods have been used to tackle the physics of quenches. For instance, "ab-initio" numerics have been used on both integrable and non-integrable models [11,12,19,27,35,37,38,52-55]. Numerical methods like the time-dependent density-matrix renormalization group (tDMRG) [56-58] can be used to compute the time evolution of the wave-function, but the interpretation is restricted to observables and to a finite window of time, and cannot give access to these weights. Exact results on integrable models [4,25,35,37,38,55,59-61] have the advantage of treating large systems in a non-perturbative way; on the other hand, it is not surprising that they do not always thermalize, owing to the extensive number of conserved quantities. Luttinger liquid theory, which describes the low-energy physics of one-dimensional models in terms of free bosonic fields (thus an integrable theory), has been used to compute the time evolution of the observables [7-9,62,63]. Quantum chaos methods have also helped in studying the time evolution of the Bose-Hubbard model [64-66]. Some studies focused on the relation between the fidelity and the energy distribution [54,67]. All these methods suffer from approximations and/or finite size effects, and it is sometimes hard to determine what is an artifact and what is not. Some of the results from numerical simulations seem to be contradictory [11,12,27,32,33,54], but they were carried out on different models with different ranges of parameters, and not necessarily starting from the ground-state [27] of a simply related Hamiltonian. Performing a quantum quench amounts to projecting an initial state onto the energy spectrum of the final Hamiltonian, corresponding to a certain distribution of energy ρ̄(E).
In the thermodynamic limit, a global quench is expected to drive the mean energy to the bulk of the energy spectrum, since the perturbing operator is extensive. In this high-energy domain, semi-classical physics and random-matrix theory arguments are expected to work and to make expectation values hardly depend on the energy (within a window given by the energy fluctuations) [18,20,21]: thermalization can occur in the sense that the energy distribution obtained from the quench gives the same averages for the observables as the microcanonical ensemble. This so-called "eigenstate thermalization hypothesis" (ETH) has been tested numerically [19,27,32,33] for given models (typically fermionic and hard-core bosonic models) and some given sets of parameters. No memory of the initial state (for a given mean energy) is thus found on simple observables. These results agree well with the previous findings of Ref. 12 on a similar model. Having in mind this qualitative argument, the results of Ref. 11 on the non-integrable one-dimensional (1D) Bose-Hubbard model (BHM) look rather counter-intuitive: for small quenches, a thermalized regime was found, in the sense that two independent observables computed within a (grand-)canonical ensemble (and not microcanonical) and from the time evolution gave the same results. On the contrary, a mean-field treatment of the 1D BHM interpreted in the framework of chaos theory [65] supports non-thermalization below an interaction threshold and thermalization above (mean-field theory is, however, known to fail for this strongly-correlated model, so these results are not under control). The findings of Ref. 11 were later supported by the calculation of the diagonal-ensemble distributions, which looked like an approximate Boltzmann law [54] in the small quench regime. Surprisingly, for large quenches, a non-thermalized regime was found in Ref.
11, in which the correlations bear a strong memory of the initial state (in the sense that they are closer to the ones in the initial state than to the thermalized ones). This non-equilibrium behavior was attributed to the very peculiar shape of the diagonal ensemble in this regime [54]. An important step towards the understanding of the non-thermalized regime on finite size systems was made very recently [55] by giving numerical evidence on the 1D BHM that the ETH does not apply for large quenches in finite systems and by suggesting a general framework in terms of rare events contributing to the distribution, providing a refined version of the ETH. As integrability is often one of the ingredients that play a role in the physics of quenches, we briefly recall that, for 1D quantum many-body models, integrability can be well defined for a class of models which have the property of scattering without diffraction [68]. This has two consequences that are in relation to the question of thermalization: the momenta of the particles do not redistribute [68] (a process which is believed to be essential to get the thermalized momentum distribution), and there is an extensive number of conserved quantities that separate the eigenstates into many sectors, constraining the time evolution. In the context of nuclear physics, random-matrix theory has been proposed to describe the statistical features of the bulk of the spectrum, and it is commonly conjectured that non-integrable quantum many-body or classically chaotic models display universal level statistics [69]. Level statistics have been computed in a few many-body models [70], supporting the conjecture, but these results are restricted to a few models and it cannot be excluded that diffractive models could display non-universal level statistics. The Bose-Hubbard model is a bit peculiar in this sense: if one denotes by N_max the maximum number of bosons onsite, the model is non-diffractive only for N_max = 1 [71].
In addition, if U is the interaction strength, U = 0 is an integrable point (the atomic limit J = 0 is exactly solvable as well). Level statistics and delocalization properties of the eigenstates have shown [71,72] that the BHM displays features of quantum chaotic systems for non-zero U (and larger N_max). The first goal of this paper is to discuss the crossover from the small to the large quench amplitude regime on the basis of energetic and static fidelity arguments, and to evaluate the finite size effects that are associated with this crossover. We then turn to a detailed discussion of the diagonal ensemble and the verification of the ETH in the BHM, complementary to what has been done in Refs. 54 and 55. We show that the observed Boltzmann-like regime is spoiled by strong finite size effects that prevent both an accurate definition of an effective temperature and the comparison with the microcanonical ensemble. In the large quench limit, we explain in detail that the breakdown of the ETH is actually related to the "integrable" quench limit U_i = 0 → U_f = ∞. Thus, non-thermalization in the 1D BHM is, on finite systems, reminiscent of the atomic limit. While the U = 0 limit of the Bose-Hubbard model is trivially integrable as a free boson model, the infinite-U (or atomic) limit is a bit particular: for very large U and focusing on the low-energy part of the spectrum, the model is effectively identical to an integrable 1D hard-core boson model (N_max = 1). However, we will see that, to understand the large-U limit of the quench, we will have to consider the whole excitation spectrum and not only the low-energy part. This result can be qualitatively and partially connected to the effect of the proximity to integrable points in quantum quenches, studied very recently in fermionic and hard-core bosonic models [32,33], in the sense that the observed non-thermalized regime on finite systems is connected to a particular limit in which the model has high degeneracies.
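The level statistics invoked above are often diagnosed through the mean ratio of consecutive level spacings, which is ≈ 0.53 for spectra with GOE (Wigner-Dyson) statistics and 2 ln 2 − 1 ≈ 0.386 for uncorrelated (Poisson) levels. The sketch below illustrates this diagnostic on a random GOE-like matrix and on uncorrelated levels; it is a generic illustration of the method, not a computation on the Bose-Hubbard spectra of Refs. [70-72].

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean consecutive-spacing ratio r = <min(s_n, s_{n+1}) / max(s_n, s_{n+1})>."""
    s = np.diff(np.sort(levels))
    s = s[s > 0]                                    # guard against exact degeneracies
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(0)
N = 2000
A = rng.normal(size=(N, N))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2.0)    # GOE-like real symmetric matrix
poisson_levels = np.sort(rng.uniform(size=N))       # uncorrelated levels

r_goe = mean_gap_ratio(goe_levels)                  # level repulsion: ~ 0.53
r_poisson = mean_gap_ratio(poisson_levels)          # no repulsion: ~ 0.386
```

The ratio is insensitive to the local density of states, which is why it is a convenient alternative to unfolding the spectrum before computing spacing distributions.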
Throughout the paper, we also give a simple but interesting example of a quench in a toy model consisting of free bosons confined in an harmonic trap. The motivation for it is that it surprisingly shares some qualitative features with the 1D BHM and that it allows for analytical calculations of some properties of the diagonal-ensemble distribution. This model also corresponds to a standard experimental setup (as does the BHM), although interactions would have to be taken into account for a realistic comparison. The paper is organized as follows: we first review in Sec. I the definitions of the time-averaged density-matrix, the ETH, and the computation of the diagonal weights for the two models under study. In Sec. II, we suggest two kinds of crossover numbers of particles to distinguish the small and large quench regimes. Lastly, we discuss in Sec. III the fate of the ETH in the 1D BHM on small finite size systems.

I. MODELS AND COMPUTATION OF THE WEIGHTS OF THE DIAGONAL ENSEMBLE

A. The time-averaged density-matrix and the "eigenstate thermalization hypothesis"

As discussed in recent papers [27,32,33,54,55,60], the time-averaged expectation values of any observable are governed by the time-averaged density-matrix ρ̄, which is diagonal in the final Hamiltonian eigenstate basis, provided the spectrum is non-degenerate. From now on, we only consider finite size systems that have a discrete spectrum. This leads to the so-called "diagonal ensemble", whose weights are fully determined by the overlaps between the initial state |ψ_{0,i}⟩ and the eigenstates |ψ_{n,f}⟩ of the final Hamiltonian H_f. Usually, |ψ_{0,i}⟩ is the ground-state of the initial Hamiltonian H_i, and we assume in the following that we start from this zero-temperature pure state. We also consider that the final Hamiltonian takes the form

H_f = H_i + λ H_1,   (1)
Working on a global quantum quench means that H 1 is assumed to be an extensive operator that scales with the number of particles N . The time-averaged densitymatrix is defined byρ = lim t→∞ 1 t t 0 |ψ(s) ψ(s)| ds with |ψ(t) = e −iH f t |ψ 0,i . It is important to realize that the infinite time limit is taken before the thermodynamical limit. If the spectrum has exact degeneracies, the time-averaged density-matrix reads: ρ = n p n |ψ n,f ψ n,f | + d |ψ d,f ψ d,f |(2) where n labels non-degenerate eigenstates of H f and p n = | ψ n,f |ψ 0,i | 2 are the diagonal weights. d labels the basis of the degenerate subspaces, and the vectors |ψ d,f = q d,f q d,f |ψ 0,i |q d,f keep a memory of the initial phases of |ψ 0,i with respect to the |q d,f . In the situation whereρ is block-diagonal, in order to get time-averaged results for an observable O which has off-diagonal matrix elements in the H f eigenstate basis, one would have to compute all the overlaps q d,f |ψ 0,i and q ′ d,f |O|q d,f and sum up the contributions of all a degenerate subspace. In the following, this would be the case only for the free boson model and we will actually only use observables that are diagonal because the dimensions of the degenerate sectors grows (roughly) exponentially with the number of bosons N . For the Bose-Hubbard model, one can check that the spectra are non-degenerate in each symmetry sector. For a generic non-integrable model, the "eigenstate thermalization hypothesis" (ETH) has been surmised [20][21][22][23]27], suggesting an explanation for thermalization in an isolated quantum system and a justification for the use of the microcanonical ensemble. The ETH is supported by semi-classical and random-matrix theory arguments [18,[20][21][22][23], and was checked numerically on particular models [19,27,32,33]. 
The ETH boils down to the fact that, in a given small window of energy, the diagonal observables O_n = ⟨ψ_{n,f}|O|ψ_{n,f}⟩ that contribute to the time-averaged expectation value Ō = Tr[ρ̄O] = Σ_n p_n O_n hardly depend on the eigenstate n (in short, O_n ≃ Ō within a small energy window). Consequently, any distribution peaked around the mean energy (one can show on general grounds that the relative width of the distribution scales to zero as N^{−1/2} [27], although some slower scalings could occur [54]) will give the same observables as the microcanonical ensemble, therefore accounting for thermalization. For integrable models [27,32,33], non-thermalization is explained by the fact that observables fluctuate a lot within a given energy window, which may be associated with the extensive number of conserved quantities that exist in these models. A more subtle scenario for the breakdown of the ETH was recently proposed [55], in which some "rare" states have a significant contribution to the averaged observables.

B. Free bosons in an harmonic trap

We now describe how to get the diagonal weights for two particular models. Firstly, we consider a model of N non-interacting bosons initially confined in an harmonic trap of frequency ω_i and lying in the zero-temperature ground-state. The frequency is changed to ω_f at time t = 0. For this model, the quench amplitude is defined as λ = ω_f/ω_i − 1 (taking ω_i as the unit of energy), according to the expression of the quench parameter in terms of the harmonic-oscillator ladder operators. We start with the computation of the single-particle wave-function overlaps p_n, since the results for the many-body wave-function will be expressed as a function of them. The single-particle spectrum is non-degenerate and the single-particle eigenfunctions are

φ_n(x) = (2^n n! √π σ)^{−1/2} e^{−x^2/2σ^2} H_n(x/σ), with σ = √(ℏ/mω)

and H_n the Hermite polynomials. The single-particle excitation spectrum is split into the odd- and even-parity sectors, and the overlaps are non-zero for even-parity wave-functions only. They read

p_{2n} = [(2n)!/(2^{2n} (n!)^2)] [√(1 + λ)/(1 + λ/2)] [λ/(λ + 2)]^{2n}   (3)

for integer n. The many-body wave-function of an N-boson excited configuration {n_j} = {n_0, ⋯, n_m} of the final Hamiltonian H_f (with highest occupied level m) is:
The single-particle excitation spectrum splits into odd- and even-parity sectors, and the overlaps are non-zero for even-parity wave-functions only. They read:

p_{2n} = [(2n)!/(2^{2n}(n!)²)] [√(1+λ)/(1+λ/2)] [λ/(λ+2)]^{2n}   (3)

for integer n. The many-body wave-function of an N-boson excited configuration {n_j} = {n_0, ···, n_m} of the final Hamiltonian H_f (with highest occupied level m) is:

|{n_j}⟩ = √(n_0! n_1! ··· n_m!/N!) Σ_{p∈P} |φ_{1,f}: p(1), ···, φ_{m,f}: p(N)⟩,

with P the set of all permutations and n_j the occupation of the single-particle orbital φ_{j,f}. Overlapping this state with the N-boson initial ground-state |φ_{0,i}, ···, φ_{0,i}⟩ gives the many-body weights

p_{{n_j}} = N! (p_0)^{n_0}/n_0! · (p_2)^{n_2}/n_2! ··· (p_m)^{n_m}/n_m!   (4)

In this equation, all m's are even integers. The total energy of this excitation is E_{{n_j}} = ω_f(2n_2 + 4n_4 + ··· + m n_m) + ω_f N/2, with the constraint Σ_{j=0}^{m/2} n_{2j} = N. Eq. (4) is nothing but the multinomial distribution associated with the elementary probabilities p_m, and it is thus clear that it is normalized. We also see that formula (4) is in general valid for a free boson model starting from the condensed ground-state (upon specifying the p_m). If one takes the single-particle Boltzmann factor for the p_m, one recovers the many-body Boltzmann factor for the configuration. Contrary to statistical-ensemble distributions, the weights do not show a simple dependence on the configuration energy. This quench is qualitatively similar to a Joule compression/expansion, as the 1D effective density n = Nω suddenly changes; in fact, λ = n_f/n_i − 1 is related to the ratio of the effective densities. Other examples of quantum-mechanical treatments of the Joule expansion can be found in the literature [73, 74].
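As a numerical sanity check (a sketch, not the paper's code), the even-parity weights of Eq. (3) can be generated iteratively from the exact term ratio p_{2(n+1)}/p_{2n} = [(2n+1)/(2n+2)] (λ/(λ+2))², and the many-body multinomial weights of Eq. (4) can then be summed over a truncated set of levels, as done in the numerics described below, to estimate the truncation error:

```python
import itertools
from math import sqrt, factorial

def even_weights(lam, nmax=400):
    """Even-parity weights p_{2n} of Eq. (3), from p_0 and the
    exact ratio p_{2(n+1)}/p_{2n} = (2n+1)/(2n+2) * (lam/(lam+2))**2."""
    r = (lam / (lam + 2)) ** 2
    p = sqrt(1 + lam) / (1 + lam / 2)   # p_0
    out = [p]
    for n in range(nmax):
        p *= (2 * n + 1) / (2 * n + 2) * r
        out.append(p)
    return out

def truncated_weight_sum(N, lam, n_levels):
    """Sum of the many-body weights of Eq. (4) over all configurations
    of N bosons in the lowest n_levels even-parity levels; one minus
    the result estimates the truncation error."""
    levels = even_weights(lam, n_levels)[:n_levels]
    total, K = 0.0, n_levels
    # enumerate occupation vectors with sum N via stars and bars
    for bars in itertools.combinations(range(N + K - 1), K - 1):
        occ, prev = [], -1
        for b in list(bars) + [N + K - 1]:
            occ.append(b - prev - 1)
            prev = b
        w = factorial(N)
        for n_occ, pw in zip(occ, levels):
            w *= pw ** n_occ / factorial(n_occ)
        total += w
    return total

for lam in (0.1, 1.0, 5.0):
    assert abs(sum(even_weights(lam)) - 1) < 1e-10  # Eq. (3) is normalized
assert abs(truncated_weight_sum(3, 1.0, 10) - 1) < 1e-6  # tiny truncation error
```

The rapid decay of the ratio (λ/(λ+2))² is what makes the truncated configuration scan converge quickly at moderate λ.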
In order to get the distribution of the weights versus energy, we resort to numerics: using a fixed number N_s of low-lying even-parity levels, we scan all possible configurations of N bosons in these N_s levels iteratively, up to roughly 62 × 10^9 configurations (N = 18 and N_s = 22). The truncation error associated with a finite N_s is checked by summing up the weights.

C. The one-dimensional Bose-Hubbard model

The Bose-Hubbard model on a one-dimensional lattice, known to be non-integrable for U ≠ 0, is described by the following Hamiltonian:

H = −J Σ_j [b†_{j+1} b_j + b†_j b_{j+1}] + (U/2) Σ_j n_j(n_j − 1),

with b†_j the operator creating a boson at site j and n_j = b†_j b_j the local density. J is the kinetic energy scale while U is the magnitude of the onsite repulsion. In an optical lattice, the ratio U/J can be tuned by changing the depth of the lattice and using a Feshbach resonance. When the density of bosons is fixed at n = 1 and U is increased, the zero-temperature equilibrium phase diagram of the model displays a quantum phase transition from a superfluid phase to a Mott insulating phase in which particles are localized on each site. The critical point has been located at U_c ≃ 3.3J using numerics [75]. The quenches are performed by changing the interaction parameter U_i → U_f (we set J = 1 as the unit of energy in the following), so we have λ = (U_f − U_i)/2, and the perturbing operator H_1 = Σ_j n_j(n_j − 1) is diagonal. Numerically, one must fix a maximum onsite occupancy N_max, and we take N_max = 4 unless stated otherwise. Exact diagonalization calculations are carried out using periodic boundary conditions and translational invariance. We denote by 0 ≤ k ≤ L − 1 the total-momentum symmetry sectors. The algorithm to get the ground-state and eigenstates of the Hamiltonian is a full diagonalization scheme for sizes up to L = 10 at unitary filling. For some of the quantities, we use the Lanczos algorithm up to L = 15. In Ref.
54, the Lanczos algorithm has been proposed to compute the low-energy weights of the distribution. This worked relatively well for the 1D BHM, and in particular for the spectrum-integrated quantities, but it may not be suited to all possible kinds of quenches. We notice that in the case of quenches with a mean energy deep in the bulk of the spectrum, a generalization of the Lanczos algorithm [76] that works in the bulk of a spectrum could be used to get the main weights. For what we call small quenches in the following, the larger weights are in the low-energy region, so Lanczos can generically give good results in such situations.

II. ARGUMENTS ON FINITE SIZE EFFECTS AND THE DIFFERENT REGIMES OF A QUANTUM QUENCH

The goal of this part is to quantify the distance of the quench distribution ρ̄(E) from the many-body ground-state and the low-energy region of the spectrum. A first distance is defined from an energetic argument and a second one from the overlap with the ground-state of H_f. Both criteria lead to a crossover number of bosons N_c(λ) that can be computed numerically and that diverges at small λ as a power-law. When N ≪ N_c, the quench probes the low-energy part of the spectrum, while when N ≫ N_c, high-energy physics governs the time-evolution. Neither definition depends on the integrability of the model, but we may argue that for non-integrable models there is a strong qualitative difference between the low-energy part of the spectrum and its bulk. These finite-size effects are rather generic, while other kinds of finite-size effects can emerge for a given model: this will be, for instance, the case for the BHM at large U.

A. Crossover number of particles from an energetic argument

The low-energy part of the spectrum - We first have to specify what we mean by the low-energy region of the spectrum: it corresponds to the typical energies of a few elementary excitations above the ground-state.
These elementary excitations are quasi-particles, collective modes, particle-hole excitations... Single or few excitations give a structure (dispersion relations, continuum of low-lying excitations) to the low-energy part of the many-body spectrum (see an example in Fig. 1). We denote by ∆_f the typical energy scale of a single excitation; it is a microscopic energy scale. In Bethe-ansatz-solvable or free systems, a high-energy excitation can be understood as a superposition of single-particle excitations, while this is no longer true for non-integrable systems [69, 70]. If the number of elementary excitations remains small enough, they may hardly interact and have integrable-like features in the low-energy part of the spectrum. We thus expect a smooth crossover between integrable-like and non-integrable-like behaviors with increasing energy above the ground-state, but the typical energy of this crossover is hard to evaluate, except that it must be above E_{0,f} + ∆_f.

Criterion - We consider that the energy distribution ρ̄(E) is centered around the mean energy Ē = ⟨ψ_{0,i}|H_f|ψ_{0,i}⟩ of the distribution (fixed by the initial state), as in general ∆E/(Ē − E_{0,f}) ∼ 1/√N. Since |ψ_{0,i}⟩ is not an eigenstate of H_f, we necessarily have Ē > E_{0,f}. The criterion we choose to distinguish between low-energy (or small) quenches and high-energy (or large) quenches is Ē = E*_f (see Fig. 2), where E*_f is such that E*_f − E_{0,f} = ∆_f, with E_{0,f} the ground-state energy. It corresponds to the situation where the mean energy put into the system excites roughly only one elementary excitation and is thus a finite-size effect.

[FIG. 1: A typical many-body spectrum of a finite-size system: this example is taken from the 1D Bose-Hubbard model with U_f/J = 2.5 and L = N = 10. Energies are given as a function of the total momentum k. The width of the spectrum is typically proportional to N or N², depending on the statistics. Zooms in the low-energy region and in the bulk of the spectrum are given. The low-energy region features elementary excitations up to a typical energy scale ∆_f, which is assumed to be microscopic, i.e. non-extensive; here, we take ∆_f = U_f, and the dispersion relation of the excitation branch is sketched (the line is a guide to the eyes).]

[FIG. 2: (Color online) Sketch of the energy scales in a quantum quench. The initial state builds up an energy distribution ρ̄(E) (diagonal ensemble) around a mean energy Ē fixed by the initial state. The quench amplitude λ tunes both Ē and the ground-state energy E_{0,f}, and the low-energy scale E*_f. ∆_f = E*_f − E_{0,f} is assumed to be non-extensive, while E_{0,f} and Ē are extensive. Ē = E*_f defines the crossover number of particles N_c. In the thermodynamical limit, one expects Ē ≫ E*_f for any finite λ.]

Another way to introduce the same criterion is the following: (Ē − E_{0,f})/∆_f is the energy difference between the initial state and the final ground-state in units of the typical elementary excitation energy ∆_f, the criterion corresponding to a distance of one ∆_f [87]. The criterion thus amounts to a lower bound on the energies at which one enters the bulk of the spectrum. The order of magnitude of ∆_f is set by the microscopic units of energy of the model. For instance, we will take U_f in the BHM, as it controls the sound velocity in the superfluid region and the Mott gap in the Mott phase. This criterion defines a crossover number of particles N_c (on a lattice we work at finite density, so it also corresponds to a crossover length L_c) as a function of the quench amplitude λ, such that if N ≪ N_c(λ), the energy is mostly distributed among the low-energy excitations, while if N ≫ N_c(λ), most of the weights are on high-energy excitations.
We can rewrite the criterion in a more tractable way: using the notation e = E/N for the energy per particle, and the label 0 for ground-state energies, it reads

N_c(λ) = ∆_f(λ) / [ē(λ) − e_{0,f}(λ)].   (5)

Interestingly, we expect N_c(λ) to generically diverge as λ^{−2} in the limit of small λ. Indeed, we have ē = e_{0,i} + λ h_{1,i} with h_{1,i} = ⟨ψ_{0,i}|H_1|ψ_{0,i}⟩/N, the expansion e_{0,f} ≃ e_{0,i} + (de_0/dλ)_i λ + (d²e_0/dλ²)_i λ²/2, and (de_0/dλ)_i = h_{1,i} by the Feynman-Hellmann theorem. With Eq. (1), one finally gets N_c(λ)λ² → 2∆_i/(d²e_0/dλ²)_i as λ → 0. A few comments can be made on this criterion:

• When comparing quenches from the same initial state but with different H_f, λ controls the mean energy per particle put into the system. Thus, λ is a meaningful parameter even in the thermodynamical limit.

• This definition looks qualitative, due to the rather arbitrary choice of ∆_f and to the fact that, on finite systems, the energy distribution can have a rather large width associated with energy fluctuations ∆E. We point out that N_c is a crossover number, so that N ≃ N_c has no particular meaning. Furthermore, from the divergence at small λ, one can have 1 ≪ N ≪ N_c, i.e. a situation where energy fluctuations vanish.

• When λ is scanned from 0 to a finite value, both the mean energy and the region of the spectrum that plays a role in the time-evolution (around ē) are continuously changed. One can also notice that a quench that starts from a ground-state does not necessarily give access to any energy of the H_f spectrum, contrary to the situation where one prepares the initial state at will.

• The regimes N ≫ N_c and N ≪ N_c are expected to be physically different for generic (non-integrable) systems. Below ∆_f, the density of states is usually much smaller than in the bulk of the spectrum: level spacings are of order 1/N and observables can strongly fluctuate with the eigenstate number, as can be seen in Figs.
5 and 6 (similar observations can be made in the figures of Refs. 27, 32, 33). In this low-energy region, RMT arguments are not expected to work [69] and the eigenstates may not be "typical", so we expect the ETH to fail. These qualitative observations support the difference, made at the beginning of this section, between the low-energy region and the high-energy region of the spectrum. As the full spectrum width grows as N or N² (depending on the statistics of the particles) while the number of eigenstates grows exponentially with N, the density of states in the bulk of the spectrum is exponentially large. In this "high-energy" regime (with respect to elementary excitations), semi-classical and RMT arguments are believed to work reasonably well for non-integrable models [69], which was checked on some strongly correlated systems [70]. As observed numerically on several examples [27, 32, 33], simple observables hardly depend on the eigenstate number in this regime, supporting the ETH.

• In the thermodynamical limit, we always have N ≫ N_c, and the small-quench regime is thus expected to vanish. If one wants to check the ETH on a finite-size system, one needs a sufficiently large λ in order to try to reach the bulk of the spectrum. However, we will see in this paper a counter-example (the BHM) where the ETH fails at large λ (see also Ref. 55). Even though it looks difficult to use quenches to probe very low-energy excitations in a very large system, on a finite system one could tune the mean energy from the low- to the high-energy part of the spectrum using λ. Furthermore, this small-quench regime is certainly of interest for numerical simulations, and also for experiments using a relatively small number of atoms (a few hundred or thousand).

• Lastly, it could be interesting to compare this criterion with the domain of validity of bosonization [7, 62] and conformal field theory [9, 10], but this is beyond the scope of this paper.
We note that conformal field theory can describe accurately quenches in certain integrable models in the thermodynamical limit and for arbitrary quench amplitudes [9, 10]. Non-integrable models whose low-energy features are described in terms of a free-particle (integrable) theory, such as bosonization, should display non-thermalized features, as integrable models do. In this respect, Ref. 59 gives interesting examples on the applicability of these methods to the quench situation. We now give examples of N_c(λ) for the two models under study. In the free boson model, the mean energy after the quench can be computed analytically:

ē = e_{0,i} + (ω_f/4)(ω_f/ω_i − ω_i/ω_f),

so that

N_c = ω_f/(ē − e_{0,f}) = 4 [ω_i/ω_f + ω_f/ω_i − 2]^{−1} = 4(λ + 1)/λ².

This expression diverges as 4/λ² in the small-quench regime and vanishes as 4/λ in the large-quench regime. For the 1D BHM, we take ∆_f = U_f, and N_c is given in Fig. 3 for the particular initial value U_i = 2. It displays the expected λ^{−2} divergence at small quenches. We notice that the finite-size effects on this energy-based criterion are pretty small. This can be put on general grounds for 1D systems: for critical systems, the finite-size effects on the ground-state energy per particle have a universal correction [77]:

e_0(L) = e_0(∞) − cπu/(6L²) + o(1/L²),

with u the sound velocity and c the central charge. If the system is gapped, the corrections are even smaller, as they are exponentially suppressed by a factor exp(−L/ξ), with ξ the correlation length. In the large-quench limit of the BHM, one can argue that N_c saturates to a finite value. Indeed, in the limit of large λ, one finds that N_c → 2/(⟨n²⟩_{0,i} − ⟨n²⟩_{0,f}) + O(1/λ) ≃ 2/⟨n²⟩_{0,i}, as the density fluctuations ⟨n²⟩_{0,f} are suppressed in the Mott phase. Notice that the energy fluctuations, which scale as N^{−1/2} in the 1D BHM, have been computed numerically in Ref. 54.
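The free-boson crossover formula and its two asymptotic regimes can be checked directly (a sketch; the closed form is rederived from the mean-energy expression above, with ω_i = 1):

```python
def n_c_energy(lam):
    """Crossover boson number 4(lam+1)/lam^2 from the energetic criterion."""
    return 4 * (lam + 1) / lam ** 2

def n_c_from_energies(lam, wi=1.0):
    """Same quantity, recomputed from N_c = omega_f / (ebar - e_{0,f})."""
    wf = wi * (1 + lam)
    e0i, e0f = wi / 2, wf / 2                    # ground-state energies per particle
    ebar = e0i + (wf / 4) * (wf / wi - wi / wf)  # mean energy per particle
    return wf / (ebar - e0f)

assert abs(n_c_from_energies(0.37) - n_c_energy(0.37)) < 1e-9
assert abs(n_c_energy(1e-3) * 1e-6 / 4 - 1) < 2e-3  # ~ 4/lambda^2 at small lambda
assert abs(n_c_energy(1e3) * 1e3 / 4 - 1) < 2e-3    # ~ 4/lambda at large lambda
```

The identity follows because ē − e_{0,f} = ω_f λ²/[4(1+λ)] exactly for this quench.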
The full curve and the two asymptotic behaviors can be simply computed from ground-state calculations.

B. Crossover number of particles based on the static fidelity

In the thermodynamical limit, the (squared) fidelity between the two ground-states, F = |⟨ψ_{0,i}|ψ_{0,f}⟩|², is generally expected to vanish exponentially with the system size or number of particles. Interestingly, 1 − F counts the contribution of the excited states to the time-evolution. A possible definition of a crossover number of particles is thus the value of λ and N such that F = 1/2, i.e. half of the total weight in the ground-state and half in the excited states. In the limit λ → 0, one can introduce the fidelity susceptibility χ_{i,L} through the expansion F ≃ 1 − λ²χ_{i,L}/2. The scaling of χ_{i,L} is in general non-trivial. If H_i is gapped, the scaling χ_{i,L} ∼ L has been proposed [78], which gives the divergence N_c ∼ λ^{−2}. In critical systems, super-extensivity, corresponding to a scaling χ_{i,L}/L ∼ L^{α_i} with α_i > 0, can occur [78, 79], leading to a slower divergence N_c ∼ λ^{−2/(1+α_i)} that depends on the initial state. Notice that we qualitatively expect the N_c from the fidelity to be smaller than the one based on the energetic argument because, on sufficiently large systems, F can be very small while the mean energy is still in the low-energy part of the spectrum. For the free boson model, the static fidelity as a function of λ is F = (√(1+λ)/(1+λ/2))^N. Setting F = 1/2, one gets the crossover number of bosons N_c:

N_c = ln 2 / ln[(1+λ/2)/√(1+λ)].   (6)

Notice that it also diverges in the small-quench regime, as N_c = 8 ln 2/λ², with the same power-law as for the energetic argument. In other words, this means that the many-body ground-state occupation is robust within a 25% change in ω for N = 10², 7% for N = 10³ and 2% for N = 10⁴ (see the next section for the single-particle level occupation).
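A quick check of Eq. (6) and of the small-λ asymptote quoted above (a sketch using only the closed forms of this section):

```python
import math

def fidelity(lam, N):
    """Squared ground-state overlap for N free bosons after a quench lam."""
    return (math.sqrt(1 + lam) / (1 + lam / 2)) ** N

def n_c_fidelity(lam):
    """Crossover number of bosons, Eq. (6), defined by F = 1/2."""
    return math.log(2) / math.log((1 + lam / 2) / math.sqrt(1 + lam))

assert abs(fidelity(0.25, n_c_fidelity(0.25)) - 0.5) < 1e-12  # F = 1/2 at N_c
assert 100 < n_c_fidelity(0.25) < 120   # ~25% change in omega for N = 10^2
assert abs(n_c_fidelity(1e-3) * 1e-6 / (8 * math.log(2)) - 1) < 2e-3
```

The last assertion verifies the 8 ln 2/λ² divergence, which follows from ln[(1+λ/2)/√(1+λ)] ≈ λ²/8 at small λ.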
In the large-amplitude limit, it decreases only logarithmically with λ, N_c ≃ 2 ln 2/ln λ, but the prefactor is already small. The fidelity can also be computed for the 1D BHM by Lanczos calculations. Using the curves F(λ) obtained numerically, we determined N_c(λ) for numbers of bosons from 6 to 15. The result is plotted in Fig. 3. Due to the relatively small sizes accessible with Lanczos, we cannot investigate the scaling exponent of the small-quench divergence. The ground-state fidelity of the 1D BHM has been studied in Ref. 80. We observe that the static fidelity could be computed on larger chains with matrix-product-state-based algorithms [56, 81] or quantum Monte-Carlo techniques [82].

C. Quench and transition temperature to the Bose-condensed regime in the free bosons model

The free bosons model undergoes a transition to a Bose-condensed state below a critical temperature T_c. In the 1D harmonic trap and on a finite-size system, the lowest single-particle level occupation n_0 becomes of the order of N below T_c ≃ ωN/ln(N) (standard calculations of T_c are performed in the grand-canonical ensemble; one sees that for fixed effective density ωN and N → ∞, T_c → 0, in agreement with the fact that there is no Bose condensation in this model in the thermodynamical limit, although condensed and non-condensed regimes are clearly seen on finite systems). This critical temperature corresponds to a critical energy E_c − E_0 ∼ ωN². These standard results can be used to address the question of whether or not a large quench from the many-body ground-state can drive the system into the non-condensed regime. We found that the mean energy put into the system scales as Ē ∼ E_{0,f} + ω_f Nλ, so that λ ∼ N is required to reach E_c and the non-condensed regime.
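The scalings discussed next follow from the multinomial form of the weights, with ground-level probability p_0 = √(1+λ)/(1+λ/2) from Eq. (3); a numerical sketch:

```python
import math

def p0(lam):
    """Ground-level weight p_0 = sqrt(1+lam)/(1+lam/2) from Eq. (3)."""
    return math.sqrt(1 + lam) / (1 + lam / 2)

N, lam = 10_000, 100.0
n0 = N * p0(lam)                       # mean condensate occupation N p_0
var = N * p0(lam) * (1 - p0(lam))      # multinomial variance
rel = math.sqrt(var) / n0              # relative fluctuations ~ 1/sqrt(N)

assert abs(p0(lam) * math.sqrt(lam) / 2 - 1) < 0.02  # p_0 -> 2/sqrt(lam)
assert rel < 10 / math.sqrt(N)
```

Since p_0 decays only as 1/√λ, the condensate occupation N p_0 stays extensive for any finite λ, which is the quantitative content of the statement below.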
This surprising behavior (λ diverging with the number of bosons) actually agrees with the exact scaling of the single-particle ground-state occupation number, which can be computed for the quench since we have seen that the distribution is the multinomial one: we have ⟨n_0⟩ = N p_0 ∼ N/√λ at large λ. Similarly, the fluctuations can be computed and read ⟨n_0²⟩ − ⟨n_0⟩² = N p_0(1 − p_0), so that the relative fluctuations scale as 1/√N with a λ-dependent prefactor. Consequently, starting from the many-body ground-state (for which n_0 = N), one stays in the condensed regime for finite λ, and one needs λ ∼ N^z with z > 2 to make n_0 scale to zero in the thermodynamical limit. The physical origin of the fact that the quench process makes it difficult to reach the critical temperature is that the many-body ground-state has vanishing overlaps with the excited states above T_c, because they have negligible contributions from the single-particle ground-state. Starting from a finite-temperature state, the quench could help cross the critical temperature.

III. DIAGONAL ENSEMBLE AND THERMALIZATION

In this section, we compare the expectation values of observables obtained from different ensembles: the diagonal, microcanonical and canonical ones. We also show the behavior of some local and global observables as a function of the energy per particle, to discuss the possibility of thermalization according to the ETH. The first numerical evidence that the ETH does not work for large quenches on finite systems of the 1D BHM was recently given in Ref. 55.

A. Microcanonical temperature and the density of states

As a preliminary, we discuss the finite-size effects and possible issues with the microcanonical ensemble in the models under study. The standard way to define the microcanonical temperature T_M of a closed system is from Boltzmann's formula

1/T_M = ∂s_M/∂ē,   (7)

where we use the entropy per particle s_M = S_M/N and the statistical entropy S_M(Ē) = k_B ln Ω(Ē).
Ω(Ē) is the number of states within a small energy window δE aroundĒ. Any distribution that is peaked enough (δE/Ē → 0 in the thermodynamical limit) will pick up the local density of states g(ē) through Ω(Ē) ≃ g(Ē)δE. Usually, δE is taken as the energy fluctuations with δE ∼Ē/ √ N . Thus, δE is typically much larger than microscopic energy scales such as ∆ f . For the free boson model, energies per particle are separated by ω f /N and the degeneracy g(e) of each level can be computed numerically for small systems. Asymptotic analytical results exist in the large energy limit for g(e) [83][84][85]. We can thus have access to the microcanonical entropy per particle through s M = ln g(e)/N . In Fig. 4, we show the logarithm of the density of states of the 1D Bose-Hubbard model on a finite size system (L = N = 10) for increasing values of the interaction U as a function of the energy per particle in units of U . For small interactions, the behavior is smooth and one may safely take the derivative to get the microcanonical temperature. The system has a density of states typical of a bound spectrum Hamiltonian, displaying first positive and then negative temperature regimes. For U = 12J, in the Mott phase, one observes a gap to the ground-state in the low-energy part of the spectrum and also some oscillations over a typical scale 1/N . These oscillations are easily understood in the atomic limit (J = 0) where they correspond to Mott peaks that have a high degeneracy, giving this macroscopic density of states at the center of the lobes. A small J broadens the peaks but the lobes are expected to survive for large enough U in a finite system, as one can see for U = 20J. In this large-U limit, e 0 /U gets close to zero while the maximum energy per site is proportional to the number of particles (in Fig. 4, the situation at high energies is a bit different because we cut the maximum number of bosons onsite). 
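Returning to the free-boson case mentioned above: the degeneracy g(e) entering s_M = ln g(e)/N can be counted exactly, since the number of N-boson configurations with total excitation energy E (in units of ω) is the number of integer partitions of E into at most N parts. A small counting sketch (not the paper's code):

```python
def degeneracy(E, N):
    """Number of N-boson harmonic-trap configurations with excitation
    energy E (in units of omega): partitions of E into parts of size
    at most N (conjugate to 'at most N parts'), by dynamic programming."""
    ways = [1] + [0] * E
    for part in range(1, N + 1):
        for e in range(part, E + 1):
            ways[e] += ways[e - part]
    return ways[E]

assert degeneracy(5, 5) == 7   # p(5) = 7: all partitions of 5
assert degeneracy(4, 2) == 3   # at most two parts: 4, 3+1, 2+2
assert degeneracy(0, 3) == 1   # the ground-state is unique
```

The microcanonical entropy per particle quoted in the text is then ln degeneracy(E, N)/N, and the known asymptotics of the partition function [83-85] give the large-energy limit.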
The number of Mott lobes being of order N², the density of lobes per unit of e/U grows as N (this remark remains valid with a cut in the maximum number of bosons per site). This means that the density of states, as a function of the energy per site, will be a curve carved into more and more lobes as N increases. For large enough systems, δe will be much larger than the inter-lobe distance and will pick up the envelope of the lobes as a local density of states. On finite systems, δe and 1/N could be of the same order of magnitude, which makes the definition of the microcanonical temperature rather difficult, since it is very sensitive to the choice of δe and to the shape of the peaked distribution. In the following, the microcanonical-ensemble density-matrix ρ_M is defined in the usual way:

ρ_M = (1/Ω) Σ_{E_n ∈ [Ē−δE, Ē+δE]} |ψ_{n,f}⟩⟨ψ_{n,f}|,   (8)

with the "free" parameter δE a "small" energy window. Ω is simply the number of eigenstates in the energy window [Ē − δE, Ē + δE]. The sum over the eigenstates of H_f must be taken over all symmetry sectors. Notice that δE can be chosen by hand [27, 32, 33], or in the same way as the effective canonical temperature will be: by looking for an approximate solution of the equation Ē = Tr[ρ_M H_f] (we recall that Ē = ⟨ψ_{0,i}|H_f|ψ_{0,i}⟩ is fixed by the initial state). In that case, the solution can be multi-valued, so it does not necessarily help. Taking δE as the computed energy fluctuations does not help either, because on finite systems the distributions for the 1D BHM are quite asymmetric and have large moments. The choice of δE is in general arbitrary, and we have tried to choose the one that gives the best results for both the correlations and the energy. A partial conclusion is that the number of particles required to have a reliable definition of the microcanonical ensemble can vary a lot depending on the model and the chosen parameters.
For the 1D BHM, we see that the peculiar shape of the density of states can be an issue, although it is intimately linked to the physics of the model.

B. Canonical ensemble and effective temperature

Even though we work on a closed system, we introduce a canonical density-matrix as it was done in Ref. : ρ_B ∝ e^{−H_f/T_B}, with the effective temperature T_B fixed by solving Ē = Tr[ρ_B H_f]. As the mean energy is a continuous and increasing function of T_B, the solution is unique and the optimization procedure works well. We take k_B = 1 in the following, so that temperatures are given in the same units as the energies. Here again, the trace is taken over all symmetry sectors. The diagonal ensemble, on the contrary, has non-zero weights only in the initial-state symmetry sector, that is, the even-parity sector for the free boson model and the k = 0 sector in the 1D BHM. As the clouds of points of the distributions sometimes look exponential, another temperature can be defined by fitting the cloud of data with a normalized Boltzmann law, using a procedure that minimizes the following cost function between two distributions ρ_1 and ρ_2:

χ(ρ_1, ρ_2) = Σ_n (ln p_{n,1} − ln p_{n,2})².

Once convergence is reached, we call T_D the effective temperature obtained from the distribution. We lastly recall that, provided the density of states scales exponentially with the energy and the energy fluctuations are negligible in the thermodynamical limit, the microcanonical and canonical ensembles will lead to the same thermodynamic functions and the same temperatures.

C. Comparison of observables from different ensembles

We here focus on the comparison of observables obtained from different ensembles in the 1D BHM. The evolution of one local and one global observable as a function of the eigenstate energy per particle is given in Figs. 5 and 6. Each of these two observables is used separately in the literature, so we here give results for both for completeness.
The observables are the one-particle density-matrix, defined for a translationally invariant Hamiltonian as

g_r(e) = (1/L) Σ_{i=1}^{L} ⟨ψ_f(e)|b†_{i+r} b_i|ψ_f(e)⟩,   (10)

where |ψ_f(e)⟩ is the eigenstate of energy e, and the momentum distribution

n_k(e) = Σ_{r=−L+1}^{L−1} e^{ikr} g_r(e).   (11)

g_r(e) is a local observable since, for a given r, it can be attributed to a subsystem. On the contrary, the momentum distribution n_k integrates information from all distances and may be considered as a global quantity.

[FIG. 6: Global observable n_{k=0}(e) as a function of the energy per particle (same parameters as in Fig. 5).]

In Figs. 5 and 6, one observes that both g_1(e) and n_{k=0}(e) evolve smoothly in the superfluid regime (U/J = 2.5). One also realizes that the largest fluctuations are found in the low-energy part of the spectrum, supporting the energetic argument for the finite-size effects. If one were able to choose ē in the bulk of the "superfluid" spectrum, one would possibly find agreement with the ETH. However, for the finite-size systems at hand, one cannot reach the bulk of the spectrum before the Mott lobes emerge with λ. As was shown in Ref. 55 and is here confirmed, the observables strongly vary within each Mott lobe. We now turn to the nature of the distributions for different quenches and compare the results for g_r obtained with the different ensembles. Figs. 7 and 8 gather the data for a small and a large quench from the superfluid region with U_i = 2.

Small quench regime in the 1D BHM

When U_f = 2.5, the distribution is peaked on the final ground-state with a large weight p_0. The tail displays an exponential-like behavior which, however, has an effective temperature T_D different from T_B, determined from the energy. This is easily understood from the fact that only the very few first weights significantly contribute to the energy, and they are not aligned with the tail.
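Stepping back to the definitions (10) and (11): the relation between g_r and n_k can be illustrated on a toy periodic chain (a hypothetical exponentially decaying g_r, not data from the paper). On the L allowed momenta, Σ_k e^{ikr} = L δ_{r,0} implies the sum rule Σ_k n_k = L g_0 = N:

```python
import numpy as np

L, xi = 10, 2.0
r = np.arange(L)
dist = np.minimum(r, L - r)           # periodic distance on the ring
g = np.exp(-dist / xi)                # toy g_r with g_0 = n = 1
k = 2 * np.pi * np.arange(L) / L      # allowed momenta
nk = np.array([np.exp(1j * kk * r) @ g for kk in k]).real

assert abs(nk.sum() - L * g[0]) < 1e-10  # sum rule: sum_k n_k = N
assert np.argmax(nk) == 0                # n_{k=0} is the largest occupation
```

This is why n_{k=0}, which piles up all distances with equal phase, is the natural global probe of coherence used in Fig. 6.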
As ē is very close to e_{0,f} in this regime, and as there are only very few energies at the bottom of the spectrum, the microcanonical ensemble gives a bad mean energy and contains only a small number of eigenstates. In this regime, where p_0 is close to one, a minimal microcanonical ensemble would simply be |ψ_{0,f}⟩⟨ψ_{0,f}|, although it has no statistical meaning. Looking at the correlations g_r in Fig. 8 shows that they seem to be thermalized, in the sense that ρ_B gives a reasonable account of the correlations. However, |ψ_{0,f}⟩⟨ψ_{0,f}| also gives a reasonable account of the correlations, while ρ_M does not satisfactorily reproduce them. The system is in a regime dominated by finite-size effects, far below the crossover number of bosons. The points of Ref. 11 in the "thermalized" region of the phase diagram seem to belong to this regime dominated by finite-size effects. We have also looked at a slightly larger quench amplitude, with U_i = 1 and U_f = 4 as in Fig. 3 of Ref. 11 (however, we work on a slightly smaller system size and the data displayed in Ref. 11 were averaged over time, so correlations cannot be quantitatively compared). Since p_0 is smaller, there is a substantial difference between the correlations in the final ground-state of U_f and those from the diagonal ensemble. The canonical ensemble still gives the best agreement with ρ̄. In a sense, the shape of the distributions as given in Ref. 54 does explain the observation of Ref. 11. Yet, the distribution is clearly not a true Boltzmann one, as the temperatures obtained from the mean energy and from other observables are not identical. In order to investigate this deviation, or difficulty to define an effective temperature, we have computed the ratio between the two effective temperatures T_D and T_B in Fig. 9. For L = 6 to 9, it remains between 1 and 3.5 and has a tendency to diverge at small quenches.
Consequently, the ETH does not apply here due to the presence of strong finite-size effects, but one cannot claim either that the system is thermalized, even though some correlations look thermalized in the canonical ensemble. The observed distributions are specific to this model and to these system lengths and parameters. We also point out that a similar regime has been observed in Ref. 32, corresponding to low effective temperatures, but for which the diagonal-ensemble distributions were not plotted. Still, the behavior of large systems (N ≥ N_c) in the small-quench regime remains an open but very interesting question, as the low-energy physics will control the behavior. In this respect, we draw an argument in favor of non-thermalization: for symmetry reasons, the quench only excites states in the ground-state symmetry sector, while the statistical ensembles average over all symmetry sectors. For instance, a system with a branch of excitations E(k) can have a k = 0 gap while the whole spectrum is gapless, and hence could not look thermalized. Starting from a finite-temperature state or including symmetry-breaking terms, like disorder, could partially cure this symmetry constraint.

Large quench regime

Results for two large quenches at commensurate density n = 1, from the superfluid parameters to deep into the Mott limit and back, are given in Fig. 7 and Fig. 8. For the first one, from U_i = 2 to U_f = 20, the distribution shows very strong fluctuations of the weights within each Mott lobe [54]. In particular, large weights are present in the low-energy part of the first sub-bands. In Ref. 55, it was shown that the larger values of g_1 were correlated with the larger weights (see another example of such a plot, for an incommensurate density, in Fig. 11), explaining why the ETH does not apply in these finite-size systems. This is confirmed by looking at the time-averaged correlations, which are reproduced neither by ρ_M nor by ρ_B.
DMRG calculations [11, 55] gave evidence that non-thermal correlations g_r survive for system sizes of order 100. We now elucidate the origin of the observed non-thermalized regime, first by looking at the effect of the commensurability of the density, in order to determine whether the presence of an equilibrium critical point plays a role for large quenches. As shown in Figs. 10 and 11, the phenomenology is very similar to the commensurate case, with a non-thermalized regime at large quenches, except that there is no gap above the ground-state. Quenches that remain in the superfluid region (data not shown) also display the same behavior as in the commensurate case. These results suggest that the reason for non-thermalization is not related to the features of the low-energy spectrum, i.e. to the presence of a gap above the ground-state, but rather to the proximity of the U = ∞ limit of the model. However, in the small quench regime, where the low-energy part of the spectrum governs the out-of-equilibrium physics, the opening of a gap can certainly play a role. Unfortunately, due to the finite size effects discussed in this paper, this interesting question cannot be addressed reliably. For instance, it has been shown recently [61] that a quench in the quantum Ising model, which is integrable, is sensitive to the presence of the critical point. We note that the lobes could be qualitatively interpreted as stemming from a 1D gapped single-particle dispersion relation, both in the commensurate and incommensurate regimes. However, in the latter case, there will not be any transition to an insulating state as a function of temperature. One can actually argue that the large-U structure of the distribution is reminiscent of the atomic limit U = ∞, in which we show that both the weights and the observables fluctuate and are correlated, so that ETH is violated in this limit.
What one can show is that the weights of a quench from U_i = 0 to U_f = ∞ depend on the configuration in each of the degenerate Mott peaks of the U_f = ∞ limit. This argument does not rely on the n = 1 commensurability condition. Indeed, the eigenstates of the final Hamiltonian are simply the set of configurations {n_j}_{j=1,...,L}, with n_j the onsite occupations. The energy per particle of a configuration is

e({n_j}) = (U_f / 2N) Σ_{j=1}^{L} n_j (n_j − 1).

The initial ground-state is the superfluid state, which has equal single-particle probabilities p_j = 1/L on each site. Using formula (4), we get for the diagonal weights:

p_n = p_{{n_j}} = [N! / (n_1! n_2! ··· n_L!)] (1/L)^N.   (12)

This makes a connection to the free boson model that we also study, with the U_f energy spacing between the degenerate levels instead of ω_f, and a different energy-configuration relation. The formula is valid for bare configurations, i.e. when they are not symmetrized. Using symmetries, formula (12) picks up an additional factor depending on the degeneracy of the generalized Bloch state. One can see, by taking an example of two configurations with the same energy, or check numerically, that the weights can be different for configurations with the same energy, in the same way as for the free boson model. Consequently, in a strongly degenerate Mott peak, the diagonal weights are not equal and fluctuate. As soon as a non-integrable perturbation (here the hopping J) is turned on and lifts the degeneracy, the distribution of the weights will still fluctuate strongly within the Mott lobe. This explains the findings of Refs. 54 and 55 and of Fig. 7. Another simple observation in this limit is that two degenerate configurations can have different expectation values for the observables. An obvious one is the onsite particle distribution, which counts empty, single, double occupations and so on.
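Formula (12) is straightforward to check numerically. The following sketch (Python, purely for illustration) enumerates the bare occupation configurations for small N and L, verifies that the weights sum to one (the multinomial theorem), and shows how strongly the weights and interaction energies fluctuate between configurations:

```python
from itertools import product
from math import factorial

def diagonal_weights(N, L):
    """Diagonal weights p_{n} = N!/(n_1! ... n_L!) (1/L)^N of the
    U_i = 0 -> U_f = infinity quench, over all bare occupation
    configurations {n_j} with n_1 + ... + n_L = N (formula (12))."""
    weights = {}
    for cfg in product(range(N + 1), repeat=L):
        if sum(cfg) != N:
            continue
        p = factorial(N) / L**N
        for n in cfg:
            p /= factorial(n)
        weights[cfg] = p
    return weights

def energy_per_particle(cfg, Uf=1.0):
    """Interaction energy per particle, (U_f / 2N) * sum_j n_j (n_j - 1)."""
    N = sum(cfg)
    return Uf / (2 * N) * sum(n * (n - 1) for n in cfg)

w = diagonal_weights(4, 3)        # N = 4 bosons on L = 3 sites
print(sum(w.values()))            # 1.0: the multinomial theorem
print(w[(4, 0, 0)], w[(2, 1, 1)]) # weights differ by an order of magnitude
print(energy_per_particle((4, 0, 0)), energy_per_particle((2, 1, 1)))
```

Note that these are the weights of bare configurations; as stated above, symmetrization into generalized Bloch states modifies them by degeneracy factors, which is what produces different weights at the same energy.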
The off-diagonal correlations g_r can be non-zero if the configurations are symmetrized, and one can check numerically that they actually differ strongly for degenerate states. Notice that, in principle, one has to take into account the off-diagonal part of the time-averaged density matrix, which is non-zero in this highly degenerate limit. When one turns on J, this off-diagonal part vanishes, and the g_r still fluctuate strongly for eigenstates close in energy. Lastly, the asymmetrical correlation between the weights p_n and the observables is also observed in this limit. We show this numerically on a system with U_i = 0 and U_f = 100 in Fig. 11 (we take U_f/J = 100 and not J_f = 0 because one needs a finite, yet very small, J to make ρ diagonal). The numerics for small J/U in Fig. 7 and Fig. 5 strongly support this mechanism as an explanation for the behavior of both the distributions and the observables. We remark that the argument works as well for the 2D version of the model, which was shown to have a non-thermalized regime too [11]. The fate of this explanation in the thermodynamical limit is still an open question. A scenario could be that this mechanism works above a certain critical quench amplitude λ_c(N), but how this critical value behaves as N → ∞ remains a difficult question. Consequently, one may understand the finite size effects stemming from the large-U limit as another N_c(λ) line in Fig. 3, increasing with λ, that is specific to this model. Yet, non-thermalization in the thermodynamical limit in the BHM cannot be excluded either. Experiments in cold atoms [1] work with a relatively small number of atoms and can easily reach this large-U limit, so that such considerations are physically relevant. We also give results for a quench from the Mott to the superfluid limit. There, one could expect from Fig. 5 and 6 that ETH could work, since the observables behave smoothly with e in the final Hamiltonian.
However, for the accessible sizes, one observes that the Boltzmann law still works better than the microcanonical ensemble, with large weights at low energies. We conclude that the breakdown of the ETH can here be attributed to finite size effects.

Free boson model

We now briefly discuss the evolution of the distribution for the free boson model for a fixed number of bosons and increasing λ. Very surprisingly, the distribution of the single-particle weights versus single-particle energies ε_{2n} = 2nω_f + ω_f/2 has some remarkable features (we recall that only the even levels can be occupied, for symmetry reasons). In the limit of large energy ε ∼ 2n, we have

p_{2n} ≃ p_0(λ) e^{−2n ln|(λ+2)/λ|} / √(π 2n) ∝ e^{−ε/T(λ)} / √ε,

which has an exponential tail with the effective temperature T(λ) = ω_f / ln|(λ+2)/λ|. In the limit of small quenches, the distribution is Boltzmann-like, with a temperature T(λ) ≃ −ω_f / ln|λ/2| going to zero. This exponential-like behavior is not generic, and a simple counter-example can be found in the case of an expanding box [74]. For the many-particle situation with N = 18 and N_s = 22, we give in Fig. 12 the evolution of the distribution for increasing λ. For small quenches, the behavior looks Boltzmann-like (we do not expect a pure exponential law, due to the presence of the degeneracy function g(e); see below for a quantitative comparison), and this can be understood from the fact that the main contribution comes from single-boson excitations, which have the same weights as the single-particle ones. When λ is increased, the energy distribution gets peaked around a low-energy level and is strongly anisotropic, with the maximum at a different place from the mean energy. This distribution finally develops a high-energy tail for large λ. One can compute analytically the third moment M_3 = Tr[ρ(H − Ē)³], which is non-zero and scales as N, showing that the distribution remains anisotropic and that the anisotropy (M_3)^{1/3}/(Ē − E_{0,f}) decreases as N^{−2/3}.
In order to compare the distributions from different ensembles, we use the von Neumann entropy of a density matrix ρ, defined as S_vN(ρ) = −Tr[ρ ln ρ]. Contrary to observables, this quantity is more sensitive to the tail of the distribution. S_vN/N for the Boltzmann and diagonal ensembles is shown in Fig. 13. The density matrix ρ′_B is a Boltzmann distribution restricted to the even parity levels only. We see that for small quenches, s(ρ′_B) and s(ρ) are very close. The larger entropy for s(ρ_B) is simply due to the fact that half of the Hilbert space is not accessible to ρ for symmetry reasons: s(ρ′_B) and s(ρ_B) are actually the same, up to a factor 2 in the energy. Comparing the data to the microcanonical entropy is not relevant here, because of finite size effects (energy discretization and small degeneracy of the first levels) for the values of the mean energy accessible here.

IV. CONCLUSIONS

The first conclusion we would like to highlight is that, when carrying out numerical simulations on a finite system, one has to pay attention both to the quench amplitude and to the size of the system, in order to see in which region of the spectrum the main weights of the diagonal ensemble distribution lie. It has been shown that, although the low-energy part of the spectrum is the place where the most interesting physics is expected, one experiences large finite size effects when exploring it. A crossover number of particles, distinguishing between the small quench regime and the large quench regime, can be tentatively defined from energetic considerations or from the static fidelity between the ground-states of the initial and final Hamiltonians. One advantage is that these criteria can be computed numerically with few finite size effects (for the energy based criterion) or with ground-state techniques that work on larger systems (for both criteria). These numbers have been computed for the two models under study.
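The von Neumann entropy comparison used above is easy to reproduce on toy data. A minimal sketch of S_vN(ρ) = −Tr[ρ ln ρ] (Python/NumPy; the three-level Hamiltonian and the temperature below are made-up illustrations, not the free-boson data):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S_vN(rho) = -Tr[rho ln rho], computed from the eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-15]                  # convention: 0 ln 0 = 0
    return float(-np.sum(p * np.log(p)))

# Toy three-level system H = diag(0, 1, 2) at temperature T = 1.
T = 1.0
E = np.array([0.0, 1.0, 2.0])
rho_B = np.diag(np.exp(-E / T))
rho_B /= np.trace(rho_B)

print(von_neumann_entropy(rho_B))                     # mixed state: 0 < S < ln 3
print(von_neumann_entropy(np.diag([1.0, 0.0, 0.0])))  # pure state: S = 0
```

For a canonical density matrix the sketch reproduces the standard identity S_vN = ln Z + Ē/T, while a pure state such as |ψ_{0,f}⟩⟨ψ_{0,f}| has zero entropy.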
As the system follows a finite size crossover between the two regimes, it can actually turn out to be difficult for present-day numerics to get close enough to the thermodynamical limit, where ETH is expected to work generically, even though some examples can be found in the literature [27]. This is what happens for the one-dimensional Bose-Hubbard model, as we have seen. Hence, the thermalization-like regime in the small quench limit, deduced from the comparison of observables and from the qualitatively Boltzmann-like structure of the distribution, cannot be considered as truly thermalized, because of dominant finite size effects. Furthermore, sizes accessible with full diagonalization cannot reach the bulk of the energy spectrum before the structure of the spectrum resembles the infinite-U atomic limit. The free boson model nicely illustrates the crossover from a Boltzmann-like distribution (up to phase space constraints) at small quenches to a different distribution. We note that, due to the large density of states and to negligible energy fluctuations, we may expect the quench, canonical, and microcanonical distributions to eventually be equivalent in the thermodynamical limit. However, we have discussed the fact that the smaller the mean energy (or, equivalently, the temperature), the larger the finite-size effects. We do not believe that the observed finite-size and canonical-like distributions at small quenches are generic (notice that no claim in that direction was made in Ref. 54); they may better be understood simply as (counter-)examples. The second important conclusion is that we have shown that the non-thermalized regime observed on finite systems for large quenches in the 1D Bose-Hubbard model is actually related to the proximity of the U = ∞ atomic limit, something that may qualitatively be equivalent to the proximity of an integrable point. Indeed, this regime does not depend on the low-energy features at a commensurate density, i.e.
to the presence of the superfluid-Mott transition, and besides, the structure of the diagonal ensemble stems from the U = 0 → U_f = ∞ quench limit of the Bose-Hubbard model. In non-integrable models, the challenging issues of what the features of the small quench regime are for very large sizes, and of how the non-thermalized regime neighboring an integrable point survives in the thermodynamical limit, seem to be hardly accessible to current numerical algorithms.

FIG. 3: (Color online) Crossover number of bosons N_c obtained from the energetic argument in the 1D BHM at filling n = 1 and starting from U_i = 2. A few points obtained from the criterion based on the static fidelity are also given.

with e_{0,i/f} = ω_{i/f}/2. The energy fluctuations are given by Δe = (ē − e_{0,i}) √(2/N), showing that the distribution gets peaked in the thermodynamical limit with the usual scaling. A natural choice for Δ_f is ω_f (the only microscopic energy scale), and the crossover number of bosons can be expressed as a function of the quench amplitude:

FIG. 4: Logarithm of the density of states g(e) as a function of the energy per particle e in the 1D BHM with density n = 1, for three different interactions. Mott gaps develop at large U, splitting the density of states into many lobes separated by 1/N.

30, 32, 33, and implicitly in the (grand-)canonical calculations of Ref. 11:

ρ_B = e^{−H_f/k_B T_B} / Z, with Z = Tr[e^{−H_f/k_B T_B}]   (9)

The effective canonical temperature T_B can be defined, as in Refs. 30, 32, 33, as the solution of the equation Ē = Tr[H_f ρ_B].

FIG. 5: Local observable g_1(e) of the 1D BHM as a function of the energy per particle, for N = L = 10 and increasing interactions (for large quenches, the results were first given in Ref. 55).

FIG. 7: (Color online) Comparison of different ensembles for different quenches. The effective temperature T_B (given in Fig. 8) is fixed by the mean energy. The results are obtained on a system with L = N = 10.

FIG. 8: (Color online) Comparison of averaged correlations g_r corresponding to the parameters of Fig. 7.

FIG. 9: (Color online) Ratio of the effective canonical temperatures obtained from the distribution (T_D) and from the mean energy (T_B). Results are obtained for a quench starting at U_i = 2.

FIG. 10: (Color online) Quench from U_i = 2 to U_f = 20 for an incommensurate density n = 2/3, for which there is no equilibrium quantum critical point. The structure of the density of states, the evolution of the local correlations g_r, and the shape of the distributions are very similar to the commensurate case.

FIG. 11: (Color online) Upper panel: comparison of the different observables in different ensembles (same parameters as in Fig. 10). Middle panel: the p_n vs g_1(n) curve gives proof of the non-relaxation towards a thermal state, for the same parameters as in the upper panel. Lower panel: same plot, but for the "integrable" quench limit U_i = 0 and U_f = 100.

FIG. 12: (Color online) Evolution of the distribution times the density of states of the diagonal ensemble for free bosons in a harmonic trap, as a function of the quench amplitude. Here, the p_n are the sums of the diagonal weights in each highly degenerate excitation sector.

FIG. 13: (Color online) Von Neumann entropy per particle versus energy for the free boson model. Dashed lines are data for N = 17, full lines for N = 18 (N_s = 22).

Acknowledgments

I thank T. Barthel [86] for pointing out that the temperature extracted from the distribution, T_D, can be different from the one defined from the mean energy, T_B.

References

[1] M. Greiner, O. Mandel, T. W. Hänsch, and I. Bloch, Nature 419, 51 (2002).
[2] T. Kinoshita, T. Wenger, and D. S. Weiss, Nature 440, 900 (2006); L. E. Sadler, M. Higbie, S. R. Leslie, M. Vengalattore, and D. M. Stamper-Kurn, Nature 443, 312 (2006);
S. Hofferberth, I. Lesanovsky, B. Fischer, T. Schumm, and J. Schmiedmayer, Nature 449, 324 (2007).
[3] M. Moeckel and S. Kehrein, Annals of Physics 324, 2146 (2009).
[4] F. Iglói and H. Rieger, Phys. Rev. Lett. 85, 3233 (2000).
[5] K. Sengupta, S. Powell, and S. Sachdev, Phys. Rev. A 69, 053616 (2004).
[6] M. Rigol, A. Muramatsu, and M. Olshanii, Phys. Rev. A 74, 053616 (2006).
[7] M. A. Cazalilla, Phys. Rev. Lett. 97, 156403 (2006).
[8] E. Perfetto, Phys. Rev. B 74, 205123 (2006).
[9] P. Calabrese and J. Cardy, Phys. Rev. Lett. 96, 136801 (2006).
[10] P. Calabrese and J. Cardy, J. Stat. Mech.: Theor. Exp., P06008 (2007).
[11] C. Kollath, A. M. Läuchli, and E. Altman, Phys. Rev. Lett. 98, 180601 (2007).
[12] S. R. Manmana, S. Wessel, R. M. Noack, and A. Muramatsu, Phys. Rev. Lett. 98, 210405 (2007).
[13] D. M. Gangardt and M. Pustilnik, Phys. Rev. A 77, 041604(R) (2008).
[14] A. Hackl and S. Kehrein, Phys. Rev. B 78, 092303 (2008).
[15] M. Babadi, D. Pekker, R. Sensarma, A. Georges, and E. Demler, arXiv:0908.3483 (2009).
[16] A. Hackl, S. Kehrein, and M. Vojta, Phys. Rev. B 80, 195117 (2009).
[17] T. Barthel, C. Kasztelan, I. P. McCulloch, and U. Schollwöck, Phys. Rev. A 79, 053627 (2009).
[18] A. Peres, Phys. Rev. A 30, 1610 (1984); Phys. Rev. A 30, 504 (1984).
[19] M. Feingold, N. Moiseyev, and A. Peres, Phys. Rev. A 30, 509 (1984); M. Feingold and A. Peres, Phys. Rev. A 34, 591 (1986); R. V. Jensen and R. Shankar, Phys. Rev. Lett. 54, 1879 (1985).
[20] J. M. Deutsch, Phys. Rev. A 43, 2046 (1991).
[21] M. Srednicki, Phys. Rev. E 50, 888 (1994).
[22] M. Srednicki, arXiv:cond-mat/9410046 (2004).
[23] M. Srednicki, J. Phys. A: Math. Gen. 32, 1163 (1999).
[24] S. O. Skrovseth, Europhys. Lett. 76, 1179 (2006).
[25] M. Rigol, V. Dunjko, V. Yurovsky, and M. Olshanii, Phys. Rev. Lett. 98, 050405 (2007).
[26] D. C. Brody, D. W. Hook, and L. P. Hughston, J. Phys. A: Math. Theor. 40, F503 (2007).
[27] M. Rigol, V. Dunjko, and M. Olshanii, Nature 452, 854 (2008).
[28] P. Reimann, Phys. Rev. Lett. 101, 190403 (2008).
[29] M. Moeckel and S. Kehrein, Phys. Rev. Lett. 100, 175702 (2008).
[30] D. Rossini, A. Silva, G. Mussardo, and G. E. Santoro, Phys. Rev. Lett. 102, 127204 (2009).
[31] M. Eckstein, M. Kollar, and P. Werner, Phys. Rev. Lett. 103, 056403 (2009).
[32] M. Rigol, Phys. Rev. Lett. 103, 100403 (2009).
[33] M. Rigol, Phys. Rev. A 80, 053607 (2009).
[34] S. Sotiriadis, P. Calabrese, and J. Cardy, Europhys. Lett. 87, 20002 (2009).
[35] T. Barthel and U. Schollwöck, Phys. Rev. Lett. 100, 100601 (2008).
[36] M. Cramer, C. M. Dawson, J. Eisert, and T. J. Osborne, Phys. Rev. Lett. 100, 030602 (2008).
[37] M. Cramer, A. Flesch, I. P. McCulloch, U. Schollwöck, and J. Eisert, Phys. Rev. Lett. 101, 063001 (2008).
[38] A. Flesch, M. Cramer, I. P. McCulloch, U. Schollwöck, and J. Eisert, Phys. Rev. A 78, 033608 (2008).
[39] P. Calabrese and J. Cardy, J. Stat. Mech.: Theor. Exp., P04010 (2005).
[40] G. D. Chiara, S. Montangero, P. Calabrese, and R. Fazio, J. Stat. Mech.: Theor. Exp., P03001 (2006).
[41] A. M. Läuchli and C. Kollath, J. Stat. Mech.: Theor. Exp., P05018 (2008).
[42] M. Fagotti and P. Calabrese, Phys. Rev. A 78, 010306(R) (2008).
[43] S. R. Manmana, S. Wessel, R. M. Noack, and A. Muramatsu, Phys. Rev. B 79, 155104 (2009).
[44] F. Heidrich-Meisner, M. Rigol, A. Muramatsu, A. E. Feiguin, and E. Dagotto, Phys. Rev. A 78, 013620 (2008).
[45] F. Heidrich-Meisner, S. R. Manmana, M. Rigol, A. Muramatsu, A. E. Feiguin, and E. Dagotto, Phys. Rev. A 80, 041603(R) (2009).
[46] T. Keilmann, I. Cirac, and T. Roscilde, Phys. Rev. Lett. 102, 255304 (2009).
[47] J. von Neumann, Z. Physik 57, 30 (1929); S. Goldstein, J. L. Lebowitz, C. Mastrodonato, R. Tumulka, and N. Zanghi, arXiv:0907.0108 (2009).
[48] A. Silva, Phys. Rev. Lett. 101, 120603 (2008).
[49] S. Dorosz, T. Platini, and D. Karevski, Phys. Rev. E 77, 051120 (2008).
[50] A. Polkovnikov, arXiv:0806.2862 (2008).
[51] A. Polkovnikov, Phys. Rev. Lett. 101, 220402 (2008).
[52] M. Eckstein and M. Kollar, Phys. Rev. Lett. 100, 120404 (2008).
[53] M. Kollar and M. Eckstein, Phys. Rev. A 78, 013626 (2008).
[54] G. Roux, Phys. Rev. A 79, 021608(R) (2009).
[55] G. Biroli, C. Kollath, and A. Laeuchli, arXiv:0907.3731 (2009).
[56] G. Vidal, Phys. Rev. Lett. 93, 040502 (2004).
[57] S. R. White and A. E. Feiguin, Phys. Rev. Lett. 93, 076401 (2004).
[58] A. J. Daley, C. Kollath, U. Schollwöck, and G. Vidal, J. Stat. Mech.: Theor. Exp., P04005 (2004).
[59] P. Barmettler, M. Punk, V. Gritsev, E. Demler, and E. Altman, Phys. Rev. Lett. 102, 130603 (2009); arXiv:0911.1927 (2009).
[60] A. Faribault, P. Calabrese, and J.-S. Caux, J. Stat. Mech.: Theor. Exp., P03018 (2009).
[61] Y. Li, M. Huo, and Z. Song, Phys. Rev. B 80, 054404 (2009).
[62] A. Iucci and M. A. Cazalilla, arXiv:0903.1205 (2009); Phys. Rev. A 80, 063619 (2009); arXiv:1003.5167 (2010).
[63] G. S. Uhrig, Phys. Rev. A 80, 061602(R) (2009).
[64] J. D. Bodyfelt, M. Hiller, and T. Kottos, Europhys. Lett. 78, 50003 (2007).
[65] A. C. Cassidy, D. Mason, V. Dunjko, and M. Olshanii, Phys. Rev. Lett. 102, 025302 (2009).
[66] M. Hiller, T. Kottos, and T. Geisel, Phys. Rev. A 79, 023621 (2009).
[67] L. Campos Venuti and P. Zanardi, Phys. Rev. A 81, 022113 (2010); Phys. Rev. A 81, 032113 (2010).
[68] B. Sutherland, Beautiful Models (World Scientific, Singapore, 2004).
[69] T. A. Brody, J. Flores, J. B. French, P. A. Mello, A. Pandey, and S. S. M. Wong, Rev. Mod. Phys. 53, 385 (1981); T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, Phys. Rep. 299, 189 (1998); H. A. Weidenmüller and G. E. Mitchell, Rev. Mod. Phys. 81, 539 (2009).
[70] G. Montambaux, D. Poilblanc, J. Bellissard, and C. Sire, Phys. Rev. Lett. 70, 497 (1993); T. C. Hsu and J. C. Anglès d'Auriac, Phys. Rev. B 47, 14291 (1993); D. Poilblanc, T. Ziman, J. Bellissard, F. Mila, and G. Montambaux, Europhys. Lett. 22, 537 (1993); R. Mélin, J. Phys. I (France) 5, 787 (1995); T. Prosen, Phys. Rev. E 60, 3949 (1999).
[71] C. Kollath, G. Biroli, A. Laeuchli, and G. Roux, in preparation (2010).
[72] A. R. Kolovsky and A. Buchleitner, Europhys. Lett. 68, 632 (2004).
[73] S. Camalet, Phys. Rev. Lett. 100, 180401 (2008).
[74] C. Aslangul, J. Phys. A: Math. Theor. 41, 075301 (2008).
[75] T. D. Kühner, S. R. White, and H. Monien, Phys. Rev. B 61, 12474 (2000).
[76] T. Ericsson and A. Ruhe, Math. Comp. 35, 1251 (1980).
[77] I. Affleck, Phys. Rev. Lett. 56, 746 (1986).
[78] Wen-Long You, Ying-Wai Li, and Shi-Jian Gu, Phys. Rev. E 76, 022101 (2007); L. Campos-Venuti and P. Zanardi, Phys. Rev. Lett. 99, 095701 (2007).
[79] M. Cozzini, R. Ionicioiu, and P. Zanardi, Phys. Rev. B 76, 104420 (2007).
[80] P. Buonsante and A. Vezzani, Phys. Rev. Lett. 98, 110601 (2007).
[81] I. P. McCulloch, J. Stat. Mech.: Theor. Exp., P10014 (2007).
[82] D. Schwandt, F. Alet, and S. Capponi, Phys. Rev. Lett. 103, 170501 (2009).
[83] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas (Dover, New York, 1965).
[84] A. Z. Mekjian and S. J. Lee, Phys. Rev. A 44, 6294 (1991).
[85] A. Comtet, P. Leboeuf, and S. N. Majumdar, Phys. Rev. Lett. 98, 070404 (2007).
[86] M. Rigol, arXiv:0909.4556 (2009).
[87] We notice that Δ_f is different from the finite size gap to the first excitation (there can be a huge number of states between E_{0,f} and E*_f).
arXiv:2202.09900
A Linear Time and Constant Space Algorithm to Compute the Mixed Moments of the Multivariate Normal Distributions

Shalosh B. Ekhad and Doron Zeilberger

20 Feb 2022

Abstract: Using recurrences gotten from the Apagodu-Zeilberger Multivariate Almkvist-Zeilberger algorithm, we present a linear-time and constant-space algorithm to compute the general mixed moments of the k-variate general normal distribution, with any covariance matrix, for any specific k. Besides their obvious importance in statistics, they are also very significant in enumerative combinatorics, since, when the entries of the covariance matrix remain symbolic, they enable us to count in how many ways, in a species with k different genders, a bunch of individuals can all get married, keeping track of the different kinds of the k(k − 1)/2 possible heterosexual marriages, and the k possible same-sex marriages. We completely implement our algorithm (with an accompanying Maple package, MVNM.txt) for the bivariate and trivariate cases (and hence taking care of our own 2-sex society and a putative 3-sex society), but alas, the actual recurrences for larger k took too long for us to compute. We leave them as computational challenges.

This article is accompanied by a Maple package, MVNM.txt, available from https://sites.math.rutgers.edu/~zeilberg/tokhniot/MVNM.txt. The web-page of this article contains input and output files, referred to in this paper.

The multivariate Normal Distribution

Recall that the probability density function (see [T] and [Wik]) of the multivariate normal distribution with mean 0 and (symmetric) covariance matrix C = (c_ij), 1 ≤ i, j ≤ k, is

f_C(x) := e^{−(1/2) x^T C^{−1} x} / √((2π)^k det C).

By simple rescaling we can always assume that all the variances are 1; in other words, that the entries of the main diagonal of C are all 1. We are interested in fast computation of the mixed moments

M_C(m_1, ..., m_k) := ∫_{R^k} x_1^{m_1} ··· x_k^{m_k} f_C(x_1, ..., x_k) dx_1 ··· dx_k.

One way (not a good one!) to compute these moments, for any specific (m_1, ..., m_k), is to diagonalize C, make a change of variables, and compute an integral of the form

∫_{R^k} ∏_{i=1}^{k} ( Σ_{j=1}^{k} b_ij x_j )^{m_i} e^{−(1/2)(x_1² + ... + x_k²)} dx_1 ··· dx_k.

Then expand ∏_{i=1}^{k} ( Σ_{j=1}^{k} b_ij x_j )^{m_i} and use the fact that ∫_{−∞}^{∞} e^{−x²/2} x^r dx is 0 if r is odd, and √(2π) · r! / (2^{r/2} (r/2)!) if r is even.

A much better way is via the moment generating function ([Wik], [T]):

Σ_{0 ≤ m_1, ..., m_k < ∞} M_C(m_1, ..., m_k) t_1^{m_1} ··· t_k^{m_k} / (m_1! ··· m_k!) = e^{(1/2) Σ_{1 ≤ i,j ≤ k} t_i c_ij t_j}.

This is implemented in procedure MOMd in the Maple package MVNM.txt mentioned above, which can be used, for example, to get the (3, 3, 3, 3)-mixed moment for the generic four-variate normal distribution, with a general (symbolic) covariance matrix.

Another way is to use the fact that

∫_{R^k} ∂/∂x_1 ( x_1^{m_1} ··· x_k^{m_k} f_C(x_1, ..., x_k) ) dx_1 ··· dx_k = 0.
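The paper's implementation of the moment-generating-function approach (procedure MOMd) is in Maple. Purely as an illustration of the same idea, here is a SymPy sketch that Taylor-expands e^{(1/2) t^T C t} and reads off a mixed moment; the function name mixed_moment is ours, not part of MVNM.txt:

```python
import sympy as sp

def mixed_moment(C, m):
    """Mixed moment M_C(m_1, ..., m_k) of the k-variate normal with mean 0
    and covariance C, read off from the moment generating function
    exp((1/2) t^T C t): the t_1^m_1 ... t_k^m_k coefficient times m_1!...m_k!."""
    k = len(m)
    t = sp.symbols(f't0:{k}')
    quad = sp.Rational(1, 2) * sum(C[i][j] * t[i] * t[j]
                                   for i in range(k) for j in range(k))
    # quad is homogeneous of degree 2 in t, so truncating exp(quad) at
    # order sum(m)//2 already contains the desired coefficient
    series = sum(quad**n / sp.factorial(n) for n in range(sum(m) // 2 + 1))
    coeff = sp.expand(series)
    for i in range(k):
        coeff = coeff.coeff(t[i], m[i])
    fact = sp.Integer(1)
    for mi in m:
        fact *= sp.factorial(mi)
    return sp.simplify(coeff * fact)

c = sp.symbols('c')
C2 = [[1, c], [c, 1]]
print(mixed_moment(C2, (2, 2)))    # 2*c**2 + 1
print(mixed_moment(C2, (1, 1)))    # c
print(mixed_moment(C2, (4, 0)))    # 3
```

For the bivariate covariance matrix [[1, c], [c, 1]] this reproduces, e.g., M(2, 2) = 1 + 2c² and M(1, 1) = c; like MOMd, it is exponential in the total degree and only meant for small cases.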
Using the product and the chain rule, and expanding, one gets a certain mixed recurrence, requiring to compute all the (up to) m 1 · · · m k 'previous' values, requiring, again O(M ) k ) memory and time. But thanks to the Apagodu-Zeilberger [ApZ] multivariate extension of the Almkvist-Zeilberger [AlZ] algorithm there exist pure recurrences, with polynomial coefficients in m 1 , . . . , m k , in each of the discrete coordinate directions. The ones for k = 2 are fairly simple (they are essentially secondorder), but the ones for k = 3 are already very complicated. But once found (and we did find them!) this enables a linear-time and constant-space algorithm for computing any (m 1 , m 2 , m 3 )mixed moment. The recurrences are too complicated to be typeset here, but can be read from the Maple source-code of procedure MOM3 in our Maple package. Warning: Don't even try to use floating-points! You would get garbage, due to the complexity of the calculations that accumulate the round-off errors. Both ways would give you erroneous answers unless Digits is set very high. If you keep c12, c12, c23 symbolic, the superiority of MOM3 over MOMd is even more apparent. restart: read 'MVNM.txt ': time(MOM3(c12,c13,c23,[100,50,40])); , is less than 12 seconds, while doing the same things with MOMd takes 100 times longer! Why this is also Important in Enumerative Combinatorics? Using what Herb Wilf [Wil] used to call generatiningfunctionlogy it is easy to see that, when the entries of the covariance matrix C are kept symbolic, then for the bivariate case, the coefficient of m2) is the exact number of ways that m1 men and m2 women can all get married and there are exactly r heterosexual marriages. The coefficiet of (note that you need their total number, m 1 + m 2 + m 3 to be even, or else it is not possible) where there were exactly a 12 {S1, S2} marriages, a 13 {S1, S3} marriages, and a 23 {S2, S3} marriages. 
c r in M [[1,c],[c,1]] (m1, For example, if you want to know the number of ways 300 men and 200 women can get married where there were exactly 100 heterosexual weddings (and hence 150 same-sex marriages), type: coeff (MOM2(c,[300,200]),c,100); , to get a certain 564-digit integer. If you want to know, in a 3-gender society, the exact number of ways that 20 individuals of gender S1, 20 individuals of gender S2, and 20 individuals of gender S3, can get married (so altogether there are 30 weddings) with 9 {S1, S2} weddings, 7 {S1, S3} weddings, and 5 {S2, S3} weddings (and hence 30 − 9 − 7 − 5 = 9 same-sex marriages), type: coeff (coeff(coeff(MOM3(c12,c13,c23,[20,20,20]),c12,9),c13,7),c23,5); , getting, in 0.533 seconds, that the number is: 444975998773143505634352562176000000000 . Sample Data To see the list of lists of lists of polynomials in c12,c13,c23, let's call it L, such that L[m1][m2][m3] is the (m1,m2,m3)-mixed moment of the trivariate normal distribution with covariance matrix [[1,c12,c13],[c12,1,c13],[c13,c23,1]] for 1 ≤ m1, m2, m3 ≤ 20 look at the output file https://sites.math.rutgers.edu/~zeilberg/tokhniot/oMVNM1.txt . To see the first 35 diagonal mixed moments (i.e. up to the (70, 70, 70) mixed moment), see https://sites.math.rutgers.edu/~zeilberg/tokhniot/oMVNM2.txt . Enjoy! The recurrences for four dimensions took too long for us, and we leave them as computational challenges. Perhaps they can be done with Christoph Koutschan's [K] very powerful Mathematica package? would confirm that lu2 and lu1 are the same (good check!), but it takes 631.007 seconds.The syntax is MOM3(c12,c13,c23,[m1,m2,m3]); . For example to get the (10, 10, 10) mixed moment as a polynomial in the symbols c12, c13, c23, type: MOM3(c12,c13,c23,[10,10,10]); . This should (and does!) give the same answer as MOMd([[1,c12,c13],[c12,1,c23],[c13,c23,1]],[10,10,10]); . 
To really appreciate the superiority of our algorithm, using MOM3, over the straightforward MOM3d, try, for example restart: read 'MVNM.txt': t0:=time():lu1:=MOM3(1/2,1/3,1/4,[570,560,750]); time()- t0; , that would give you the very complicated lu1 in 2.56 seconds, while t0:=time(): lu2:=MOMd([[1,1/2,1/3],[1/2,1,1/4],[1/3,1/4,1]],[570,560,750]); , The method of differentiating Under The integral sign. Gert Almkvist, Doron Zeilberger, J. Symbolic Computation. 10Gert Almkvist and Doron Zeilberger, The method of differentiating Under The integral sign, J. Symbolic Computation 10 (1990), 571-591. Multi-Variable Zeilberger and Almkvist-Zeilberger Algorithms and the Sharpening of Wilf-Zeilberger Theory. Moa Apagodu, Doron Zeilberger, Adv. Appl. Math. 37Moa Apagodu and Doron Zeilberger, Multi-Variable Zeilberger and Almkvist-Zeilberger Al- gorithms and the Sharpening of Wilf-Zeilberger Theory, Adv. Appl. Math. 37 (2006), 139-152. Advanced applications of the holonomic systems approach. Christoph Koutschan, Linz, AustriaJohannes Kepler UniversityPhD thesisResearch Institute for Symbolic Computation (RISC)Christoph Koutschan, Advanced applications of the holonomic systems approach, PhD thesis, Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Linz, Austria, 2009. The multivariate normal distribution. Y L Tong, Springer Series in Statistics. Springer-VerlagY.L. Tong, "The multivariate normal distribution. Springer Series in Statistics". New York: Springer-Verlag, 1990. Wikipedia contributors. Multivariate normal distribution, Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia. Wikipedia contributors. Multivariate normal distribution, Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 5 Feb. 2022. Web. 8 Feb. 2022. Herbert S Wilf, Generatingfunctionology, Freely downloadable from. CRC PressSecond Edition. Third editionHerbert S. Wilf, "generatingfunctionology, Academic Press, 1990. 
Second Edition: 1994; Third edition : 2005 (CRC Press). Freely downloadable from: https://www2.math.upenn.edu/~wilf/gfologyLinked2.pdf . B Shalosh, Doron Ekhad, Zeilberger, New Brunswick; Hill Center-Busch Campus, 110 Frelinghuysen Rd., Piscataway, NJ 08854-8019, USA. EmailDepartment of Mathematics, Rutgers UniversityShaloshBEkhad, DoronZeil. at gmail dot comShalosh B. Ekhad and Doron Zeilberger, Department of Mathematics, Rutgers University (New Brunswick), Hill Center-Busch Campus, 110 Frelinghuysen Rd., Piscataway, NJ 08854-8019, USA. Email: [ShaloshBEkhad, DoronZeil] at gmail dot com . . arxiv.orgPersonal Journal of Shalosh B. Ekhad and Doron Zeil. Exclusively published in the Personal Journal of Shalosh B. Ekhad and Doron Zeil- berger and arxiv.org
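The bivariate mixed moments discussed in this article are easy to cross-check in exact arithmetic. The sketch below is Python, not the article's Maple package (the function name mixed_moment is ours); it implements one mixed recurrence of the kind obtained from the identity Int d/dx_1 (x_1^{m_1} x_2^{m_2} f_C) dx = 0 above, namely M(m1, m2) = (m1 - 1) M(m1 - 2, m2) + c m2 M(m1 - 1, m2 - 1) for a standard bivariate normal with correlation c:

```python
from fractions import Fraction
from functools import lru_cache

def mixed_moment(m1, m2, c):
    """E[X^m1 Y^m2] for a standard bivariate normal with correlation c,
    computed by a mixed recurrence of the kind described in the text
    (a Python sketch, not the paper's Maple procedures MOMd/MOM2):
        M(m1, m2) = (m1-1) M(m1-2, m2) + c*m2 * M(m1-1, m2-1)."""
    @lru_cache(maxsize=None)
    def M(a, b):
        if a < 0 or b < 0:
            return 0                    # out-of-range indices contribute nothing
        if a == 0 and b == 0:
            return 1                    # E[1] = 1
        if a > 0:
            return (a - 1) * M(a - 2, b) + c * b * M(a - 1, b - 1)
        return (b - 1) * M(a, b - 2)    # pure moment in Y when a = 0
    return M(m1, m2)
```

Passing c as an exact Fraction keeps the computation in rational arithmetic, in the spirit of the floating-point warning above; for instance, with c = 1/2 one recovers E[X^2 Y^2] = 1 + 2c^2 = 3/2.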
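The marriage counts described above also admit a closed form: choose which r individuals of each gender enter mixed marriages, match them up, and pair off the remaining individuals within each gender. The following Python sketch (our own helper names, not part of MVNM.txt) computes this coefficient of c^r directly and cross-checks it against brute-force enumeration of perfect matchings:

```python
from math import comb, factorial

def double_fact(n):
    """Double factorial n!!, with the convention (-1)!! = 1;
    (2j-1)!! counts the perfect matchings of 2j labeled points."""
    return 1 if n <= 0 else n * double_fact(n - 2)

def marriages(m1, m2, r):
    """Number of ways m1 men and m2 women can all get married with exactly
    r heterosexual marriages, i.e. the coefficient of c^r in the bivariate
    mixed moment (a closed-form sketch; coeff(MOM2(...)) is the Maple route)."""
    if r > min(m1, m2) or (m1 - r) % 2 or (m2 - r) % 2:
        return 0
    return (comb(m1, r) * comb(m2, r) * factorial(r)
            * double_fact(m1 - r - 1) * double_fact(m2 - r - 1))

def matchings_by_cross(people):
    """Brute force: bucket the perfect matchings of a tuple of gender
    labels by their number of mixed (cross-gender) pairs."""
    if not people:
        return {0: 1}
    first, rest = people[0], people[1:]
    out = {}
    for i in range((len(rest))):
        cross = int(rest[i] != first)   # pair people[0] with rest[i]
        for k, v in matchings_by_cross(rest[:i] + rest[i + 1:]).items():
            out[k + cross] = out.get(k + cross, 0) + v
    return out
```

Summing marriages(m1, m2, r) over r recovers the total number of perfect matchings (m1 + m2 - 1)!!, which is a convenient consistency check.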
arXiv:1010.3156v1 (https://arxiv.org/pdf/1010.3156v1.pdf)
THE P-ADIC ANALYTIC SUBGROUP THEOREM AND APPLICATIONS

Tzanko Matev

Abstract. We prove a p-adic analogue of Wüstholz's analytic subgroup theorem. We apply this result to show that a curve embedded in its Jacobian intersects the p-adic closure of the Mordell-Weil group transversely whenever the latter has rank equal to 1. This allows us to give some theoretical justification to Chabauty techniques applied to finding rational points on curves whose Jacobian has Mordell-Weil group of rank 1.

1. Introduction

In this paper we develop a p-adic analogue of Wüstholz's analytic subgroup theorem [11] and give an application to finding rational points on curves. Let G be a commutative algebraic group defined over the algebraic numbers and let V be a proper linear subspace of the Lie algebra of G(C), spanned by algebraic vectors. Then the analytic subgroup theorem states that if V contains a vector whose image under the exponential map is an algebraic point in G(C), then there exists an algebraic subgroup of G, containing the said point, whose Lie algebra is a linear subspace of V.

In order to state the p-adic analogue we replace the exponential map with the logarithm, since it behaves better in the non-archimedean case. Let V be a linear subspace of the p-adic Lie algebra of G spanned by algebraic vectors. Then the theorem states that if the logarithm of an algebraic point is contained in V, then there exists an algebraic subgroup, containing that point, whose Lie algebra is a linear subspace of V. The precise statement is given in Section 2.

The proof of the theorem is very similar to the proof of the original result. One constructs a section of a sheaf on G which has zeros of high order on a certain finite set of algebraic points. We then proceed to show that the negation of the theorem forces this section to have many more zeros, which, using a multiplicity estimate, gives a contradiction.
The main difference to the original proof is the replacement of certain complex analytic estimates with their p-adic analogues, which are stated in Proposition 13. The proof of the theorem is presented in Sections 4 to 8.

In Section 3 we give an application of the p-adic analytic subgroup theorem to finding rational points on curves. Let φ : C → J be an embedding of a curve defined over Q into its Jacobian. We show that if the Jacobian is simple and has Mordell-Weil rank equal to 1, then the p-adic completion of C intersects the p-adic closure of J(Q) transversally at every point in C(Q). In particular, every rational point in C has a p-adic neighbourhood which intersects the p-adic closure of the Mordell-Weil group at a single point.

Bruin and Stoll have combined the Mordell-Weil Sieve with Chabauty techniques to develop an algorithm for finding the rational points on curves whose Mordell-Weil rank is smaller than the genus (see [4], 4.4). The proof that the algorithm terminates is conditional on Stoll's Main Conjecture [10], as well as on an additional conjecture (Conjecture 4.2 in [4]). Our results imply that when the rank of the Mordell-Weil group is equal to 1, one can slightly modify their algorithm so that its termination depends only on the Main Conjecture.

2. Notation and Main Result

Let Q̄ denote the field of algebraic numbers and let M be a complete ultrametric extension of Q̄. Let G be a commutative algebraic group over Q̄ and let G_M be its base extension over M. If we denote by t_G and t_{G_M} the tangent spaces at the identity element e of G and G_M respectively, we have the relation t_{G_M} = t_G ⊗_Q̄ M, and one has a canonical Q̄-linear embedding ι : t_G → t_{G_M}, v ↦ v ⊗ 1. We will use this map to identify t_G with a subset of t_{G_M}, and to identify the space V ⊗_Q̄ M with a subspace of t_{G_M} for every vector space V ⊆ t_G. We shall call vectors that lie in the image of ι algebraic vectors.
Following Bourbaki [3, §3.7.6] one can define a logarithm map log : G(M)* → t_{G_M}, where G(M)* is the set of all points γ ∈ G(M) with the property that the identity e is an accumulation point of the set {γ^n | n ∈ N}. Then we have

Theorem 1. Assume that γ ∈ G(M)* is an algebraic point and that V ⊆ t_G is a Q̄-linear subspace such that log(γ) ∈ V ⊗_Q̄ M. Then there exists an algebraic subgroup B ⊆ G defined over Q̄, such that γ ∈ B(Q̄) and t_B ⊆ V.

Remark. There are a couple of cases where the statement of the theorem is trivial. If V = t_G, then one can simply choose B = G. If on the other hand γ is a torsion point, say γ^m = e for some m ∈ N, then we can pick B to be the group of m-torsion points. Therefore the theorem gives non-trivial implications only when V is a proper linear subspace and γ is a non-torsion point.

3. Finding points on curves

Let K be a number field with a non-archimedean valuation v, and let K_v be the completion with respect to this valuation. Let K_v^alg denote the field of all algebraic numbers in K_v. We are going to apply Theorem 1 in the case when the group G is a simple abelian variety defined over K and the linear space V is one-dimensional. Then the set G(K_v) is compact, which implies that G(K_v)* = G(K_v) and that the logarithm is globally defined. We have the following:

Lemma 2. Let A be a simple abelian variety defined over K. Let b ∈ t_A be a non-zero tangent vector and let γ ∈ A(K) be a K-rational point of infinite order. Then the vectors b and log γ are linearly independent over K_v.

Remark. We should note that if the abelian variety A is absolutely simple then the lemma is a trivial corollary of Theorem 1. The difficulty lies in showing the result for a simple, but not absolutely simple, abelian variety.

Proof. We shall give a proof by contradiction. Let A be a simple abelian variety which is not absolutely simple.
Assume that there exists an algebraic vector b and a K-rational point γ of infinite order such that log γ lies in the one-dimensional K_v-linear space spanned by b. Then Theorem 1 implies that there exists an algebraic subgroup B defined over some finite Galois extension L, such that γ ∈ B(L) and such that t_B is a subspace of the L-linear space spanned by b. Let B_0 be the connected component at the identity of this group, and let γ_0 be a multiple of γ which lies in B_0. The group B is one-dimensional (its Lie algebra lies in the line spanned by b, and it contains the non-torsion point γ), therefore B_0 is an elliptic curve. We will show that B_0 is defined over K and thus derive a contradiction with the assumption that A is simple.

The idea of the following was suggested to me by Brendan Creutz. Let σ ∈ Gal(L/K). The set of points {γ_0^n : n ∈ Z} is infinite, therefore it is Zariski dense in B_0. The Galois automorphism induces a morphism σ : A_L → A_L. This morphism is both open and continuous in the Zariski topology. It fixes the points γ_0^n for all n ∈ Z, therefore it fixes their Zariski closure. But the closure of the set {γ_0^n : n ∈ Z} is the curve B_0, therefore the morphism σ fixes B_0. Since this is true for every Galois automorphism in Gal(L/K), we have that B_0 is defined over K.

Let C be a smooth curve defined over K. We assume that C has at least one K-rational point P. Then we have an embedding defined over K

φ_P : C → J

into the Jacobian J/K of C such that φ_P(P) = e, where e ∈ J(K) is the identity element. Let A/K be a simple abelian variety, and let i : C → A be a smooth non-constant morphism defined over K which factors through φ_P. (Since A is simple this implies that the induced map J → A is surjective.) For any field extension L of K, let Ω^1(L) denote the sheaf of algebraic 1-forms with coefficients in L on a variety. The general theory of abelian varieties allows us to identify the cotangent space at the identity t_A^* ⊗ L with the space of global 1-forms Γ(A, Ω^1(L)). We shall use that identification without further mention.
Theorem 3. Assume that W := span_{K_v} log A(K) is a 1-dimensional K_v-vector space. Then for every point Q ∈ C(K_v^alg) there exists a 1-form w ∈ W^⊥ ⊂ Γ(A, Ω^1(K_v)) such that i^*(w)(Q) ≠ 0.

In other words, the image of the curve C under the map i is transversal to the v-adic closure of A(K) at every intersection point which is algebraic. The theorem, however, does not say anything about possible transcendental intersection points.

Proof. Without loss of generality we can assume that Q ∈ C(K) (otherwise we replace K by a finite extension whose v-adic completion is still K_v). Let t_{C,Q} denote the tangent space at Q, which is a 1-dimensional K-vector space. We have a map i_* : t_{C,Q} → t_{A,i(Q)}, where, similarly, t_{A,i(Q)} is the tangent space at i(Q). Composing this map with the differential of the translation map we get a K-linear homomorphism φ : t_{C,Q} → t_A. Note that, since i is smooth, the map φ is an injection. Let V := φ(t_{C,Q}). Let γ ∈ A(K) be a point of infinite order. Since A is simple, Lemma 2 implies that log γ ∉ V ⊗_K K_v, therefore the spaces V ⊗_K K_v and W are linearly independent over K_v. Hence there exists a global 1-form w ∈ Γ(A, Ω^1(K_v)) which does not vanish on V and such that w(W) = 0. It is easily seen that any such form w satisfies i^*(w)(Q) ≠ 0.

3.1. The modified Bruin-Stoll algorithm. Let C/Q be a smooth curve of genus at least two. We assume that C has a rational point P which defines the embedding φ_P : C → J into the Jacobian of C. We also assume that its Jacobian is simple, and that the group J(Q) has rank 1. We pick a prime p such that the map φ_P can be extended to a smooth morphism Φ_P : 𝒞 → 𝒥 between smooth and proper Z_p-schemes, where C_{Q_p} := C ×_Q Spec Q_p and J_{Q_p} := J ×_Q Spec Q_p are the generic fibres of 𝒞 and 𝒥 respectively. Let Q ∈ C(Q). Then, according to Theorem 3, there exists a global p-adic 1-form w ∈ Γ(C, Ω^1(Q_p)) such that its corresponding 1-form (φ_P^*)^{-1} w on J_{Q_p} annihilates log J(Q).
We fix one such form. Let t be a local parameter at P which gives a local parameter t̄ on the reduction of 𝒞 as well. Then one has the expansion w = (a_0 + a_1 t + ...) dt, where one can assume that a_i ∈ Z_p. We define v(w) := v(a_0). This definition does not depend on the choice of the local parameter. Let J^1(Q_p) be the kernel of the reduction map J(Q_p) → J(F_p), and let J^{n+1}(Q_p) := p^n J^1(Q_p), where we consider J^1(Q_p) as a formal group. We have maps φ_P^n : C(Q_p) → J(Q_p)/J^n(Q_p) defined in the obvious way. Then we have

Proposition 4. If p ≥ 3 and n ≥ v(w) + 1, then the preimage of φ_P^n(Q) contains a single rational point.

Proof. The proof is a slight modification of the proof of Proposition 6.3 in [9]. Let r_1, ..., r_g be local parameters of 𝒥 at φ_P(Q) such that their reductions give local parameters on the special fiber of 𝒥. Then there exists an index i such that φ_P^* r_i and its reduction give local parameters on 𝒞 and 𝒞 × Spec F_p respectively. Without loss of generality we can assume that i = 1. Then one has the representation w = (a_0 + a_1 r_1 + ...) dr_1. Since the residue class of φ_P^n(Q) consists precisely of the points for which max{|r_1|, ..., |r_g|} ≤ p^{-n}, we have that |r_1| ≤ p^{-n} for all points in the preimage of φ_P^n(Q). Let r := p^{-n} r_1. Then the logarithm corresponding to w is given by

λ_w(r) = a_0 p^n r + (a_1 p^{2n}/2) r^2 + ... + (a_m p^{n(m+1)}/(m+1)) r^{m+1} + ...

According to Chabauty theory (see [9] for details) we know that all the rational points lying in the preimage of φ_P^n(Q) correspond to zeros of λ_w(r) for |r| ≤ 1. It is clear that if v(a_m p^{n(m+1)}/(m+1)) > v(p^n a_0) for all m ≥ 1, then this function will have the single zero r = 0. One can easily check that for p ≥ 3 and n ≥ v(a_0) + 1 this is precisely the case. This proves the proposition.
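The valuation inequality used at the end of the proof of Proposition 4 can be sanity-checked numerically. The helper below is a Python sketch, not from the paper (vp and single_zero_condition are our own names); it verifies that for p ≥ 3 and n ≥ v(a_0) + 1 every higher-order term of λ_w has valuation strictly larger than the leading term (using only v(a_m) ≥ 0), so that r = 0 is the only zero with |r| ≤ 1:

```python
def vp(n, p):
    """p-adic valuation of a positive integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def single_zero_condition(p, va0, n, m_max=1000):
    """Check that v(a_m p^{n(m+1)}/(m+1)) > v(p^n a_0) for 1 <= m <= m_max,
    i.e. n(m+1) - v_p(m+1) > n + v(a_0), assuming only v(a_m) >= 0.
    This is the condition in the proof of Proposition 4 that forces
    lambda_w(r) to have the single zero r = 0 on |r| <= 1."""
    return all(n * (m + 1) - vp(m + 1, p) > n + va0
               for m in range(1, m_max + 1))
```

The check also illustrates why p ≥ 3 is needed: for p = 2 the term m = 1 already violates the strict inequality when n = v(a_0) + 1.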
The Mordell-Weil Sieve involves constructing a certain finite abelian group G together with a subset X_G ⊂ G and a group homomorphism ψ : J(Q) → G such that φ_P(C(Q)) ⊆ ψ^{-1}(X_G). It is expected that one can construct G in such a way that the last relation becomes an equality of sets (see [4] for details). In practice one picks a number N which kills all elements in G, and only considers the quotient map ψ_N : J(Q)/NJ(Q) → G. Then we have the following algorithm for finding the rational points on C:

(1) Fix a prime of smooth reduction p ≥ 3, and compute G_p := J(F_p), as well as X_p := φ_P(C(F_p)). Compute G and X_G using the Mordell-Weil Sieve and pick N to be a multiple of the exponent of G_p × G.

(2) Find all the residue classes of J(Q)/NJ(Q) which map into X_p × X_G ⊂ G_p × G. If there are no such classes then terminate.

... a 1-form w_Q which does not vanish on Q but such that (φ_P^*)^{-1} w_Q vanishes on J(Q). Compute v_Q := v(w_Q). Let n be the maximum of all such v_Q's.

(5) Set G_p := J(Q_p)/J^n(Q_p), X_p := φ_P^n(C(Q_p)) \ φ_P^n(A). Fix a new choice for G, X_G and N such that N is a multiple of the exponent of G_p × G.

Proposition 4 guarantees that the only rational points of C mapped to φ_P^n(A) are those coming from A, which implies that if the algorithm terminates, then the union of all sets A which we find in Step 3 will be equal to C(Q). This algorithm is in practice equivalent to the one given by Bruin and Stoll. Theorem 3, however, guarantees that one can always perform Step 4, hence the termination of the algorithm depends only on Stoll's Main Conjecture.

4. Reduction to the semistable case

We are going to prove Theorem 1 by reducing it to a special case. We will need the following definition. Let G be a commutative algebraic group of dimension n defined over Q̄, and let V ⊆ t_G be a d-dimensional Q̄-linear subspace (we allow d = n). We set τ(G, V) := d/n if dim G ≥ 1, and τ(G, V) := 1 otherwise.
The pair (G, V) is called semistable if for all proper quotients π : G → G' we have τ(G, V) ≤ τ(G', V'), where V' = π_*(V). Since τ(G', V') can take only finitely many possible values, it follows that every pair (G, V) has a semistable quotient. It is also easy to see that if τ(G, V) = 0 or τ(G, V) = 1, then the pair (G, V) is semistable. The following statement is a special case of Theorem 1.

Theorem 5. Let V be a proper linear subspace of t_G such that (G, V) is semistable. Then log γ ∉ V ⊗_Q̄ M for any algebraic non-torsion point γ ∈ G(M)*.

Lemma 6. Theorem 5 implies Theorem 1.

Proof. Let G be a commutative algebraic group and let V be a linear subspace of t_G. We proceed by induction on dim G. If the pair (G, V) is semistable, then Theorem 5 together with the remark in Section 2 trivially imply Theorem 1. In particular, Theorem 1 is true whenever dim G ≤ 1. Assume (G, V) is not semistable and that Theorem 1 is true for all commutative algebraic groups with dimension less than dim G. Let γ ∈ G(M)* be an algebraic point such that log γ ∈ V ⊗ M. Let π : G → G' be a quotient to a semistable pair (G', V'), where V' = π_*(V). Then 1 > τ(G, V) > τ(G', V'), hence dim G' > dim V' and V' is a proper linear subspace of t_{G'}. Theorem 5 implies that the only algebraic points in G'(M)* whose logarithm lies in V' ⊗ M are the torsion points. On the other hand π(γ) ∈ G'(M)*, since π is continuous. The commutativity of the diagram

  G(M)*  ---π--->  G'(M)*
    |log              |log
    v                 v
  t_G ⊗ M --π_*--> t_{G'} ⊗ M

implies that log π(γ) ∈ V' ⊗ M, therefore, by the inductive hypothesis, π(γ) is a torsion point. Let k be the order of π(γ), and let γ' = kγ.

5. Preliminaries

In this section we introduce some notation and standard results which are going to be needed for the proof of Theorem 5. We shall prove Theorem 5 by contradiction.
We assume that there exist a linear subspace V ⊊ t_G and an algebraic non-torsion point γ ∈ G(M)* such that the pair (G, V) is semistable and log γ ∈ V ⊗ M. We will denote n = dim G, d = dim V. Let Γ be the group generated by γ in G(M). We fix an embedding φ : G → P^N of G into an N-dimensional projective space such that φ(Γ) ∩ {X_N = 0} = ∅. (We will use X_0, ..., X_N to denote the coordinates in P^N.) We can always choose such an embedding. Indeed, let ψ : G → P^N be an arbitrary embedding. Then the composition of ψ with a projective automorphism which sends some hyperplane defined over a sufficiently large extension of the field of definition of Γ to the hyperplane {X_N = 0} will give us the desired morphism φ.

We shall identify G with φ(G). We denote Ū := Ḡ ∩ {X_N ≠ 0}, and U := Ū ∩ G. Here Ḡ is the Zariski closure of G in P^N. The restriction morphism O_Ḡ(Ū) → O_Ḡ(U) = O_G(U) is an injection. Therefore we shall identify O_Ḡ(Ū) with a subset of O_G(U). The proof of the following lemma is given in [11, Section 2].

Lemma 7. There exists a Zariski open set U' ⊂ G × G such that Γ × Γ ⊂ U' and such that the group law is represented on U' by a single set of bi-homogeneous polynomials E_0, ..., E_N ∈ Q̄[X_0, ..., X_N, X'_0, ..., X'_N] of bi-degree b, whose coefficients are algebraic integers.

We fix such a set of polynomials and denote E := (E_0 : ... : E_N).

Remark. One can show that for an appropriate embedding φ the group operation can be given by bi-homogeneous polynomials of bi-degree 2. However, since our results are not effective, we are not going to need this fact.

From now on we shall assume that G, V, γ and E are defined over a fixed number field K with a fixed embedding into Q̄. We will abuse notation by denoting the tangent space at e of G again by t_G. It is now an n-dimensional K-linear space. The embedding K → Q̄ → M gives rise to a non-archimedean valuation v on K. Let K_v be the completion of K with respect to that valuation.
Then log γ ∈ t_{G,K_v} = t_G ⊗_K K_v. We take the height of a point in P^N(Q̄) to be its projective height. This means that if α ∈ P^N(L) for some number field L, α = (α_0, ..., α_N), then

h(α_1^{n_1} ... α_k^{n_k}) ≤ c_1 + c_2 (Σ_i |n_i|)^2, for all (n_i) ∈ Z^k.

6. Differentiation

We fix a basis ∂_1, ..., ∂_n of t_G. It is a standard fact in the theory of group varieties that for every ∂ ∈ t_G there exists a unique translation-invariant derivation D(∂). This means that we have a morphism of sheaves of K-linear spaces D(∂) : O_G → O_G such that if U is a Zariski-open subset of G, f, g ∈ O_G(U), α ∈ U(Q̄) and c ∈ K, then

(1) D(∂)c = 0;
(2) D(∂)(fg) = f D(∂)g + g D(∂)f;
(3) (D(∂)f)(α) = D(∂)(f ∘ T_α)(e) = ∂(f ∘ T_α), where T_α(β) := αβ is the translation-by-α morphism.

Since G is commutative, we also have

(4) D(∂)D(∂')f = D(∂')D(∂)f for any two vectors ∂, ∂' ∈ t_G.

From now on we shall use the same notation for a vector in t_G and for its corresponding derivation. For any linear space W ⊆ t_G, K[W] will denote the ring of differential operators with coefficients in K generated by derivations in W. Let Δ_1, ..., Δ_d be a basis of V. We denote

D(∞) := {Δ ∈ K[V] : Δ = Δ_1^{t_1} ... Δ_d^{t_d}, t_i ≥ 0}

and, for an integer T ≥ 0, D(T) := {Δ_1^{t_1} ... Δ_d^{t_d} : t_1 + ... + t_d ≤ T}. Despite the notation, the sets D(∞) and D(T) depend on the choice of basis of V. We shall later fix a convenient basis to work with. For any differential operator Δ ∈ D(∞), Δ = Δ_1^{t_1} ... Δ_d^{t_d}, we denote |Δ| := t_1 + ... + t_d.

Let α ∈ G(K) and let P ∈ K[X_0, ..., X_N] be a homogeneous polynomial of degree D. Assume that Q is another such polynomial with the same degree and such that Q(α) ≠ 0. We define the order of P along V at the point α to be the smallest number t such that there exists Δ ∈ D(t) with

Δ(P/Q)(α) ≠ 0.

We denote the order by ord_{V,α} P. One can check that this definition depends neither on Q, nor on the choice of basis for V. Let P be a homogeneous polynomial of degree D and let Δ ∈ K[V].
Then P ∘ E(X_0, ..., X_N, Y_0, ..., Y_N) / Y_N^{bD}, when considered as a function of Y_0, ..., Y_N, induces an element f of O_G(U)[X_0, ..., X_N], that is, a polynomial in X_0, ..., X_N whose coefficients are regular functions on U. We define

P_Δ(X_0, ..., X_N) := (Δf)(e),

applying Δ to the coefficients and evaluating at e. Then P_Δ is a homogeneous polynomial of degree bD, and it is not difficult to show that ord_{V,α} P_Δ ≥ ord_{V,α} P − |Δ|. This allows us to study the multiplicity of P at a point using the polynomials P_Δ.

If P is any polynomial we write |P|_v for the maximum of the v-adic absolute values of its coefficients. We shall later need an estimate of |P_Δ|_v in terms of |P|_v. We are going to show next that after an appropriate choice of basis for V such an estimate becomes trivial. More precisely, let us call a basis Δ_1, ..., Δ_d ∈ V nice if the following property holds: for any homogeneous polynomial P with |P|_v ≤ 1 and any Δ ∈ D(∞) one has |P_Δ|_v ≤ 1.

Lemma 9. The linear space V has a nice basis.

Proof. Let Δ_1, ..., Δ_d be any basis of V. The K-algebra O_G(U) is finitely generated. Pick a set of generators f_1, ..., f_k ∈ O_G(U) which contains the functions X_i/X_N for all i = 0, ..., N−1 and such that |f_j(e)|_v ≤ 1 for all j = 1, ..., k. Then there exist polynomials F_ij ∈ K[T_1, ..., T_k] such that Δ_i f_j = F_ij(f_1, ..., f_k). Pick Δ'_i = c_i Δ_i, where c_i is any non-zero integer such that |c_i F_ij|_v ≤ 1 for all j. We are going to show that this basis has the required property. Let F ∈ K[T_1, ..., T_k] be such that |F|_v ≤ 1. It is then easy to show that |ΔF(f_1, ..., f_k)(e)|_v ≤ 1 for any choice of Δ = Δ'_1^{t_1} ... Δ'_d^{t_d}. Let now P be a homogeneous polynomial of degree D. Then we have the representation

f(X_0, ..., X_N) := P ∘ E(X_0, ..., X_N, Y_0, ..., Y_N) / Y_N^{bD} = Σ_J A_J X_0^{J_0} ... X_N^{J_N},

where A_J ∈ O_G(U). Since the coefficients of E are algebraic integers and |P|_v ≤ 1, it follows that f, when considered as a polynomial in X_0, ..., X_N, Y_0/Y_N, ...
..., Y_{N−1}/Y_N, has |f|_v ≤ 1. Hence, since the functions Y_i/Y_N lie in the set {f_1, ..., f_k}, one has a representation A_J = Σ_I A_J^I f_1^{I_1} ... f_k^{I_k}, where A_J^I ∈ K, |A_J^I|_v ≤ 1. By the observation in the previous paragraph it follows that |ΔA_J(e)|_v ≤ 1 for all Δ ∈ D(∞). Since P_Δ(X_0, ..., X_N) = Σ_J (ΔA_J)(e) X_0^{J_0} ... X_N^{J_N}, one concludes that |P_Δ|_v ≤ 1 for all Δ ∈ D(∞).

From now on we fix a nice basis Δ_1, ..., Δ_d of V. In order to complete the proof of Theorem 5 we will need to apply a multiplicity estimate. We state here one estimate, given in [2]:

Theorem 10. Let (G, V) be a semistable pair and let γ ∈ G(M). We fix an embedding of G into projective space, and a Zariski open subset U as in Section 5. Let γ_0 ∈ Γ. We fix a basis Δ_1, ..., Δ_d of V. There exists an effectively computable constant c > 0 with the following property. Let S_0, T_0 and D_0 be non-negative integers such that S_0 T_0^d > c D_0^n. Assume in addition that there exists a homogeneous polynomial P of degree D_0, which does not vanish identically on G and such that P_{Δ_1^{t_1} ... Δ_d^{t_d}}(γ_0^s) = 0 for all 0 ≤ s ≤ S_0 and all non-negative integers t_1, ..., t_d with 0 ≤ t_1 + ... + t_d ≤ T_0. Then γ_0 is a torsion point.

One could also use the more general Philippon multiplicity estimate [7], of which the previous theorem is an easy consequence (see Chapter 11, Corollary 4.2 in [6]).

There exists a subgroup

W_0 ⊂ U(K_v) ∩ {α ∈ P^N(K_v) : |X_i/X_N(α)|_v ≤ 1 for all i = 0, ..., N−1}

on which the logarithm is a local isomorphism. Since γ ∈ G(K_v)*, there exists k ∈ N such that γ^k ∈ W_0. Since log γ = (1/k) log(γ^k), in order to prove Theorem 5 one only needs to show that log(γ^k) ∉ V ⊗ K_v. Therefore, without loss of generality, we can assume that γ ∈ W_0. Let D, T, S, l be non-negative integers. Since our result is not effective we do not need to keep track of all the constants appearing in our estimates.
Therefore, in order to simplify notation, we will use the letter c to denote a sufficiently large positive number, which can change from line to line, and which does not depend on D, T, S and l. Let γ_1 := γ^p, where p is the prime which is extended by v, and let γ' = γ_1^l. We denote the group generated by γ' by Γ', and we also set Γ'(S) := {γ'^s : 0 ≤ s ≤ S}. The main steps of the proof are contained in the following three propositions. We defer the proofs of those propositions to the next section.

Proposition 11 (Auxiliary polynomial). Assume that the following inequality holds:

(1) D^n ≥ 2n (deg K) (T + n)^d (S + 1).

Then there exists a homogeneous polynomial P with integer coefficients and degree D, which does not vanish identically on G, such that for all Δ ∈ D(T/2) and all α ∈ Γ'(S) we have

(a) ord_{V,α} P_Δ ≥ T/2,
(b) h(P_Δ) ≤ c(D + T) log(D + T) + cD(lS)^2.

To any polynomial P satisfying the conditions given in the proposition and any Δ ∈ D(∞) we associate a function

f_Δ = P_Δ / (c_Δ X_N^{bD}),

where c_Δ ∈ K is any coefficient of P_Δ such that |P_Δ|_v = |c_Δ|_v.

Lemma 12. The functions f_Δ have the following properties:

(a) |f_Δ(α)|_v ≤ 1 whenever |X_i(α)/X_N(α)|_v ≤ 1 for all i;
(b) if X_N(α) ≠ 0, then h(f_Δ(α)) ≤ log (N+bD choose N) + h(P_Δ) + bD h(α).

Proof. Part (a) is trivial. Part (b) is easily shown using elementary height estimates.

Proposition 13 (Upper bound). Let Δ ∈ D(T/2), let 0 ≤ s ≤ lS, and let P be the polynomial in Proposition 11. Then

log |f_Δ(γ_1^s)|_v ≤ −cST.

Proposition 14 (Lower bound). Let Δ ∈ D(T/2), let 0 ≤ s ≤ lS and let P be the polynomial in Proposition 11. Then either P_Δ(γ_1^s) = 0, or

log |f_Δ(γ_1^s)|_v ≥ −c(lS)^2 D − c(D + T) log(D + T).

Proof of Theorem 5. Choose l = S^{3/2}, T = S^{5n+2}, D = (S^{5nd+2d+2})^{1/n}. Then we can pick S large enough so that inequality (1) is satisfied. Choose a polynomial P with the properties given in Proposition 11. Let Δ ∈ D(T/2), 0 ≤ s ≤ lS.
Then, according to Proposition 13, for $S$ large enough we have

(2) $\log|f_\Delta(\gamma_1^s)|_v \le -cST \le -cS^{5n+3}$.

On the other hand Proposition 14 implies that either $P_\Delta(\gamma_1^s) = 0$ or that

$-\log|f_\Delta(\gamma_1^s)|_v \le c(lS)^2 D + c(D+T)\log(D+T) \le cS^{5+5d+\frac{2d+2}{n}} + c\bigl(S^{5d+\frac{2d+2}{n}} + S^{5n+2}\bigr)\log S.$

At this point we use the assumption that the linear space $V$ is a proper subspace of $t_G$. This means that $d \le n-1$, hence we get the estimate

(3) $-\log|f_\Delta(\gamma_1^s)|_v \le cS^{5n+2} + c(S^{5n-2} + S^{5n+2})\log S \le cS^{5n+2.5}.$

However, if $S$ is large enough, the estimates (2) and (3) cannot simultaneously be satisfied, hence $P_\Delta(\gamma_1^s) = 0$ for all non-negative $s \le lS$ and all $\Delta \in D(T/2)$. Finally we apply Theorem 10, where we set $S_0 := lS$, $T_0 := T/2$, $D_0 := D$ and $\gamma_0 := \gamma_1$. Since $(lS)T^d \sim D^n S^{1/2}$, if we apply the theorem for large enough $S$ we get that $\gamma_1$ is a torsion point. This, however, contradicts the assumption that $\gamma$ is a non-torsion point, which completes the proof of the theorem.

8. Proofs of the main propositions

8.1. The auxiliary polynomial. We are only going to give a sketch of the proof of Proposition 11. It is very similar to the proof of Lemma 4.1 in [11], the only difference being that the heights of the points in $\Gamma'(S)$ are bounded above by $c(lS)^2$ instead of $cS^2$. Condition (a) of the proposition is implied by having $P_\Delta(\gamma'^s) = 0$ for all $\Delta \in D(T)$ and all $s$, $0 \le s \le S$. Each of those equations is a linear equation for the coefficients of $P$, and there are roughly $ST^d$ such equations. The space of polynomials of degree $D$ modulo polynomials vanishing on $G$ has dimension roughly $D^n$. Therefore if $D^n \ge cST^d$ there exists a polynomial satisfying condition (a). Siegel's Lemma tells us that we can pick the polynomial to have integer coefficients and a certain upper bound on the height, determined by the height of the system of linear equations.
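The exponent bookkeeping behind the parameter choice in the proof of Theorem 5 above can be verified directly; a short check, using the choices $l = S^{3/2}$, $T = S^{5n+2}$, $D^n = S^{5nd+2d+2}$ stated there:

```latex
\begin{align*}
  lS\,T^{d} &= S^{3/2}\cdot S\cdot\bigl(S^{5n+2}\bigr)^{d} = S^{5nd+2d+5/2},\\
  D^{n}S^{1/2} &= S^{5nd+2d+2}\cdot S^{1/2} = S^{5nd+2d+5/2},
\end{align*}
```

so indeed $(lS)T^d \sim D^n S^{1/2}$, and for $S$ large enough the hypothesis $S_0 T_0^d > cD_0^n$ of Theorem 10 is satisfied with $S_0 = lS$, $T_0 = T/2$, $D_0 = D$.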
A not very difficult (but long) computation shows that the height of the coefficients of each equation is bounded above by $c(D+T)\log(D+T) + h(\gamma'^s)$. Using Lemma 8 we see that the second term is bounded above by $c(lS)^2$. Applying Siegel's Lemma one shows the existence of a polynomial $P$ with small height which satisfies condition (a). It is then not difficult to show that condition (b) is also satisfied.

8.2. The upper bound. Let $f$ be a $v$-adic analytic function of one variable. We will use the notation $\|f\|_r := \max_{|z|_v \le r}|f(z)|_v$. We shall call a power series $f(z) = \sum_{n=1}^{\infty} a_n z^n$ normal if (1) $\lim_{n\to\infty}|a_n|_v = 0$, and (2) $|a_n|_v \le 1$. Those conditions are equivalent to: (1) $f(z)$ is defined and analytic on $|z|_v \le 1$; (2) $\|f\|_1 \le 1$. The following theorem is due to Mahler [1, Appendix, Theorem 14]:

Theorem 15. Let $f(z)$ be a normal function which has zeros at $x_1, \dots, x_n \in K_v$ of multiplicities $d_1, \dots, d_n$ respectively. Assume $|x_i|_v \le r$, where $0 < r < 1$, and let $d = d_1 + \cdots + d_n$. Then given any $x \in K_v$ such that $|x|_v \le r$ and any $k = 0, 1, 2, \dots$ we have $|f^{(k)}(x)|_v \le r^{d-k}$.

Proof of Proposition 13. Let $w := \log\gamma$, and let $\varphi(z) := f_\Delta(\exp(zw))$. (Here $\exp$ is the inverse of the restriction of the logarithm function to $W_0$.) Since $\gamma \in W_0$ we have that $\varphi$ is defined and analytic for all $|z|_v \le 1$. For any $\alpha \in W_0$, since $|\tfrac{X_i}{X_N}(\alpha)|_v \le 1$ for all $i$, Lemma 12(a) implies that $|f_\Delta(\alpha)|_v \le 1$. We conclude that $\|\varphi\|_1 \le 1$, hence $\varphi$ is normal. By Proposition 11(a), it follows that the order of $\varphi$ at the points $0, pl, 2pl, \dots, plS$ is at least $T/2$. Let $r = |p|_v$. Then, since $0 < r < 1$ and $|spl|_v \le r$, applying Theorem 15 we conclude that for any integer $s$ we have $|f_\Delta(\gamma_1^s)|_v = |\varphi(ps)|_v \le r^{ST/2}$. Taking logarithms we obtain the desired result.

8.3. The lower bound. Proof of Proposition 14. We are going to prove this proposition by means of Liouville's inequality [5, Lemma D.3.3]. Assume that $P_\Delta(\gamma_1^s) \ne 0$.
Then

$h(f_\Delta(\gamma_1^s)) \le bD\,h(\gamma_1^s) + h(P_\Delta) + \log\binom{N+bD}{N} \le cs^2 D + c(D+T)\log(D+T) + cD(lS)^2 + c\log D \le c(lS)^2 D + c(D+T)\log(D+T).$

Since we have assumed that $P_\Delta(\gamma_1^s) \ne 0$, Liouville's inequality implies that

$[K_v : \mathbb{Q}_p]\,\log|f_\Delta(\gamma_1^s)|_v \ge -[K : \mathbb{Q}]\,h(f_\Delta(\gamma_1^s)).$

Combining this inequality with the previous estimate proves the assertion.

Date: October 15, 2010.

(3) For each of the residue classes we have found in Step 2, find the representative with smallest canonical height in $J(\mathbb{Q})$ and check if it comes from a rational point on the curve. Let $A$ be the set of all rational points which we have found in this way. (4) For each rational point $Q \in A$, find a global analytic … (6) Go to Step 2.

Then $\gamma \in H(M)$, where $H := \ker\pi$. If $t_H \subseteq V$, then we pick the algebraic group $B$ consisting of the points $B(\mathbb{Q}) = \{\theta \in G(\mathbb{Q}) : \theta^k \in H(\mathbb{Q})\}$. Clearly $\gamma \in B(\mathbb{Q})$ and $t_B = t_H \subseteq V$, hence Theorem 1 is true. If on the other hand $t_H \not\subseteq V$, then, using the inductive hypothesis, we can apply Theorem 1 to $H$, $\gamma$ and the space $t_H \cap V$. Since $\log\gamma \in (t_H \cap V) \otimes M$, there exists an algebraic subgroup $B_1$ of $H$, such that $t_{B_1} \subseteq t_H \cap V \subseteq V$ and $\gamma \in B_1(\mathbb{Q})$. We then pick $B$ such that $B(\mathbb{Q}) = \{\theta \in G(\mathbb{Q}) : \theta^k \in B_1(\mathbb{Q})\}$. This group satisfies the properties prescribed in Theorem 1. This concludes the proof.

The sum runs over the set $M_L$ consisting of all places of $L$. Here we define $|x|_w := |N_{L_w/\mathbb{Q}_{w'}}(x)|_{w'}^{1/[L:\mathbb{Q}]}$, where $w'$ is the unique place of $\mathbb{Q}$ such that $w \mid w'$. The height $h(P)$ of a homogeneous polynomial $P$ of degree $D$ is the height of its coefficients taken as a point in $\mathbb{P}^A(\mathbb{Q})$, where $A = \binom{N+D}{N} - 1$. The following lemma is proved in [8, Proposition 5]:

Lemma 8. Let $\alpha_1, \dots, \alpha_k \in G(K)$. Then there exist constants $c_1$, $c_2$ such that …

For any non-negative integer $T$ we set $D(T) := \{\Delta : |\Delta| \le T\}$.

7. Proof of Theorem 5

There exists a system of $v$-adic open neighbourhoods of $e$ in $G(K_v)$ consisting of subgroups of $G(K_v)$.
Hence there exists an open (and closed…

References

[1] W. W. Adams, Transcendental numbers in the p-adic domain, Am. J. Math. 88 (1966), 279-308.
[2] A. Baker and G. Wuestholz, Logarithmic forms and Diophantine geometry, New Mathematical Monographs, vol. 9, Cambridge University Press, 2007.
[3] N. Bourbaki, Elements of mathematics. Lie groups and Lie algebras. Part I: Chapters 1-3 (English translation), Hermann, Paris; Addison-Wesley, Reading, Mass., 1975.
[4] N. Bruin and M. Stoll, The Mordell-Weil sieve: proving non-existence of rational points on curves, LMS J. Comput. Math. 13 (2010), 272-306.
[5] M. Hindry and J. H. Silverman, Diophantine geometry. An introduction, Graduate Texts in Mathematics, vol. 201, Springer, 2000.
[6] Yu. V. Nesterenko and P. Philippon (eds.), Introduction to algebraic independence theory, Lecture Notes in Mathematics, vol. 1752, Springer-Verlag, Berlin, 2001. MR1837822 (2002g:11104)
[7] P. Philippon, Lemmes de zéros dans les groupes algébriques commutatifs, Bull. Soc. Math. France 114 (1986), no. 3, 355-383 (French). MR878242 (89c:11111)
[8] J.-P. Serre, Quelques propriétés des groupes algébriques commutatifs, Astérisque 69-70 (1979) (French).
[9] M. Stoll, Independence of rational points on twists of a given curve, Compos. Math. 142 (2006), no. 5, 1201-1214. MR2264661 (2007m:14025)
[10] M. Stoll, Finite descent obstructions and rational points on curves, Algebra Number Theory 1 (2007), no. 4, 349-391. MR2368954 (2008i:11086)
[11] G. Wuestholz, Algebraische Punkte auf analytischen Untergruppen algebraischer Gruppen (Algebraic points on analytic subgroups of algebraic groups), Ann. of Math. (2) 129 (1989), no. 3, 501-517 (German).
Three-body repulsive forces among identical bosons in one dimension

M. Valiente
Institute for Advanced Study, Tsinghua University, 100084 Beijing, China

Abstract: I consider non-relativistic bosons interacting via pairwise potentials with infinite scattering length and supporting no two-body bound states. To lowest order in effective field theory, these conditions lead to non-interacting bosons, since the coupling constant of the Lieb-Liniger model vanishes identically in this limit. Since any realistic pairwise interaction is not a mere delta function, the non-interacting picture is an idealisation indicating that the effect of interactions is weaker than in the case of off-resonant potentials. I show that the leading order correction to the ground state energy for more than two bosons is accurately described by the lowest order three-body force in effective field theory that arises due to the off-shell structure of the two-body interaction. For natural two-body interactions with a short-distance repulsive core and an attractive tail, the emergent three-body interaction is repulsive and, therefore, three bosons do not form any bound states. This situation is analogous to the two-dimensional repulsive Bose gas, when treated using the lowest-order contact interaction, where the scattering amplitude exhibits an unphysical Landau pole. The avoidance of this state in the three-boson problem proceeds in a way that parallels the two-dimensional case. These results pave the way for the experimental realisation of one-dimensional Bose gases with pure three-body interactions using ultracold atomic gases.

DOI: 10.1103/physreva.100.013614 · arXiv: 1902.01643 · Corpus ID: 119409166
PDF: https://arxiv.org/pdf/1902.01643v1.pdf
arXiv:1902.01643v1 [cond-mat.quant-gas], 5 Feb 2019

The theory of few-particle forces in quantum mechanics has a long history that dates back to the early studies of atomic nuclei [1].
It was soon realised that even highly sophisticated nucleon-nucleon potentials, which faithfully reproduced all experimental features of the deuteron [2] and the nucleon-nucleon scattering amplitudes [3], failed to account for the binding energy of the triton [4]. Tuning the short distance details of the nuclear potential, affecting the off-shell elements of the two-nucleon amplitude, moreover, is unnecessary, since these are not measurable and can be traded off in favour of on-shell three-body amplitudes [5]. This is where three-body forces come into play, as they can be used, in conjunction with accurate two-body interactions, to fit three-nucleon data [6], so that heavier nuclei can be investigated in this way. The modern theory of few-body forces has evolved into a systematic, well controlled low-energy expansion of the interparticle interactions [7,8]. Based on the pioneering work of Weinberg on effective nuclear forces [9,10], model independent two-and higher-body interactions have been developed into what is now commonly known as (chiral) effective field theory (EFT). In essence, EFT considers all possible interactions that are consistent with the underlying symmetries of the problem at a given order in perturbation theory, and the bare coupling constants of the theory are renormalised in favour of low-energy, physical observables. These effective theories, which are commonplace in nuclear physics, have slowly made their way into the ultracold atomic realm [11]. In fact, the lowest-order interactions were first introduced in the theory of Bose-Einstein condensates (BECs) using pseudopotentials back in 1957 [12]. Since the original motivation in ultracold gases was to produce BECs with alkali atoms [13,14], which interact very weakly, higher-order EFTs were unnecessary for a long time. 
Three-body interactions in three spatial dimensions, however, were shown to be needed in order to fix the energy of the lowest-lying Efimov state at or near unitarity and avoid the Thomas collapse [15,16], thereby generating great interest in few-body forces in the atomic physics community, which saw the first experimental evidence [17][18][19] for the elusive Efimov states [20]. Within this context, repulsive three-body forces have been proposed as a mechanism for the stabilisation of quantum atomic droplets [21] which, however, turned out to be stabilised by quantum fluctuations, a lowest-order effect in EFT, at least in actual experimental demonstrations [22,23]. The most promising candidate for the observation of effects due to three-particle forces is perhaps a system of ultracold bosons tightly confined to one spatial dimension. Recently, Guijarro et al. proposed using Bose-Bose and Fermi-Bose mixtures to engineer three-body repulsive interactions between dimers [24]. This proposal relies upon the ability to independently and simultaneously tune two different intraspecies interaction strengths, besides the interspecies scattering length. There are also several recent works focusing on attractive three-body forces in one dimension without [25][26][27] and with [28] a reference to a physical implementation, the latter requiring simultaneous tuning of several interaction strengths in a multicomponent Bose system on a tight-binding optical lattice. The trimer may also be observable with trapped ultracold atoms, as shown by Pricoupenko [29], who also developed the pseudopotential treatment of the three-body interaction in Ref. [30], which is most convenient for studies in the position representation. What all of the above works on the one-dimensional three-body interaction agree upon is the important fact that the three-body problem with pure three-body forces in one dimension is kinematically equivalent to a two-dimensional two-body problem at low energies.
Indeed, the former exhibits the same quantum anomaly as the latter, which has recently been investigated experimentally in two different works [31,32], a fact that was the focus of Refs. [25,26] in the present case. The most immediate consequence of this is that, while for attractive interactions three- and many-body bound states appear, for repulsive interactions one needs to deal with an unphysical bound state, i.e. a Landau pole in the scattering amplitude. Fortunately, it is possible to deal with it in the same way as for the two-dimensional problem with two-body interactions thanks to the kinematic equivalence.

In this work, I consider non-relativistic identical bosons interacting via two-body forces exhibiting a zero-energy resonance (infinite scattering length [54]). The model interactions I use have a soft repulsive core at short distances, and an attractive finite-range tail. This type of interaction is justified for effectively reduced-dimensional systems after integrating out the transversal degrees of freedom [33][34][35]. The particular form of the two-body interactions $V(x) = V(x_i - x_j)$ between particles $i$ and $j$ that I will use is given by

$V(x) = V_0 e^{-\lambda_0 x^2} + V_1 e^{-\lambda_1 x^2}, \qquad (1)$

where $V_0$ ($<0$) and $V_1$ ($>0$) give, respectively, the strength of the attractive tail and soft core of the interaction, $\lambda_0$ and $\lambda_1$ determine their spatial spread, and I shall denote by $x_0$, with $x_0^2 = \log|\lambda_1 V_1/\lambda_0 V_0|/(\lambda_1 - \lambda_0)$, the length scale determining the potential minimum. In what follows, I choose these parameters in such a way that the two-boson scattering length diverges ($1/a = 0$), i.e., such that the zero-energy solution of the stationary two-body Schrödinger equation in the relative coordinate,

$-\frac{\hbar^2}{m}\psi''(x) + V(x)\psi(x) = 0, \qquad (2)$

has the asymptotic form $\psi(x) \propto 1$ as $x \to \pm\infty$, and such that there are no two-body bound states.
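As a quick consistency check on the definition of $x_0$ below Eq. (1), the stationary point of the double-Gaussian potential can be verified numerically. The sketch below uses the parameter values quoted in the caption of Fig. 1 ($\lambda_1/\lambda_0 = 2$, $mV_0/\hbar^2\lambda_0 = -5$, $V_1/V_0 = -1.59151239$) in units where $\lambda_0 = 1$ and $\hbar^2/m = 1$; the choice of units is an illustrative assumption, not prescribed by the text.

```python
import math

# Parameters of the double-Gaussian potential (1), in units with
# lambda_0 = 1 and hbar^2/m = 1 (values from the caption of Fig. 1).
lam0, lam1 = 1.0, 2.0
V0 = -5.0                  # attractive tail
V1 = -1.59151239 * V0      # positive: repulsive soft core

def V(x):
    """Two-body potential V(x) = V0 exp(-lam0 x^2) + V1 exp(-lam1 x^2)."""
    return V0 * math.exp(-lam0 * x**2) + V1 * math.exp(-lam1 * x**2)

def dV(x):
    """Analytic derivative of V."""
    return -2.0 * x * (lam0 * V0 * math.exp(-lam0 * x**2)
                       + lam1 * V1 * math.exp(-lam1 * x**2))

# Closed-form location of the stationary point:
# x0^2 = log|lam1 V1 / (lam0 V0)| / (lam1 - lam0)
x0 = math.sqrt(math.log(abs(lam1 * V1 / (lam0 * V0))) / (lam1 - lam0))

print(x0)       # ~1.076 in units of lambda_0^{-1/2}
print(dV(x0))   # ~0: x0 is a stationary point of V
```

With these values the stationary point sits at $x_0 \approx 1.076$ (in units of $\lambda_0^{-1/2}$) and $V(x_0) < 0$, i.e. it is the bottom of the attractive well.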
The effective two-body interaction, to lowest order and in the momentum representation, is given by a vanishing Lieb-Liniger coupling constant $g = -2\hbar^2/ma = 0$ [36]. In the two-boson sector, the next-order interaction involves the effective range $r$ [37], whose effect is identically zero at zero energy. To see this, and to analyse the three-body problem, it is most convenient to abandon the collision-theoretical approach and instead place the few-body systems on a finite line of length $L$ with periodic boundary conditions. The analysis of the finite-size spectrum can be used to extract low-energy scattering amplitudes [38,39], and has come to be the method of choice in modern studies of scattering processes, from low-energy nuclear physics [40] to lattice QCD [41]. In 1D, the eigenenergies $E = \hbar^2 k^2/m$ at zero total momentum for two bosons can be calculated from the equation

$k = \frac{2\pi n}{L} - \frac{2}{L}\theta(k), \quad n \in \mathbb{Z}, \qquad (3)$

where $\theta(k)$ is the even-wave scattering phase shift in 1D [42]. For the ground state ($n = 0$), since $1/a = 0$, we obtain the solution $k = 0$ and therefore, as claimed, the effective range has no effect on it. For the first excited state ($n = 1$), however, using $k\tan\theta(k) = 1/a + rk^2/2 + O(k^4)$ [42], the energy shift with respect to the non-interacting energy $E_1^{(0)}$ is given by $\Delta E_1 \approx -2E_1^{(0)} r/L = O(L^{-3})$. Therefore, the lowest-order correction for $N \ge 3$ particles is given by the contribution of effective three-body forces, which naïvely scales as $O(L^{-2})$ for both ground and low-lying excited states. The bare lowest-order three-body interaction $V_3^{\rm LO}$ is obtained by expanding a 1D hyperspherically symmetric three-body potential to zeroth order in the hyperspherical momentum, and corresponds to a contact interaction in the position representation of the form

$V_3^{\rm LO}(x_1, x_2, x_3) = g_3\,\delta(x_1 - x_2)\,\delta(x_2 - x_3), \qquad (4)$

where $g_3$ is the bare three-body interaction strength.
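The finite-size shift of the first excited state can be illustrated by solving the quantization condition (3) numerically. The sketch below assumes $\hbar = m = 1$ and illustrative values of $L$ and $r$ (not taken from the paper); at $1/a = 0$ the phase-shift expansion $k\tan\theta(k) = rk^2/2$ gives $\theta(k) = \arctan(rk/2)$.

```python
import math

# Units hbar = m = 1; illustrative (hypothetical) values:
L = 100.0   # box size
r = 1.0     # two-body effective range

def theta(k):
    # On resonance (1/a = 0): k tan(theta) = r k^2/2  =>  theta = arctan(r k/2)
    return math.atan(0.5 * r * k)

# Solve Eq. (3), k = 2 pi n/L - (2/L) theta(k), for n = 1 by fixed-point
# iteration (the map is a contraction for r/L << 1).
k = 2.0 * math.pi / L
for _ in range(100):
    k = 2.0 * math.pi / L - (2.0 / L) * theta(k)

E1_free = (2.0 * math.pi / L) ** 2   # non-interacting energy hbar^2 k^2/m
E1 = k ** 2
rel_shift = E1 / E1_free - 1.0
print(rel_shift)   # close to -2 r/L, i.e. the O(L^-3) shift quoted above
```

The relative shift comes out close to $-2r/L$, reproducing $\Delta E_1 \approx -2E_1^{(0)}r/L$ up to $O((r/L)^2)$ corrections.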
For a pure three-body interaction, the three-body scattering amplitude can be obtained directly through the Lippmann-Schwinger equation, since the Faddeev decomposition is unnecessary. The three-body T-matrix $T_3(z)$ for the interaction (4) is readily obtained as $\langle k_1', k_2', k_3'|T_3(z)|k_1, k_2, k_3\rangle = 2\pi\delta(K - K')\,t_3(z)$, with $K = k_1 + k_2 + k_3$ and $K' = k_1' + k_2' + k_3'$ the conserved total momentum. The constant $t_3(z)$, after setting the total momentum to zero, is given by

$t_3(z) = \left[g_3^{-1} - I(z)\right]^{-1}, \qquad (5)$

where

$I(z) = \int \frac{dq_1\,dq_2\,dq_3}{(2\pi)^2}\,\frac{\delta(q_1 + q_2 + q_3)}{z - \frac{\hbar^2}{2m}(q_1^2 + q_2^2 + q_3^2)}. \qquad (6)$

In order to calculate the coupling constant $g_3$, the integral $I(z)$, Eq. (6), must be regularised. I use a hard cutoff $\Lambda$ in the hyperradial integral, by changing variables to Jacobi coordinates $x = (q_1 - q_2)/\sqrt{2}$, $y = \sqrt{2/3}\,[q_3 - (q_1 + q_2)/2]$, and defining the hyperradial momentum $\rho = \sqrt{x^2 + y^2}$. The real part of $I(z)$ for $z = E + i0^+$ ($E > 0$) is given, in the limit $\Lambda \to \infty$, by

$\mathrm{Re}\,I(z) = -\frac{m}{2\pi\sqrt{3}\,\hbar^2}\,\log\frac{\Lambda^2}{2mE/\hbar^2}. \qquad (7)$

For attractive interactions, the T-matrix is renormalised by fixing the three-body binding energy $E_B = -|E| \equiv -\hbar^2 Q_*^2/2m$ while, for repulsion, $E_B$ marks the location of a (unphysical) Landau pole, completely equivalent to its two-body two-dimensional counterpart [43]. Here, $Q_*$ plays the role of a momentum scale beyond which the EFT description breaks down. As noted by Beane in Ref. [43] for the 2D case, the three-body scattering length [24] is not a natural scale for repulsive interactions, and I shall refer to the momentum scale $Q_*$ only. Putting together these considerations, the coupling constant $g_3 = g_3(\Lambda)$ is given by

$\frac{1}{g_3(\Lambda)} = \frac{m}{\sqrt{3}\,\pi\hbar^2}\,\log\frac{Q_*}{\Lambda}. \qquad (8)$

Since I will analyse the three-body problem on a two-body resonance using diagonalisation in a periodic box, I derive now the finite-size scaling of the three-body energy with three-body interactions. This is easiest to do in the momentum representation.
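A minimal numerical check, in units $\hbar = m = 1$, that the renormalisation works as stated: the combination $1/g_3(\Lambda) - \mathrm{Re}\,I(E)$ entering the T-matrix (5) is cutoff-independent once Eqs. (7) and (8) are combined. The values of $Q_*$ and $E$ below are arbitrary illustrative choices.

```python
import math

# Units hbar = m = 1; illustrative renormalisation scale and energy:
Q_star = 420.0
E = 1.0

def inv_g3(cutoff):
    # Bare coupling, Eq. (8): 1/g3(Lambda) = (m/(sqrt(3) pi hbar^2)) log(Q*/Lambda)
    return math.log(Q_star / cutoff) / (math.sqrt(3.0) * math.pi)

def re_I(energy, cutoff):
    # Real part of the bubble integral, Eq. (7)
    return -math.log(cutoff**2 / (2.0 * energy)) / (2.0 * math.pi * math.sqrt(3.0))

# The inverse T-matrix, Eq. (5), must not depend on the cutoff Lambda:
v1 = inv_g3(1.0e3) - re_I(E, 1.0e3)
v2 = inv_g3(1.0e6) - re_I(E, 1.0e6)
print(v1, v2)   # equal up to rounding: the logs of Lambda cancel
```

Analytically, $1/g_3(\Lambda) - \mathrm{Re}\,I(E) = \frac{m}{\sqrt{3}\pi\hbar^2}\log\bigl(Q_*/\sqrt{2mE/\hbar^2}\bigr)$, so the $\Lambda$-dependence drops out exactly.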
The stationary Schrödinger equation $(\hat{H}_0 + V_3^{\rm LO})|\psi\rangle = E|\psi\rangle$, with $\hat{H}_0$ the non-relativistic kinetic energy operator for three particles, is solved by finding the poles of the Green's function for total momentum $K = 0$ in a box of length $L$. After writing the energy as $E = (2\pi)^2\hbar^2\lambda^2/mL^2$, and defining an integer cutoff $n_\Lambda$ via $\Lambda = 2\pi n_\Lambda/L$, the following equation for $\lambda^2$ is found:

$\sum_{n_1, n_2}{}' \frac{1}{n_1^2 + n_2^2 + n_1 n_2 - \lambda^2} - \frac{4\pi}{\sqrt{3}}\log n_\Lambda + \frac{4\pi}{\sqrt{3}}\log\frac{Q_* L}{2\pi} = 0, \qquad (9)$

where the primed sum restricts the values of $(n_1, n_2)$ to $n_1^2 + n_2^2 + n_1 n_2 < n_\Lambda^2$, and the limit $n_\Lambda \to \infty$ is implied. From Eq. (9), it is simple to extract the weak-coupling ($\lambda^2 \ll 1$) expansion, as in previous EFT-based approaches to the finite-size spectrum for two interacting particles [40,43,44]. Expanding the sum in Eq. (9), I find

$\sum_{n_1, n_2}{}' \frac{1}{n_1^2 + n_2^2 + n_1 n_2 - \lambda^2} - \frac{4\pi}{\sqrt{3}}\log n_\Lambda = -\frac{1}{\lambda^2} + \sigma_1 + \sum_{j=1}^{\infty}\sigma_{j+1}\lambda^{2j}, \qquad (10)$

$\sigma_1 = \sum_{\mathbf{n} \ne 0}{}' \frac{1}{n_1^2 + n_2^2 + n_1 n_2} - \frac{4\pi}{\sqrt{3}}\log n_\Lambda, \qquad (11)$

$\sigma_j = \sum_{\mathbf{n} \ne 0} \frac{1}{(n_1^2 + n_2^2 + n_1 n_2)^j}. \qquad (12)$

The values of the first two sums above are calculated to be $\sigma_1 = 3.96156\ldots$, $\sigma_2 = 8.7115\ldots$. Using the expansion (10) of the sum in Eq. (9) to obtain a weak-coupling expansion in the renormalised coupling constant $g_R$, given by [55]

$g_R = \frac{\sqrt{3}}{4\pi}\,\frac{1}{\log\frac{Q_* L}{2\pi}}, \qquad (13)$

the ground state energy $E_0$ of the three-body system reads

$E_0 = \frac{4\pi^2\hbar^2}{mL^2}\left[g_R - \sigma_1 g_R^2 + (\sigma_1^2 - \sigma_2) g_R^3 + O(g_R^4)\right]. \qquad (14)$

As seen above, the naïve scaling of the energy ($\propto L^{-2}$) is modified by the quantum anomaly [25,26] in the form of logarithmic corrections. In the three-body problem under the resonant and no-bound-state conditions, the contribution of the lowest-order effective three-body force to the ground state energy, Eq. (14), is dominant. However, higher-order effects are present, and in order to extract the three-body momentum scale $Q_*$ accurately, a next-order term of $O(L^{-4})$ must be included.
To see what this term corresponds to, I write the three-body effective range correction to the scattering amplitude by simply replacing

$\frac{1}{g_R} \to \frac{1}{g_R} - r_3^2 k^2, \qquad (15)$

which is completely analogous to the problem of 2D two-body scattering [47]. The correction to the energy due to the effective range is given by $\Delta E_{r_3} = 16\pi^4 r_3^2 g_R^3\,\hbar^2/mL^4$. This results in a two-parameter fit that needs at least two numerical or experimental data points.

In Fig. 1, I plot the ground state energy of three particles in a periodic box as a function of the system's size for a resonant interaction. I extract the ultraviolet (UV) scale $Q_*$ by fitting Eq. (14), including the next-to-leading-order correction $\propto g_R^3 L^{-4}$ due to the three-body range, to the numerical data. The agreement between the theory and the data is remarkably good. From Fig. 1, one sees that the effective three-body interaction is purely repulsive, with fitted values $Q_* x_0 = 420$ and $r_3/x_0 = 27.8$. Other choices of the particular functional form of the potential (1), and other particular values of the potential's parameters that keep the scattering length divergent, yield qualitatively identical results.
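The coefficients of the weak-coupling expansion (14) can be cross-checked by solving a truncated form of Eqs. (9)-(10) directly: keeping only $-1/\lambda^2 + \sigma_1 + \sigma_2\lambda^2 + 1/g_R = 0$ and iterating for $\lambda^2$ should agree with the series $g_R - \sigma_1 g_R^2 + (\sigma_1^2 - \sigma_2)g_R^3$ up to $O(g_R^4)$. A sketch, using the $\sigma$ values quoted in the text and an arbitrary small $g_R$:

```python
# sigma_1, sigma_2 as quoted in the text; g_R is an arbitrary small value.
sigma1 = 3.96156
sigma2 = 8.7115
g_R = 0.01

# Truncated pole equation: 1/lambda^2 = 1/g_R + sigma_1 + sigma_2 lambda^2,
# solved by fixed-point iteration (a contraction for small g_R).
lam2 = g_R
for _ in range(200):
    lam2 = 1.0 / (1.0 / g_R + sigma1 + sigma2 * lam2)

# Perturbative series for lambda^2, i.e. the bracket in Eq. (14):
series = g_R - sigma1 * g_R**2 + (sigma1**2 - sigma2) * g_R**3

print(lam2, series)   # agree to O(g_R^4)
```

The two results differ only at $O(g_R^4)$, confirming the signs and coefficients $-\sigma_1$ and $(\sigma_1^2 - \sigma_2)$ in Eq. (14).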
The resonant condition can be satisfied by either using transversal confinement with anharmonic, anisotropic traps [45], which can reach effectively infinite scattering lengths [45,46], magnetic Feshbach resonances [48], for example for 133 Cs, which can set the 3D scattering length to zero [49], and therefore the effective 1D scattering length to infinity via dimensional reduction [50], or a combination thereof. As for the observation of tangible effects of the threebody forces, I shall consider a realistic experimental scenario, in particular Ref. [49]. There, they effectively confine a BEC of 133 Cs atoms to one dimension by applying a transversal 2D optical lattice with effective harmonic length a ⊥ = 1440a 0 , with a 0 the Bohr radius, and the system consists of an array of quasi-1D tubes. The longitudinal harmonic length is given by a = 8310a 0 . The central tube, in the repulsive weak coupling regime relevant to this work, has only a few atoms, N = 8 -11. A 1D resonance (1/a = 0) is obtained for a magnetic field B = 17.119 G, at which point they measure the lowest longitudinal breathing mode and show that indeed it corresponds to the non-interacting limit within experimental uncertainty. In the model interaction of Eq. (1), the most relevant length scale is given by x 0 , which marks the position of the potential minimum. Typical interatomic interactions have x 0 ∼ 5 -10Å [51]. Using these values for x 0 , the example given above, which shows a sizeable effect of the three-body force, yet remaining in the weak coupling limit, corresponds to a UV three-body momentum scale Q * ∼ 42 -84Å −1 (Q * x 0 ≈ 420). With the density in the threebody sector ρ 3b = 3/L ∼ 10 −2 -10 −1Å −1 , the relevant dimensionless constant κ = Q * /ρ 3b takes on values in the range κ ∼ 420 -8400. Since the three-body force is weak and the particle numbers in the experiment of Ref. 
[49] are small, one can estimate the central density $\rho(0)$ in the central tube by using the non-interacting ground state in a harmonic well. This gives $\rho(0) \approx N/\sqrt{\pi a^2}$, with values in the range $\rho(0) \sim 3.2\,\mu{\rm m}^{-1}$-$4.4\,\mu{\rm m}^{-1}$, implying that $Q_*/\rho(0) \sim 10^5$-$3\times 10^5$, corresponding to a three-body coupling constant reduced by a factor of about 2-5 (see Eq. (13)), very similar in magnitude to the example shown here. It would be, nevertheless, beneficial to have higher particle numbers in the tube, of the order of 100, which would mildly increase the three-body coupling constant. Since the focus of Ref. [49] was the strongly interacting limit, it would be interesting to explore the 1D resonant regime in more detail, and study the shift in breathing mode frequency due to the residual three-body interactions. Another possible experimental observation of the effects of three-body forces would be through measurements of the speed of sound, which can be probed using magnetic field gradients [52] or Bragg spectroscopy [53] in quasi-1D ultracold atomic systems.

To conclude, I have studied the three-body problem with identical bosons in one spatial dimension interacting via semi-realistic pairwise interactions and found that, on resonance, the leading order contribution to the three-body scattering amplitude corresponds to an effective, repulsive three-body interaction. I have analyzed the problem in a finite box with periodic boundary conditions, which allows for the extraction of the three-body collisional parameters, and hence the scattering amplitude, via the ground state energy of the system. I have also shown that under rather usual experimental conditions, the effects of the three-body force should be observable. It is also worth noticing that the resonant condition is not stringent, and these effects are sizeable slightly away from the resonance, even on the slightly attractive side of it.
Moreover, the energy shifts due to four-and fivebody forces naïvely scale as L −3 and L −4 , respectively, and may play a non-trivial role in the equation of state of the resonant Bose gas. Many-body physics under these conditions are yet to be explored, and open up a plethora of new possibilities with one-dimensional quantum manyparticle systems beyond the Lieb-Liniger model. I would like to thank N. T. Zinner, R. E. Barfknecht, M. Mikkelsen and D. S. Petrov for useful discussions. ∈ = 1.076 . . ., λ1/λ0 = 2, mV0/ 2 λ0 = −5, V1/V0 = −1.59151239, corresponding to inverse scattering length x0/a ≈ −3.5 · 10 −7 . Small blue dots correspond to the numerical solution of the three-body Schrödinger equation with potential (1); so do the large blue dots, using a larger basis set for convergence; the red dashed line is the fit of Eq.(14) to the data for Lλ[4.5, 10], including the effective range correction (see text). Inset: same as the main figure, but for mL 2 E/ 2 . . H Yukawa, Proc. Phys. Math. Soc. Japan. 1748H. Yukawa, Proc. Phys. Math. Soc. Japan 17, 48 (1935). . K Erkelenz, Phys. Rep. 13191K. Erkelenz, Phys. Rep. 13C, 191 (1974). . R A Bryan, B L Scott, Phys. Rev. 135434R. A. Bryan and B. L. Scott, Phys. Rev. 135, B434 (1964). . A Bömelburg, Phys. Rev. C. 28403A. Bömelburg, Phys. Rev. C 28, 403 (1983). . H. -W Hammer, A Nogga, A Schwenk, Rev. Mod. Phys. 85197H. -W. Hammer, A. Nogga and A. Schwenk, Rev. Mod. Phys. 85, 197 (2013). . A Nogga, D Hüber, H Kamada, W Glöcke, Phys. Lett. B. 40919A. Nogga, D. Hüber, H. Kamada and W. Glöcke, Phys. Lett. B 409, 19 (1997). . E Epelbaum, H. -W Hammer, U. -G Meißner, Rev. Mod. Phys. 811773E. Epelbaum, H. -W. Hammer and U. -G. Meißner, Rev. Mod. Phys. 81, 1773 (2009). . E Epelbaum, H Krebs, U G Meißner, Phys. Rev. Lett. 115122301E. Epelbaum, H. Krebs and U. G. Meißner, Phys. Rev. Lett. 115, 122301 (2015). . S Weinberg, Phys. Lett. B. 251288S. Weinberg, Phys. Lett. B 251, 288 (1990). . S Weinberg, Nucl. Phys. B. 631447S. 
The hyperbolic maximum principle approach to the construction of generalized convolutions

Rúben Sousa, Manuel Guerra, Semyon Yakubovich

arXiv:1901.10357 (29 Jan 2019) · DOI: 10.1201/9780429320026-6

Abstract. We introduce a unified framework for the construction of convolutions and product formulas associated with a general class of regular and singular Sturm-Liouville boundary value problems. Our approach is based on the application of Sturm-Liouville spectral theory to the study of the associated hyperbolic equation. As a by-product, an existence and uniqueness theorem for degenerate hyperbolic Cauchy problems with initial data at a parabolic line is established. The mapping properties of convolution operators generated by Sturm-Liouville operators are studied. Analogues of various notions and facts from probabilistic harmonic analysis are developed on the convolution measure algebra. Various examples are presented which show that many known convolution-type operators, including those associated with the Hankel, Jacobi and index Whittaker integral transforms, can be constructed using this general approach.
Keywords: generalized convolution, product formula, hyperbolic Cauchy problem, parabolic degeneracy, Sturm-Liouville spectral theory, maximum principle

Introduction

Given a Sturm-Liouville operator on an interval of the real line, it is well known that its eigenfunction expansion gives rise to an integral transform which shares many properties with the ordinary Fourier transform [18, 52]. Since various standard special functions are solutions of Sturm-Liouville equations, the class of integral transforms of Sturm-Liouville type includes, as particular cases, many common integral transforms (Hankel, Kontorovich-Lebedev, Mehler-Fock, Jacobi, Laguerre, etc.). The Fourier transform lies at the heart of the classical theory of harmonic analysis.
This naturally raises a question: is it possible to generalize the main facts of harmonic analysis to integral transforms of Sturm-Liouville type? Starting from the seminal works of Delsarte [17] and Levitan [33] it was noticed that the key ingredient for developing of such a generalized harmonic analysis is the so-called product formula. We say that an indexed family of complex-valued functions {w λ } on an interval I ⊂ R has a product formula if for each x, y ∈ I there exists a complex Borel measure ν x,y (independent of λ) such that w λ (x) w λ (y) = I w λ dν x,y (λ ∈ Λ). (1.1) Product formulas naturally lead to generalized convolution operators. To fix ideas, let ℓ(u) = 1 r −(pu ′ ) ′ + qu be a usual Sturm-Liouville differential expression defined on the interval I, and let (F h)(λ) := I h(x) w λ (x) dm(x) be a Sturm-Liouville type integral transform, where the w λ are solutions of ℓ(w) = λw (λ ∈ C). If {w λ } has a product formula, then we can define a generalized (Sturm-Liouville type) convolution operator * by (f * g)(x) := I I f dν x,y g(y) dm(y). ( 1.2) It is not difficult to show that, under reasonable assumptions, the property F (f * g) = (F f )·(F g) holds for this convolution operator; this means that the analogue of one of the basic identities in harmonic analysis -the Fourier convolution theorem -is satisfied by the generalized convolution. Consider now the associated hyperbolic partial differential equation 1 r(x) −∂ x p(x) ∂ x f (x, y) + q(x)f (x, y) = 1 r(y) −∂ y p(y) ∂ y f (x, y) + q(y)f (x, y) . (1.3) If the kernel of the Sturm-Liouville transform is defined via some initial condition w λ (a) = 1, then the product f (x, y) = w λ (x)w λ (y) is a solution of (1.3) satisfying the boundary condition f (x, a) = w λ (x). Studying the properties of the associated hyperbolic equation is therefore a natural strategy for proving the existence of a product formula and extracting information about the measure ν x,y . 
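For orientation, the classical Hankel case already realizes (1.1) explicitly: for w_λ(x) = J0(√λ x), Gegenbauer's addition theorem gives a product formula in which ν_{x,y} is a probability measure supported on [|x − y|, x + y]. A small numerical check (my illustration; the identity itself is a classical special-function fact, not a result of this paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Hankel-case instance of the product formula (1.1): for w_lambda(x) = J0(sqrt(lambda) x),
# Gegenbauer's addition theorem gives
#   J0(kx) J0(ky) = (1/pi) Int_0^pi J0(k * sqrt(x^2 + y^2 - 2xy cos(phi))) dphi,
# i.e. nu_{x,y} is a probability measure supported on [|x - y|, x + y].

def rhs(k, x, y):
    g = lambda phi: j0(k * np.sqrt(x**2 + y**2 - 2 * x * y * np.cos(phi)))
    return quad(g, 0.0, np.pi)[0] / np.pi

k, x, y = 2.0, 1.3, 0.7
lhs = j0(k * x) * j0(k * y)
assert abs(lhs - rhs(k, x, y)) < 1e-7
```

Here the measure ν_{x,y} is the push-forward of the normalized angle measure dφ/π under φ ↦ √(x² + y² − 2xy cos φ), which is exactly the Bessel-Kingman convolution kernel appearing later among the examples.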
An especially interesting case is that where {ν x,y } turns out to be a family of probability measures (satisfying appropriate continuity assumptions). Indeed, in this case one can show that the convolution (1.2) gives rise to a Banach algebra structure in the space of finite complex Borel measures in which various probabilistic concepts and properties can be developed in analogy with the classical theory [6,53]. Establishing explicit product formulas, or even proving their existence, has been recognized as a difficult problem [15,12]. Nevertheless, using the maximum principle for hyperbolic equations [60], it was shown by Levitan [34] (and, under weakened assumptions, by Chebli [11] and Zeuner [63]) that this probabilistic property of the product formula holds for a general family of Sturm-Liouville differential expressions on I = [0, ∞) of the form ℓ(u) = − 1 A (Au ′ ) ′ . This family of Sturm-Liouville operators includes, as important particular cases, the generators of the Hankel transform and the (Fourier-)Jacobi transform; these cases are noteworthy due to the fact that the explicit expression for the measure in the product formula can been determined using results from the theory of special functions (see . Various examples show that the probabilistic property of the product formula holds only for a restricted class of Sturm-Liouville operators [37,45]; this is connected with the fact that the hyperbolic maximum principle requires rather strong assumptions on the coefficients. Notwithstanding, the recent work [46,47] of the authors on the index Whittaker transform made it apparent that there is room for generalization of the results of [11,34,63]. In fact, the family of Sturm-Liouville operators considered in these works only includes operators for which the equation (1.3) is uniformly hyperbolic on [0, ∞) 2 ; a consequence of this is that, under their assumptions, the support supp(ν x,y ) of the measures in the product formula is always compact. 
In contrast, the case of the index Whittaker transform provides an example of a product formula whose measures ν_{x,y} have the probabilistic property and satisfy supp(ν_{x,y}) = [0, ∞) for x, y > 0; here the associated hyperbolic equation (1.3) is parabolically degenerate at the boundaries x = 0 and y = 0. (The index Whittaker transform is generated by the Sturm-Liouville expression x² u'' + (1 + 2(1 − α)x) u' on I = [0, ∞), and its product formula, which is known in closed form, is given in Example 8.6.) The goal of this work is to introduce a unified framework for the construction of Sturm-Liouville type convolution operators associated with possibly degenerate hyperbolic equations. We will consider a Sturm-Liouville differential expression of the form

ℓ = − (1/r) (d/dx) ( p (d/dx) ),  x ∈ (a, b)   (1.4)

(−∞ ≤ a < b ≤ ∞), where p and r are (real-valued) coefficients such that p(x), r(x) > 0 for all x ∈ (a, b) and p, p', r and r' are locally absolutely continuous on (a, b). Concerning the behavior of the coefficients at the boundaries x = a and x = b, we will assume, respectively, the integral conditions (1.5) and (1.6), formulated in terms of an arbitrary point c ∈ (a, b). These conditions mean that a is a regular or entrance boundary and b is a natural boundary for the operator ℓ. The notions of regular, entrance and natural boundary refer to the Feller classification of boundaries, which is recalled in Remark 2.10, where we also give some comments on the role of conditions (1.5)-(1.6). The point of departure is the study of the Cauchy problem for the possibly degenerate hyperbolic equation

(1/r(x)) ∂_x ( p(x) ∂_x f(x, y) ) = (1/r(y)) ∂_y ( p(y) ∂_y f(x, y) ).

Under the assumption that the product p(x)r(x) of the coefficients of (1.4) is an increasing function, we prove an existence and uniqueness theorem for this Cauchy problem, based on the spectral theory of Sturm-Liouville operators.
We then give a sufficient condition for the maximum principle to hold for the hyperbolic equation and, as a corollary, we obtain the positivity-preserving property of the solution of the Cauchy problem. Our existence theorem (and the positivity result) covers many hyperbolic equations with initial data on the parabolic line which are outside the scope of the classical theory, and for which the problem of well-posedness of the Cauchy problem was, to the best of our knowledge, open. In fact, given that our results depend heavily on the assumption that the left boundary a is of entrance type (cf. Remark 2.10), they indicate that the well-posedness of the degenerate problem with initial line y = a depends on the Feller boundary classification of ℓ at the endpoint a. If the maximum principle holds for the hyperbolic equation associated with ℓ, then the solution of the hyperbolic Cauchy problem can be written as f(x, y) = ∫_{[a,b)} h dν_{x,y}, where h(x) = f(x, a) is the initial condition and {ν_{x,y}} is a family of finite positive Borel measures on [a, b). Formally, this suggests that the product formula (1.1) should hold for the kernel w_λ of the Sturm-Liouville transform. It turns out that (1.1) indeed holds and that the ν_{x,y} are probability measures, but the proof requires some effort, especially when the Cauchy problem is parabolically degenerate [48]. We then define the generalized convolution by (1.2), so that the expected convolution theorem F(f * g) = (F f) · (F g) holds. Moreover, the Young inequality for the L^p-spaces with respect to the weighted measure r(x)dx is valid for the convolution (1.2), demonstrating that the mapping properties of the generalized convolution structure resemble those of the ordinary convolution. A fundamental tool for studying the continuity and mapping properties of the generalized convolution is the extension of the Sturm-Liouville transform to complex measures, defined by µ̂(λ) = ∫_{[a,b)} w_λ(x) µ(dx).
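Everything above is explicit in the simplest non-degenerate case ℓ = −d²/dx² on (0, ∞) with p = r = 1 and a Neumann condition at a = 0: w_λ(x) = cos(√λ x), the product formula reduces to cos u cos v = ½[cos(u − v) + cos(u + v)], so ν_{x,y} = ½(δ_{|x−y|} + δ_{x+y}), and f(x, y) = ∫ h dν_{x,y} is d'Alembert's solution of the wave equation. A quick numerical sanity check of these classical facts (my illustration, not part of the paper's general construction):

```python
import numpy as np

# ell = -d^2/dx^2 on (0, oo) with Neumann condition at 0: w_lambda(x) = cos(sqrt(lambda) x),
# nu_{x,y} = (delta_{|x-y|} + delta_{x+y}) / 2, and f(x,y) = Int h dnu_{x,y} solves
# f_xx = f_yy with f(x,0) = h(x) and f_y(x,0) = 0 (d'Alembert's formula).

h = lambda u: np.exp(-u**2)                       # smooth initial datum
f = lambda x, y: 0.5 * (h(abs(x - y)) + h(x + y))

# product formula (1.1) for the cosine kernel
lam, x, y = 3.0, 1.1, 0.4
w = lambda u: np.cos(np.sqrt(lam) * u)
assert abs(w(x) * w(y) - 0.5 * (w(abs(x - y)) + w(x + y))) < 1e-12

# initial conditions and the wave equation f_xx = f_yy (central differences)
d = 1e-4
assert abs(f(x, 0.0) - h(x)) < 1e-15
assert abs((f(x, d) - f(x, -d)) / (2 * d)) < 1e-7     # f_y(x, 0) = 0
fxx = (f(x + d, y) - 2 * f(x, y) + f(x - d, y)) / d**2
fyy = (f(x, y + d) - 2 * f(x, y) + f(x, y - d)) / d**2
assert abs(fxx - fyy) < 1e-5
```

In this case the convolution (1.2) is (up to the even/odd bookkeeping) the classical cosine convolution, and the maximum principle statement reduces to the familiar fact that d'Alembert's solution is an average of boundary values.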
Actually, if we define the convolution of two Dirac measures by δ_x * δ_y = ν_{x,y} and then define the convolution µ * ν of two complex measures so that (µ, ν) → µ * ν is weakly continuous, then the space M_C[a, b) of finite complex measures on [a, b) becomes a convolution measure algebra for which the Sturm-Liouville transform is a generalized characteristic function, in the sense that the transform of µ * ν is the pointwise product of the transforms of µ and ν. The algebra (M_C[a, b), *) is therefore a natural environment for studying notions from probabilistic harmonic analysis, in particular infinite divisibility, Gaussian-type measures and Lévy-type (additive) stochastic processes. As anticipated above, the study of these concepts leads to analogues of chief results in probability theory, such as the Lévy-Khintchine formula or the contraction property of convolution semigroups. The class of Lévy-type processes with respect to the convolution measure algebra includes the diffusion process generated by the Sturm-Liouville expression ℓ, as well as many other Markov processes with discontinuous paths. We hope that this work illuminates the role of product formulas and hyperbolic Cauchy problems in a purely probabilistic problem, that of constructing a class of Lévy-type processes which accommodates a given diffusion process, and stimulates further research on this topic. The remaining sections are organized as follows. In Section 2, after introducing the basic properties of the solution of the Sturm-Liouville equation ℓ(w) = λw, we summarize some key facts from the theory of eigenfunction expansions of Sturm-Liouville operators and from the theory of one-dimensional diffusion processes. Section 3 is devoted to the hyperbolic Cauchy problem associated with ℓ: an existence and uniqueness theorem is proved and, under suitable assumptions, it is shown that the unique solution satisfies a weak maximum principle.
In Section 4, the solution of the hyperbolic Cauchy problem is used to define the generalized convolution of probability measures and the generalized translation of functions; moreover, the Sturm-Liouville transform of finite measures is introduced and an analogue of the Lévy continuity theorem is established, together with some other basic properties. The product formula for the solution of the Sturm-Liouville equation is discussed in Section 5. In Section 6 we establish the basic properties of the generalized convolution as an operator on weighted L p -spaces. Section 7 explores the probabilistic properties of the convolution, demonstrating that the main concepts and facts from the classical theory of infinitely divisible distributions and convolution semigroups can be developed, in a parallel fashion, in the framework of the generalized convolutions considered here. The concluding Section 8 presents several examples and shows that various convolutions associated with standard integral transforms constitute particular cases of the general construction presented here. Preliminaries We use the following standard notations. For a subset E ⊂ R d , C(E) is the space of continuous complexvalued functions on E; C b (E), C 0 (E) and C c (E) are, respectively, its subspaces of bounded continuous functions, of continuous functions vanishing at infinity and of continuous functions with compact support; C k (E) stands for the subspace of k times continuously differentiable functions. B b (E) is the space of complex-valued bounded and Borel measurable functions. The corresponding spaces of real-valued functions are denoted by C(E, R), C b (E, R), etc. L p (E; µ) (1 ≤ p ≤ ∞) denotes the Lebesgue space of complex-valued p-integrable functions with respect to a given measure µ on E. The space of probability (respectively, finite positive, finite complex) Borel measures on E will be denoted by P(E) (respectively, M + (E), M C (E)). 
The total variation of µ ∈ M C (E) is denoted by µ , and δ x denotes the Dirac measure at a point x. Solutions of the Sturm-Liouville equation We begin by collecting some properties of the solutions of the Sturm-Liouville equation ℓ(u) = λu (λ ∈ C), where ℓ is of the form (1.4) and satisfies the boundary condition (1.5). We shall write f [1] = pf ′ and s(x) = x c dξ p(ξ) (this is the so-called scale function, cf. [8]). If the Sturm-Liouville equation is regular at the left endpoint a, it is well-known that there is an entire solution w λ (x) of ℓ(u) = λu satisfying the initial conditions w λ (a) = cos θ, w [1] λ (a) = sin θ (0 ≤ θ < π). When we only require that (1.5) holds (so that a may be an entrance boundary), the following lemma ensures that the same continues to hold for the boundary condition with vanishing derivative (θ = 0): Lemma 2.1. For each λ ∈ C, there exists a unique solution w λ (·) of the boundary value problem ℓ(w) = λw (a < x < b), w(a) = 1, w [1] (a) = 0. (2.1) Moreover, λ → w λ (x) is, for each fixed x, an entire function of exponential type. Proof. The proof is similar to [28,Lemma 3], but for completeness we give a sketch here. Let η 0 (x) = 1, η j (x) = x a s(x) − s(ξ) η j−1 (ξ)r(ξ)dξ (j = 1, 2, . . .). (2.2) Pick an arbitrary β ∈ (a, b) and define S(x) = x a s(β) − s(ξ) r(ξ)dξ. From the boundary assumption (1.5) it follows that 0 ≤ S(x) ≤ S(β) < ∞ for x ∈ (a, β]. Furthermore, it is easy to show (using induction) that |η j (x)| ≤ 1 j! (S(x)) j for all j. Therefore, the function w λ (x) = ∞ j=0 (−λ) j η j (x) (a < x ≤ β, λ ∈ C) is well-defined as an absolutely convergent series. The estimate |w λ (x)| ≤ ∞ j=0 |λ| j (S(x)) j j! = e |λ|S(x) ≤ e |λ|S(β) (a < x ≤ β) shows that λ → w λ (x) is entire and of exponential type. 
In addition, for a < x ≤ β we have 1 − λ x a 1 p(y) y a w λ (ξ) r(ξ)dξ dy = 1 − λ x a (s(x) − s(ξ))w λ (ξ) r(ξ)dξ = 1 − λ x a (s(x) − s(ξ)) ∞ j=0 (−λ) j η j (ξ) r(ξ)dξ = 1 + ∞ j=0 (−λ) j+1 x a (s(x) − s(ξ))η j (ξ) r(ξ)dξ = 1 + ∞ j=0 (−λ) j+1 η j+1 (x) = w λ (x), i.e., w λ (x) satisfies w λ (x) = 1 − λ x a 1 p(y) y a w λ (ξ) r(ξ)dξ dy This integral equation is equivalent to (2.1), so the proof is complete. Throughout this work, {a m } m∈N will denote a sequence b > a 1 > a 2 > . . . with lim a m = a. Next we verify that the solution w λ for the Sturm-Liouville equation on the interval (a, b) is approximated by the corresponding solutions on the intervals (a m , b): Lemma 2.2. For m ∈ N, let w λ,m (x) be the unique solution of the boundary value problem ℓ(w) = λw (a m < x < b), w(a m ) = 1, w [1] (a m ) = 0. (2.3) Then lim m→∞ w λ,m (x) = w λ (x) pointwise for each a < x < b and λ ∈ C. Proof. In the same way as in the proof of Lemma 2.1 we can check that the solution of (2.3) is given by w λ,m (x) = ∞ j=0 (−λ) j η j,m (x) (a m < x < b, λ ∈ C) where η 0,m (x) = 1 and η j,m (x) = x am s(x) − s(ξ) η j−1,m (ξ)r(ξ)dξ. As before we have |η j,m (x)| ≤ 1 j! (S(x)) j for a m < x ≤ β (where S is the function from the proof of Lemma 2.1). Using this estimate and induction on j, it is easy to see that η j,m (x) → η j (x) as m → ∞ (a < x ≤ β, j = 0, 1, . . .). Noting that the estimate on |η j,m (x)| allows us to take the limit under the summation sign, we conclude that w λ,m (x) → w λ (x) as m → ∞ (a < x ≤ β). The following lemma provides a sufficient condition for the solution w λ (·) to be uniformly bounded in the variables x ∈ (a, b) and λ ≥ 0: Lemma 2.3. If x → p(x)r(x) is an increasing function, then the solution of (2.1) is bounded: |w λ (x)| ≤ 1 for all a < x < b, λ ≥ 0. Proof. Let us start by assuming that p(a)r(a) > 0. For λ = 0 the result is trivial because w 0 (x) ≡ 1. Fix λ > 0. 
Multiplying both sides of the differential equation ℓ(w λ ) = λw λ by 2w [1] λ , we obtain − 1 pr [(w [1] λ ) 2 ] ′ = λ(w 2 λ ) ′ . Integrating the differential equation and then using integration by parts, we get λ 1 − w λ (x) 2 = x a 1 p(ξ)r(ξ) w [1] λ (ξ) 2 ′ dξ = w [1] λ (x) 2 p(x)r(x) + x a p(ξ)r(ξ) ′ w [1] λ (ξ) p(ξ)r(ξ) 2 dξ, a < x < b where we also used the fact that w [1] λ (a) = 0 and the assumption that p(a)r(a) > 0. The right hand side is nonnegative, because x → p(x)r(x) is increasing and therefore (p(ξ)r(ξ)) ′ ≥ 0. Given that λ > 0, it follows that 1 − w λ (x) 2 ≥ 0, so that |w λ (x)| ≤ 1. If p(a)r(a) = 0, the above proof can be used to show that the solution of (2.3) is such that |w λ,m (x)| ≤ 1 for all a < x < b, λ ≥ 0 and m ∈ N; then Lemma 2.2 yields the desired result. Remark 2.4. We shall make extensive use of the fact that the differential expression (1.4) can be transformed into the standard form ℓ = − 1 A d dξ A d dξ = − d 2 dξ 2 − A ′ A d dξ . This is achieved by setting A(ξ) := p(γ −1 (ξ)) r(γ −1 (ξ)), (2.4) where γ −1 is the inverse of the increasing function γ(x) = x c r(y) p(y) dy, c ∈ (a, b) being a fixed point (if r(y) p(y) is integrable near a, we may also take c = a). Indeed, it is straightforward to check that a given function ω λ : (a, b) → C satisfies ℓ(ω λ ) = λω λ if and only if ω λ (ξ) := ω λ (γ −1 (ξ)) satisfies ℓ( ω λ ) = λ ω λ . It is interesting to note that the assumption of the previous lemma (x → p(x)r(x) is increasing) is equivalent to requiring that the first-order coefficient A ′ A of the transformed operator ℓ is nonnegative. We also observe that if this assumption holds then we have γ(b) = ∞ (otherwise the left-hand side integral in (1.6) would be finite, contradicting that b is a natural boundary). We have γ(a) > −∞ if a is a regular endpoint (Remark 2.10); if a is entrance, γ(a) can be either finite or infinite. 
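The iterates (2.2) of Lemma 2.1 can be computed in closed form in the Hankel case p(x) = r(x) = x on (0, ∞), where a = 0 is an entrance boundary and s(x) − s(ξ) = log(x/ξ): one finds η_j(x) = x^{2j}/(4^j (j!)²), so that w_λ(x) = Σ_j (−λ)^j η_j(x) = J0(√λ x). This is consistent with Lemma 2.3, since p·r = x² is increasing and |J0| ≤ 1. A symbolic verification of the first few iterates (my sketch, using the closed form as the expected answer):

```python
import sympy as sp

# Lemma 2.1 in the Hankel case p(x) = r(x) = x on (0, oo) (a = 0 is an entrance
# boundary): s(x) - s(xi) = log(x/xi), and the iterates (2.2) become
#   eta_j(x) = Int_0^x log(x/xi) * eta_{j-1}(xi) * xi dxi.
# Expected closed form: eta_j(x) = x^(2j) / (4^j (j!)^2), so that
#   w_lambda(x) = sum_j (-lambda)^j eta_j(x) = J0(sqrt(lambda) x).

x, xi = sp.symbols('x xi', positive=True)
eta = sp.Integer(1)                          # eta_0 = 1
for j in range(1, 4):                        # compute eta_1, eta_2, eta_3
    eta = sp.integrate(sp.log(x / xi) * eta.subs(x, xi) * xi, (xi, 0, x))
    closed = x**(2 * j) / (4**j * sp.factorial(j)**2)
    assert sp.simplify(eta - closed) == 0
```

The bound S(x) in the proof of Lemma 2.1 is finite here precisely because η_1 is: the log singularity of s at the entrance boundary is tamed by the weight r(ξ)dξ = ξ dξ.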
Sturm-Liouville type transforms For simplicity, we shall write L p (r) := L p (a, b); r(x)dx (1 ≤ p < ∞), and the norm of this space will be denoted by · p . It follows from the boundary conditions (1.5)-(1.6) that one obtains a self-adjoint realization of ℓ in the Hilbert space L 2 (r) by imposing the Neumann boundary condition lim x↓a u [1] (x) = 0 at the left endpoint a. We state this well-known fact (cf. [40,35]) as a lemma: Lemma 2.5. The operator L : D (2) L ⊂ L 2 (r) −→ L 2 (r), Lu = ℓ(u) where D(2) L := u ∈ L 2 (r) u and u ′ locally abs. continuous on (a, b), ℓ(u) ∈ L 2 (r), lim x↓a u [1] (x) = 0 (2.5) is self-adjoint. The self-adjoint realization L gives rise to an integral transform, which we will call the L-transform, given by (F h)(λ) := b a h(x) w λ (x) r(x)dx (h ∈ L 1 (r), λ ≥ 0) (2.6) (this is also known as the generalized Fourier transform or the Sturm-Liouville transform). The Ltransform is an isometry with an inverse which can be written as an integral with respect to the so-called spectral measure ρ L : Proposition 2.6. There exists a unique locally finite positive Borel measure ρ L on R such that the map h → F h induces an isometric isomorphism F : L 2 (r) −→ L 2 (R; ρ L ) whose inverse is given by (F −1 ϕ)(x) = R ϕ(λ) w λ (x) ρ L (dλ), the convergence of the latter integral being understood with respect to the norm of L 2 (r). The spectral measure ρ L is supported on [0, ∞). Moreover, the differential operator L is connected with the transform (2.6) via the identity [F (Lh)](λ) = λ·(F h)(λ), h ∈ D (2) L (2.7) and the domain D L defined by (2.5) can be written as D (2) L = u ∈ L 2 (r) λ·(F f )(λ) ∈ L 2 [0, ∞); ρ L . (2.8) Proof. The existence of a generalized Fourier transform associated with the operator L is a consequence of the standard Weyl-Titchmarsh-Kodaira theory of eigenfunction expansions of Sturm-Liouville operators (see [49,Section 3.1] and [59,Section 8]). 
In the general case the eigenfunction expansion is written in terms of two linearly independent eigenfunctions and a 2 × 2 matrix measure. However, from the regular/entrance boundary assumption (1.5) it follows that the function w λ (x) is square-integrable near x = 0 with respect to the measure r(x)dx; moreover, by Lemma 2.1, w λ (x) is (for fixed x) an entire function of λ. Therefore, the possibility of writing the expansion in terms only of the eigenfunction w λ (x) follows from the results of [19, Sections 9 and 10]. It is worth pointing out that the transformation of the Sturm-Liouville operator ℓ into its standard form ℓ (Remark 2.4) leaves the spectral measure unchanged: indeed, it is easily verified that the operator L : D (2) L ⊂ L 2 (A) −→ L 2 (A), Lu = ℓ(u) is unitarily equivalent to the operator L and, consequently, ρ L = ρ L . The following lemma gives a sufficient condition for the inversion integral of the L-transform to be absolutely convergent. Lemma 2.7. (a) For each µ ∈ C \ R, the integrals [0,∞) w λ (x) w λ (y) |λ − µ| 2 ρ L (dλ) and [0,∞) w [1] λ (x) w [1] λ (y) |λ − µ| 2 ρ L (dλ) (2.9) converge uniformly on compact squares in (a, b) 2 . (b) If h ∈ D (2) L , then h(x) = [0,∞) (F h)(λ) w λ (x) ρ L (dλ) (2.10) h [1] (x) = [0,∞) (F h)(λ) w [1] λ (x) ρ L (dλ) (2.11) where the right-hand side integrals converge absolutely and uniformly on compact subsets of (a, b). w λ (x)w λ (y) |λ − µ| 2 ρ L (dλ) = b a G(x, ξ, µ)G(y, ξ, µ) r(ξ)dξ = 1 Im(µ) Im G(x, y, µ) where G(x, y, µ) is the resolvent kernel (or Green function) of the operator (L, D L ). Moreover, according to [19,Theorems 8.3 and 9.6], the resolvent kernel is given by G(x, y, µ) = w µ (x)ϑ µ (y), x < y w µ (y)ϑ µ (x), x ≥ y where ϑ λ (·) is a solution of ℓ(u) = λu which is square-integrable near ∞ with respect to the measure r(x)dx and verifies the identity w λ (x)ϑ [1] λ (x) − w [1] λ (x)ϑ λ (x) ≡ 1. It is easily seen (cf. [41, p. 
125]) that the functions Im G(x, y, µ) and ∂ [1] x ∂ [1] y Im G(x, y, µ) are continuous in 0 < x, y < ∞. Essentially the same proof as that of [41,Corollary 3] now yields that [0,∞) w [1] λ (x) w [1] λ (y) |λ − µ| 2 ρ L (dλ) = 1 Im(µ) ∂ [1] x ∂ [1] y Im G(x, y, µ) and that the integrals (2.9) converge uniformly for x, y in compacts. (b) By Proposition 2.6 and the classical theorem on differentiation under the integral sign for Riemann-Stieltjes integrals, to prove (2.10)-(2.11) it only remains to justify the absolute and uniform convergence of the integrals in the right-hand sides. Recall from Proposition 2.6 that the condition h ∈ D (2) L implies that F h ∈ L 2 [0, ∞); ρ L and also λ (F h)(λ) ∈ L 2 [0, ∞); ρ L . As a consequence, we obtain [0,∞) (F h)(λ)w λ (x) ρ L (dλ) ≤ [0,∞) λ (F h)(λ) w λ (x) λ + i ρ L (dλ) + [0,∞) (F h)(λ) w λ (x) λ + i ρ L (dλ) ≤ λ (F h)(λ) ρ + (F h)(λ) ρ w λ (x) λ + i ρ < ∞ where · ρ denotes the norm of the space L 2 R; ρ L , and similarly [0,∞) (F h)(λ) w [1] λ (x) ρ L (dλ) ≤ λ (F h)(λ) ρ + (F h)(λ) ρ w [1] λ (x) λ + i ρ < ∞. We know from part (a) that the integrals which define w λ (x) λ+i ρ and w [1] λ (x) λ+i ρ converge uniformly, hence the integrals in (2.10)-(2.11) converge absolutely and uniformly for x in compact subsets. Diffusion processes In what follows we write P x0 for the distribution of a given time-homogeneous Markov process started at the point x 0 and E x0 for the associated expectation operator. By an irreducible diffusion process X on an interval I ⊂ R we mean a continuous strong Markov process {X t } t≥0 with state space I and such that P x (τ y < ∞) > 0 for any x ∈ int I and y ∈ I, where τ y = inf{t ≥ 0 | X t = y}. The resolvent {R η } η>0 of such a diffusion (or of a general Feller process) X is defined by R η u = ∞ 0 e −ηt P t u dt, u ∈ C b (I, R), where (P t u)(x) = E x [u(X t )] is the transition semigroup of the process X. 
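In the Hankel case ℓ = −(1/x) d/dx (x d/dx) on (0, ∞), the objects of Proposition 2.6 are all classical: w_λ(x) = J0(√λ x), ρ_L(dλ) = dλ/2 (substitute λ = k², k dk = dλ/2 in the Hankel inversion formula), and F is the Hankel transform of order zero. The sketch below (my illustration; the Gaussian transform pair and Weber's integral are standard special-function identities, not results of the paper) checks the transform of a Gaussian, the Parseval identity, and, anticipating Lemma 2.9 below, the closed form of the fundamental solution p(t, x, y):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, i0, i0e

# Hankel case: p(x) = r(x) = x on (0, oo), w_lambda(x) = J0(sqrt(lambda) x),
# spectral measure rho_L(dlambda) = dlambda / 2.

h = lambda x: np.exp(-x**2 / 2.0)

def F(lam):
    # (F h)(lambda) = Int_0^oo h(x) J0(sqrt(lambda) x) x dx
    return quad(lambda x: h(x) * j0(np.sqrt(lam) * x) * x, 0.0, np.inf)[0]

# Gaussian pair: (F h)(lambda) = exp(-lambda/2)
for lam in (0.3, 1.0, 2.5):
    assert abs(F(lam) - np.exp(-lam / 2.0)) < 1e-6

# Parseval (Proposition 2.6): Int |h|^2 x dx = Int |F h|^2 rho_L(dlambda) = 1/2
lhs = quad(lambda x: h(x)**2 * x, 0.0, np.inf)[0]
rhs = quad(lambda lam: np.exp(-lam) / 2.0, 0.0, np.inf)[0]
assert abs(lhs - 0.5) < 1e-8 and abs(rhs - 0.5) < 1e-8

# Fundamental solution: with lambda = k^2, Weber's integral gives
#   p(t,x,y) = Int_0^oo e^{-t k^2} J0(kx) J0(ky) k dk
#            = exp(-(x^2 + y^2)/(4t)) * I0(xy/(2t)) / (2t),
# the radial part of the two-dimensional heat kernel.
t, x, y = 0.5, 1.0, 1.4
spec = quad(lambda k: np.exp(-t * k**2) * j0(k * x) * j0(k * y) * k, 0.0, np.inf)[0]
closed = np.exp(-(x**2 + y**2) / (4 * t)) * i0(x * y / (2 * t)) / (2 * t)
assert abs(spec - closed) < 1e-6

# conservativity: Int_0^oo p(t,x,y) r(y) dy = 1 with r(y) = y
# (use the scaled i0e(z) = e^{-z} I0(z) to avoid overflow for large y)
mass = quad(lambda yy: np.exp(-(yy - x)**2 / (4 * t))
            * i0e(x * yy / (2 * t)) * yy / (2 * t), 0.0, np.inf)[0]
assert abs(mass - 1.0) < 1e-6
```

The last assertion is the Hankel-case instance of the conservative Feller semigroup of Lemma 2.8 below: the transition kernel p(t, x, y) r(y) dy has total mass one.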
The C b -generator (G, D(G)) of X is the operator with domain D(G) = R η C b (I, R) (η > 0) and defined by (Gu)(x) = ηu(x) − g(x) for u = R η g, g ∈ C b (I, R), x ∈ I (G is independent of η, cf. [24, p. 295]). A Feller semigroup is a family {T t } t≥0 of operators T t : C b (I, R) −→ C b (I, R) satisfying (i) T t T s = T t+s for all t, s ≥ 0; (ii) T t C 0 (I, R) ⊂ C 0 (I, R) for all t ≥ 0; (iii) If h ∈ C b (I, R) satisfies 0 ≤ h ≤ 1, then 0 ≤ T t h ≤ 1; (iv) lim t↓0 T t h − h ∞ = 0 for each h ∈ C 0 (I, R). The Feller semigroup is said to be conservative if T t 1 = 1 (here 1 denotes the function identically equal to one). A Feller process is a time-homogeneous Markov process {X t } t≥0 whose transition semigroup is a Feller semigroup. For further background on the theory of Markov diffusion processes and Feller semigroups, we refer to [8] and references therein. We now recall a known fact from the theory of (one-dimensional) diffusion processes, namely that the negative of the Sturm-Liouville differential operator (1.4) generates a diffusion process which is conservative and has the Feller property. The proof can be found on [24, Sections 4 and 6] (see also [39, Section II.5]). Lemma 2.8. The operator L (b) : D (b) L ⊂ C b ([a, b), R) −→ C b ([a, b), R), L (b) u = −ℓ(u) with domain D (b) L = u ∈ C b ([a, b), R) ℓ(u) ∈ C b ([a, b), R), lim x↓a u [1] (x) = 0 is the C b -generator of a one-dimensional irreducible diffusion process X = {X t } t≥0 with state space [a, b) whose transition semigroup defines a conservative Feller semigroup on C 0 ([a, b), R). The transition probabilities of the one-dimensional diffusion process from the previous lemma admits an explicit representation as the inverse L-transform of the function e −tλ w λ (x): Lemma 2.9. 
The transition semigroup admits the representation (P_t h)(x) = ∫_a^b h(y) p(t, x, y) r(y)dy (h ∈ B_b([a, b), R), t > 0, a < x < b), where p(t, x, y) is a nonnegative function which is called the fundamental solution of the parabolic equation ∂u/∂t = −ℓ_x u (the subscript indicates the variable in which the operator ℓ acts). The fundamental solution and its derivatives are explicitly given by (∂_t^n p)(t, x, y) = ∫_{[0,∞)} λ^n e^{−tλ} w_λ(x) w_λ(y) ρ_L(dλ), (∂_x^[1] ∂_t^n p)(t, x, y) = ∫_{[0,∞)} λ^n e^{−tλ} w_λ^[1](x) w_λ(y) ρ_L(dλ) for n ∈ N_0, where, for fixed t > 0, the integrals converge absolutely and uniformly on compact squares of (a, b) × (a, b). Proof. These assertions are a consequence of the results of [35, Sections 2-3] and [40, Section 4]. We mention also that another consequence of the results of [35, Section 3] is that for h ∈ L^2(r) the expectation of h(X_t) can be written in terms of the L-transform as E_x[h(X_t)] = ∫_{[0,∞)} e^{−tλ} w_λ(x) (F h)(λ) ρ_L(dλ) (t > 0, a < x < b), where the integral converges with respect to the norm of L^2(r). Remark 2.10. Let X be a one-dimensional diffusion process on an interval with endpoints a and b, whose C_b-generator is of the form (1.4). Let I_a = ∫_a^c (∫_a^y dx/p(x)) r(y)dy, J_a = ∫_a^c (∫_y^c dx/p(x)) r(y)dy. According to Feller's boundary classification for the diffusion X, the left endpoint a is called regular if I_a < ∞, J_a < ∞; exit if I_a < ∞, J_a = ∞; entrance if I_a = ∞, J_a < ∞; natural if I_a = ∞, J_a = ∞ (the right endpoint is classified in a similar way). The probabilistic meaning of this classification is the following [8, Chapter II]: an irreducible diffusion can be started from the boundary a if and only if a is regular or entrance; the boundary a is reached from x_0 ∈ (a, b) with positive probability by an irreducible diffusion if and only if a is regular or exit.
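As an illustration of Feller's classification (a standard example, not taken from the argument above), consider the Bessel-type operator with p(x) = r(x) = x^{2α+1} on (0, ∞), α ≥ 0. The following computation shows that the left endpoint 0 is an entrance boundary, so that the standing assumption (1.5) is satisfied:

```latex
% Bessel-type example: p(x) = r(x) = x^{2\alpha+1} on (0,\infty), with \alpha > 0.
\ell u = -x^{-(2\alpha+1)}\,\frac{d}{dx}\Bigl(x^{2\alpha+1}\,\frac{du}{dx}\Bigr),
\qquad p(x) = r(x) = x^{2\alpha+1}.
% The inner integral of I_0 diverges at the origin:
\int_0^y \frac{dx}{p(x)} = \int_0^y x^{-(2\alpha+1)}\,dx = \infty
\quad\Longrightarrow\quad I_0 = \infty,
% while J_0 is finite, since \int_y^c x^{-(2\alpha+1)}dx \le y^{-2\alpha}/(2\alpha):
J_0 = \int_0^c \Bigl(\int_y^c x^{-(2\alpha+1)}\,dx\Bigr)\, y^{2\alpha+1}\,dy
 \;\le\; \frac{1}{2\alpha}\int_0^c y\,dy \;<\; \infty.
```

Hence I_0 = ∞ and J_0 < ∞, i.e. 0 is entrance (for α = 0 the inner integral of J_0 is log(c/y) and the same conclusion holds).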
Our standing assumption (1.5) on the coefficients of the Sturm-Liouville operator means that a is a regular or an entrance boundary for the diffusion process X generated by ℓ. It is clear from the preceding remarks that Lemma 2.8 relies crucially on this assumption. The same is true for some of the results of the previous subsections: in fact, Lemma 2.1 fails if a is exit or natural [26,, and the boundary conditions defining D (2) L differ from those in (2.5) when a is exit or natural [40]. In turn, the assumption (1.6) means that b is a natural boundary for the diffusion X. Since one can show that (1.6) is automatically satisfied whenever Assumption MP below holds [48, Proposition 3.5], this boundary assumption at b yields no loss of generality on our results concerning product formulas and generalized convolutions. The hyperbolic equation ℓ x f = ℓ y f In this section we investigate the (possibly degenerate) hyperbolic Cauchy problem (ℓ x f )(x, y) = (ℓ y f )(x, y) (x, y ∈ (a, b)), f (x, a) = h(x), (∂ [1] y f )(x, a) = 0 (3.1) where ∂ [1] u = pu ′ , ℓ is the Sturm-Liouville operator (1.4) , and the subscripts indicate the variable in which the operators act. Since ℓ y − ℓ x = p(x) r(x) ∂ 2 ∂x 2 − p(y) r(y) ∂ 2 ∂y 2 + lower order terms, the equation ℓ x f = ℓ y f is hyperbolic at the line y = a if p(a) r(a) > 0; otherwise, the initial conditions of the Cauchy problem are given at a line of parabolic degeneracy. If γ(a) = − c a r(y) p(y) dy > −∞, then we can remove the degeneracy via the change of variables x = γ(ξ), y = γ(ζ) (cf. Remark 2.4), through which the partial differential equation is transformed to the standard form ℓ ξ u = ℓ ζ u, with initial condition at the line ζ = γ(a). In the case γ(a) = −∞, the standard form of the equation is also parabolically degenerate in the sense that its initial line is ζ = −∞. 
Existence and uniqueness of solution We start by proving a result which not only assures the existence of solution for Cauchy problems with well-behaved initial conditions but also provides an explicit representation for the solution as an inverse L-transform: Theorem 3.1 (Existence of solution). Suppose that x → p(x)r(x) is an increasing function. If h ∈ D (2) L and ℓ(h) ∈ D (2) L , then the function f h (x, y) := [0,∞) w λ (x) w λ (y) (F h)(λ) ρ L (dλ) (3.2) solves the Cauchy problem (3.1). For ease of notation, unless necessary we drop the dependence in h and denote (3.2) by f (x, y). Proof. Let us begin by justifying that ℓ x f can be computed via differentiation under the integral sign. It follows from (2.1) that w [1] λ (x) = −λ x a w λ (ξ) r(ξ)dξ and therefore (by Lemma 2.3) |w [1] λ (x)| ≤ λ x a r(ξ)dξ. Hence [0,∞) (F h)(λ) w [1] λ (x) w λ (y) ρ L (dλ) ≤ x a r(ξ)dξ · [0,∞) λ (F h)(λ) w λ (y) ρ L (dλ) < ∞,(3.3) where the convergence (which is uniform on compacts) follows from (2.7) and Lemma 2.7(b). From the convergence of the differentiated integral we conclude that ∂ [1] x f (x, y) = [0,∞) (F h)(λ) w [1] λ (x) w λ (y) ρ L (dλ). Since (ℓw λ )(x) = λw λ (x), in the same way we check that [0,∞) (F h)(λ) (ℓw λ )(x) w λ (y) ρ L (dλ) converges absolutely and uniformly on compacts and is therefore equal to (ℓ x f )(x, y). Consequently, (ℓ x f )(x, y) = (ℓ y f )(x, y) = [0,∞) λ (F h)(λ) w λ (x) w λ (y) ρ L (dλ). (3.4) Concerning the boundary conditions, Lemma 2.7(b) together with the fact that w λ (a) = 1 imply that f (x, a) = h(x), and from (3.3) we easily see that lim y↓a ∂ [1] y f (x, y) = 0. This shows that f is a solution of the Cauchy problem (3.1). Under the assumptions of the theorem, the solution (3.2) of the hyperbolic Cauchy problem satisfies f (·, y) ∈ D (2) L for all a < y < b, (3.5) F [ℓ y f (·, y)](λ) = ℓ y [F f (·, y)](λ) for all a < y < b, (3.6) lim y↓a [F f (·, y)](λ) = (F h)(λ), (3.7) lim y↓a ∂ [1] y F [f (·, y)](λ) = 0. 
(3.8) Indeed, by Proposition 2.6 we have [F f (·, y)](λ) = (F h)(λ) w λ (y) for all λ ∈ supp(ρ L ) and a < y < b. Since h ∈ D (2) L and |w λ (·)| ≤ 1 (Lemma 2.3), it is clear from (2.8) that f (x, y) satisfies (3.5). Moreover, it follows from (3.4) that F [ℓ y f j (·, y)](λ) = λ (F h)(λ) w λ (y) = ℓ y [F f j (·, y)](λ), hence (3.6) holds. The properties (3.7)-(3.8) follow immediately from Lemma 2.1. Next we show that the solution from the above existence theorem is the unique solution satisfying the conditions (3.5)-(3.8): Theorem 3.2 (Uniqueness). Let h ∈ D (2) L . Let f 1 , f 2 ∈ C 2 (a, b) 2 be two solutions of (ℓ x f )(x, y) = (ℓ y f )(x, y). For f ∈ {f 1 , f 2 }, suppose that (3.5) holds and that there exists a zero ρ L -measure set Λ 0 ⊂ [0, ∞) such that (3.6)-(3.8) hold for each λ ∈ [0, ∞) \ Λ 0 . Then f 1 (x, y) ≡ f 2 (x, y) for all x, y ∈ (a, b). (3.9) Proof. Fix λ ∈ [0, ∞) \ Λ 0 and let Ψ j (y, λ) := [F f j (·, y)](λ). We have ℓ y Ψ j (y, λ) = F [ℓ y f j (·, y)](λ) = F [ℓ x f j (·, y)](λ) = λΨ j (y, λ), a < y < b where the first equality is due to (3.6) and the last step follows from (2.7). Moreover, lim y↓a Ψ j (y, λ) = (F h)(λ) and lim y↓a ∂ [1] y Ψ j (y, λ) = 0 by (3.7) and (3.8), respectively. It thus follows from Lemma 2.1 that [F f j (·, y)](λ) = Ψ j (y, λ) = (F h)(λ) w λ (y), a < y < b. This equality holds for ρ L -almost every λ, so the isometric property of F gives f 1 (·, y) = f 2 (·, y) Lebesguealmost everywhere; since the f j are continuous, we conclude that (3.9) holds. We emphasize that the two previous propositions, in particular, ensure that there exists a unique solution for the Cauchy problem (3.1) with initial condition h ∈ C 4 c,0 := u ∈ C 4 c [a, b) ℓ(u), ℓ 2 (u) ∈ C c [a, b), lim x↓a u [1] (x) = lim x↓a [ℓ(u)] [1] (x) = 0 (clearly, if h ∈ C 4 c,0 then h, ℓ(h) ∈ D (2) L ). 
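For orientation, a classical special case (offered here as an illustration; it plays no role in the argument above): for the Bessel-type operator with p = r = x^{2α+1} on (0, ∞), the hyperbolic equation ℓ_x f = ℓ_y f is the Euler-Poisson-Darboux equation, and the kernel w_λ is a normalized Bessel function:

```latex
% Bessel case on (0,\infty): p(x) = r(x) = x^{2\alpha+1}.
\ell_x f = \ell_y f
\;\Longleftrightarrow\;
\partial_x^2 f + \frac{2\alpha+1}{x}\,\partial_x f
 = \partial_y^2 f + \frac{2\alpha+1}{y}\,\partial_y f
\qquad\text{(Euler--Poisson--Darboux equation)},
% with solution kernel w_\lambda(x) = j_\alpha(x\sqrt{\lambda}), where
j_\alpha(z) := \Gamma(\alpha+1)\,\Bigl(\frac{2}{z}\Bigr)^{\alpha} J_\alpha(z),
\qquad j_\alpha(0) = 1,
% i.e. j_\alpha(x\sqrt{\lambda}) solves w'' + \frac{2\alpha+1}{x}w' + \lambda w = 0,
% \ w(0) = 1, \ w'(0) = 0.
```

In this case the representation (3.2) is the classical Hankel-transform solution formula for the Euler-Poisson-Darboux equation.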
If the hyperbolic equation ℓ_x f = ℓ_y f (or the transformed equation ℓ_ξ u = ℓ_ζ u) is uniformly hyperbolic, the existence and uniqueness of solution for this Cauchy problem is a standard result which follows from the classical theory of hyperbolic problems in two variables (see e.g. [16, Chapter V]); in fact, existence and uniqueness hold under much weaker restrictions on the initial condition. However, our existence and uniqueness result becomes nontrivial in the presence of a (non-removable) parabolic degeneracy at the initial line. Indeed, even though many authors have addressed Cauchy problems for degenerate hyperbolic equations in two variables, most studies are restricted to equations where the ∂²/∂x² term vanishes at an initial line y = y_0 (we refer to [5, §2.3], [43, Section 5.4] and references therein). Much less is known for hyperbolic equations whose ∂²/∂y² term vanishes at the same initial line: it is known that the Cauchy problem is, in general, not well-posed, and the relevance of determining sufficient conditions for its well-posedness has long been pointed out [5, §2.4], but as far as we are aware little progress has been made on this problem (for related work see [38]). The application of spectral techniques to hyperbolic Cauchy problems associated with Sturm-Liouville operators is by no means new, see e.g. [11,10] and references therein; however, it seems that such techniques had never been applied to degenerate cases. It is helpful to know that an existence theorem analogous to Theorem 3.1 holds when the initial line is shifted away from the degeneracy, because this will allow us to justify that the solution of the degenerate Cauchy problem is the pointwise limit of solutions of nondegenerate problems. Proposition 3.3. Suppose that x → p(x)r(x) is an increasing function, and let m ∈ N.
If h ∈ D_L^(2) and ℓ(h) ∈ D_L^(2), then the function f_m(x, y) = ∫_{[0,∞)} w_λ(x) w_{λ,m}(y) (F h)(λ) ρ_L(dλ) (3.10) is a solution of the Cauchy problem (ℓ_x f_m)(x, y) = (ℓ_y f_m)(x, y), a < x < b, a_m < y < b; f_m(x, a_m) = h(x), a < x < b; (∂_y^[1] f_m)(x, a_m) = 0, a < x < b. (3.11) Proof. Let us begin by justifying that ∂_x^[1] f_m(x, y) and (ℓ_x f_m)(x, y) can be computed via differentiation under the integral sign. The differentiated integrals are given by ∫_{[0,∞)} w_λ^[1](x) w_{λ,m}(y) (F h)(λ) ρ_L(dλ) (3.12) and ∫_{[0,∞)} w_λ(x) w_{λ,m}(y) [F(ℓ(h))](λ) ρ_L(dλ) (3.13) (for the latter, we used the identities (ℓw_λ)(x) = λw_λ(x) and (2.7)), and their absolute and uniform convergence on compacts follows from the fact that h, ℓ(h) ∈ D_L^(2), together with Lemma 2.7(b) and the inequality |w_{λ,m}(·)| ≤ 1 (which follows from Lemma 2.3 if we replace a by a_m). This justifies that ∂_x^[1] f_m(x, y) and (ℓ_x f_m)(x, y) are given by (3.12), (3.13) respectively. We also need to ensure that ∂_y^[1] f_m(x, y) and (ℓ_y f_m)(x, y) are given by the corresponding differentiated integrals, and to that end we must check that ∫_{[0,∞)} w_λ(x) w_{λ,m}^[1](y) (F h)(λ) ρ_L(dλ) converges absolutely and uniformly. Indeed, it follows from (2.3) that for y ≥ a_m we have w_{λ,m}^[1](y) = λ ∫_{a_m}^y w_{λ,m}(ξ) r(ξ)dξ and consequently |w_{λ,m}^[1](y)| ≤ λ ∫_{a_m}^y r(ξ)dξ; hence ∫_{[0,∞)} |w_λ(x) w_{λ,m}^[1](y) (F h)(λ)| ρ_L(dλ) ≤ ∫_{a_m}^y r(ξ)dξ · ∫_{[0,∞)} λ |w_λ(x)(F h)(λ)| ρ_L(dλ) (3.14) and the uniform convergence on compacts follows from (2.7) and Lemma 2.7(b). The verification of the boundary conditions is straightforward: Lemma 2.7(b) together with the fact that w_{λ,m}(a_m) = 1 imply that f_m(x, a_m) = h(x), and from (3.14) we easily see that (∂_y^[1] f_m)(x, a_m) = 0. This shows that f_m is a solution of the Cauchy problem (3.11). Corollary 3.4. Suppose that x → p(x)r(x) is an increasing function. Let h ∈ D_L^(2) and consider the functions f_m, f defined by (3.10), (3.2).
Then lim m→∞ f m (x, y) = f (x, y) pointwise for each x, y ∈ (a, b). Proof. Since w λ,m (y) → w λ (y) pointwise as m → ∞ (Lemma 2.2), the conclusion follows from the dominated convergence theorem (which is applicable due to Lemmas 2.3 and 2.7(b)). Maximum principle and positivity of solution After having shown that the Cauchy problem is well-posed whenever the function x → p(x)r(x) is increasing, we introduce a stronger assumption on the coefficients which will be seen to be sufficient for a maximum principle to hold for the hyperbolic equation ℓ x f = ℓ y f and, in consequence, for the solution of the Cauchy problem (3.1) to preserve positivity and boundedness of its initial condition. We shall rely on the transformation of ℓ into the standard form (Remark 2.4); in the following assumption, A is the function defined in (2.4). Assumption MP. There exists η ∈ C 1 (γ(a), ∞) such that η ≥ 0, φ η := A ′ A − η ≥ 0, and the functions φ η and ψ η := 1 2 η ′ − 1 4 η 2 + A ′ 2A ·η are both decreasing on (γ(a), ∞). Observe that Assumption MP allows for γ(a) = −∞ (this will enable us to treat the case of nonremovable degeneracy), and it does not include the left endpoint in the interval where the conditions on η are imposed. This assumption is therefore a generalization of an assumption introduced by Zeuner, cf. Example 8.5 below. The proof of the maximum principle presented in the sequel is based on [63, Proposition 3.7] and on the maximum principles of [60]. The key tool is the integral identity which we now state: Lemma 3.6. Let ℓ B be the differential expression ℓ B v := −v ′′ − φ η v ′ + ψ η v. For γ(a) < c ≤ y ≤ x, consider the triangle ∆ c,x,y := {(ξ, ζ) ∈ R 2 | ζ ≥ c, ξ + ζ ≤ x + y, ξ − ζ ≥ x − y}, and let v ∈ C 2 (∆ c,x,y ). Write B(x) := exp( 1 2 x β η(ξ)dξ) (with β > γ(a) arbitrary) and A B (x) = A(x) B(x) 2 . 
Then the following integral equation holds: A B (x)A B (y) v(x, y) = H + I 0 + I 1 + I 2 + I 3 − I 4 (3.15) where H := 1 2 A B (c) A B (x − y + c) v(x − y + c, c) + A B (x + y − c) v(x + y − c, c)] I 0 := 1 2 A B (c) x+y−c x−y+c A B (s)(∂ y v)(s, c) ds I 1 := 1 2 y c A B (s)A B (x − y + s) φ η (s) + φ η (x − y + s) v(x − y + s, s) ds I 2 := 1 2 y c A B (s)A B (x + y − s) φ η (s) − φ η (x + y − s) v(x + y − s, s) ds I 3 := 1 2 ∆c,x,y A B (ξ)A B (ζ) ψ η (ζ) − ψ η (ξ) v(ξ, ζ) dξdζ I 4 := 1 2 ∆c,x,y A B (ξ)A B (ζ) (ℓ B ζ v − ℓ B ξ v)(ξ, ζ) dξdζ. Proof. Just compute I 4 − I 3 = 1 2 ∆c,x,y ∂ ∂ξ A B (ξ)A B (ζ) (∂ ξ v)(ξ, ζ) − ∂ ∂ζ A B (ξ)A B (ζ) (∂ ζ v)(ξ, ζ) dξdζ = I 0 − 1 2 y 0 A B (s)A B (x − y + s) (∂ ζ v + ∂ ξ v)(x − y + s, s) ds − 1 2 y 0 A B (s)A B (x + y − s) (∂ ζ v − ∂ ξ v)(x + y − s, s) ds = I 0 + I 1 − y c d ds A B (s)A B (x − y + s) v(x − y + s, s) ds + I 2 − y c d ds A B (s)A B (x − y + s) v(x − y + s, s) ds where in the second equality we used Green's theorem, and the third equality follows easily from the fact that (A B ) ′ = φ η A B . Theorem 3.7 (Weak maximum principle). Suppose Assumption MP holds, and let γ( a) < c ≤ y 0 ≤ x 0 . If u ∈ C 2 (∆ c,x0,y0 ) satisfies ( ℓ x u − ℓ y u)(x, y) ≤ 0, (x, y) ∈ ∆ c,x0,y0 u(x, c) ≥ 0, x ∈ [x 0 − y 0 + c, x 0 + y 0 − c] (∂ y u)(x, c) + 1 2 η(c)u(x, c) ≥ 0, x ∈ [x 0 − y 0 + c, x 0 + y 0 − c] (3.16) then u ≥ 0 in ∆ c,x0,y0 . Proof. Pick a function ω ∈ C 2 [c, ∞) such that ℓ B ω < 0, ω(c) > 0 and ω ′ (c) ≥ 0. Clearly, it is enough to show that for all δ > 0 we have v(x, y) := B(x)B(y)u(x, y) + δω(y) > 0 for (x, y) ∈ ∆ c,x0,y0 . Assume by contradiction that there exist δ > 0, (x, y) ∈ ∆ c,x0,y0 for which we have v(x, y) = 0 and v(ξ, ζ) ≥ 0 for all (ξ, ζ) ∈ ∆ c,x,y ⊂ ∆ c,x0,y0 . It is clear from the choice of ω that v(·, c) > 0, thus we have H ≥ 0 in the right hand side of (3.15). Similarly, (∂ y v)(·, c) = B(x)B(y) (∂ y u)(·, c) + 1 2 η(c)u(·, c) + δω ′ (c) ≥ 0, hence I 0 ≥ 0. 
Since φ_η is positive and decreasing and ψ_η is decreasing (cf. Assumption MP) and we are assuming that u ≥ 0 on ∆_{c,x,y}, it follows that I_1 ≥ 0, I_2 ≥ 0 and I_3 ≥ 0. In addition, I_4 < 0 because (ℓ^B_ζ v − ℓ^B_ξ v)(ξ, ζ) = B(x)B(y)(ℓ_ζ u − ℓ_ξ u)(ξ, ζ) + (ℓ^B ω)(ζ) < 0. Consequently, (3.15) yields 0 = A_B(x)A_B(y)v(x, y) ≥ −I_4 > 0. This contradiction shows that v(x, y) > 0 for all (x, y) ∈ ∆_{c,x_0,y_0}. Naturally, this weak maximum principle can be restated in terms of the operator ℓ = −(1/r) d/dx (p d/dx); this is left to the reader. As anticipated above, the positivity-preserving property of the Cauchy problem is a by-product of the maximum principle. Corollary 3.9 (Positivity of solution). If h ∈ D_L^(2), ℓ(h) ∈ D_L^(2) and h ≥ 0, then the function f given by (3.2) is such that f(x, y) ≥ 0 for x, y ∈ (a, b). If, in addition, h ≤ C, then f(x, y) ≤ C for x, y ∈ (a, b). Note that the conclusion holds for all x, y ∈ (a, b) because the function f(x, y) is symmetric. Sturm-Liouville translation and convolution Assumption MP will always be in force throughout this and the subsequent sections. Definition and first properties In view of the comments made in the Introduction, it is natural to define the L-convolution µ * ν (µ, ν ∈ M_C[a, b)) in order that, for sufficiently well-behaved initial conditions, the integral ∫_{[a,b)} h(ξ) (δ_x * δ_y)(dξ) coincides with the solution f_h(x, y) of the Cauchy problem (3.1). The starting point is the following consequence of Corollary 3.9: Proposition 4.1. For each x, y ∈ [a, b) there exists a subprobability measure ν_{x,y} ∈ M_+[a, b) such that f_h(x, y) = ∫_{[a,b)} h(ξ) ν_{x,y}(dξ) for all h ∈ C^4_{c,0}. (4.1) Proof. For each fixed x, y ∈ [a, b), the right hand side of (3.2) defines a linear functional C^4_{c,0} ∋ h → f_h(x, y) ∈ C. By Corollary 3.9, |f_h(x, y)| ≤ ‖h‖_∞ for h ∈ C^4_{c,0}. Thus it follows from the Hahn-Banach theorem that this functional admits a linear extension T_{x,y}: C_0[a, b) → C such that |T_{x,y} h| ≤ ‖h‖_∞ for all h ∈ C_0[a, b). According to the Riesz representation theorem (cf. [14, Theorem 7.3.6]), M_C[a, b) is the dual of C_0[a, b); we thus have T_{x,y} h = ∫_{[a,b)} h(ξ) ν_{x,y}(dξ), where ν_{x,y} is a finite complex measure with ‖ν_{x,y}‖ ≤ 1.
Finally, the fact that ∫_{[a,b)} h(ξ) ν_{x,y}(dξ) ≡ f_h(x, y) ≥ 0 for all h ∈ C^4_{c,0}, h ≥ 0 (Corollary 3.9) yields that ν_{x,y} ∈ M_+[a, b) is a subprobability measure. Definition 4.2. Let µ, ν ∈ M_C[a, b). The measure (µ * ν)(·) = ∫_{[a,b)} ∫_{[a,b)} ν_{x,y}(·) µ(dx) ν(dy) is called the L-convolution of the measures µ and ν. The L-translation of a function h ∈ B_b([a, b), R) is defined as (T_y h)(x) = ∫_{[a,b)} h(ξ) ν_{x,y}(dξ) ≡ ∫_{[a,b)} h(ξ) (δ_x * δ_y)(dξ), x, y ∈ [a, b). It follows from this definition, together with (3.2), that the L-convolution is such that (for µ_1, µ_2, ν, π ∈ M_C[a, b) and p_1, p_2 ∈ C): (i) µ * ν = ν * µ (Commutativity); (ii) (µ * ν) * π = µ * (ν * π) (Associativity); (iii) (p_1 µ_1 + p_2 µ_2) * ν = p_1(µ_1 * ν) + p_2(µ_2 * ν) (Bilinearity); (iv) ‖µ * ν‖ ≤ ‖µ‖ · ‖ν‖ (Submultiplicativity); (v) If µ, ν ∈ M_+[a, b), then µ * ν ∈ M_+[a, b) (Positivity). Summarizing this, we have: Proposition 4.3. The space (M_C[a, b), *), equipped with the total variation norm, is a commutative Banach algebra over C whose identity element is the Dirac measure δ_a. Moreover, M_+[a, b) is an algebra cone (i.e. it is closed under L-convolution, addition and multiplication by positive scalars, and it contains the identity element). For µ ∈ M_C[a, b), the L-translation operator T_µ is defined by (T_µ h)(x) := ∫_{[a,b)} (T_y h)(x) µ(dy) ≡ ∫_{[a,b)} h(ξ) (δ_x * µ)(dξ) (h ∈ B_b([a, b), R)) (so that T_x ≡ T_{δ_x} for a ≤ x < b). It is easy to see that ‖T_µ h‖_∞ ≤ ‖µ‖ · ‖h‖_∞ for all h ∈ B_b([a, b), R) and µ ∈ M_C[a, b). Observe also that for h ∈ C^4_{c,0} we can write (by (3.2) and (4.1)) (T_µ h)(x) = ∫_{[0,∞)} (F h)(λ) w_λ(x) µ̂(λ) ρ_L(dλ) (h ∈ C^4_{c,0}) (4.2) or equivalently (cf. Proposition 2.6) [F(T_µ h)](λ) = µ̂(λ)(F h)(λ) (h ∈ C^4_{c,0}). (4.3) Due to Lemma 2.7, the integral (4.2) converges absolutely and uniformly for x on compact subsets of (a, b). Sturm-Liouville transform of measures An important tool for the subsequent analysis is the extension of the L-transform (2.6) to finite complex measures, defined as follows: Definition 4.5. Let µ ∈ M_C[a, b).
The L-transform of the measure µ is the function defined by the integral µ(λ) = [a,b) w λ (x) µ(dx), λ ≥ 0. The next proposition contains some basic properties of the L-transform of measures which, as one would expect, resemble those of the ordinary Fourier transform (or characteristic function) of finite measures. We recall that, by definition, the complex measures µ n converge weakly to µ ∈ M C [a, b) if lim n [a,b) g(ξ)µ n (dξ) = [a,b) g(ξ)µ(dξ) for all g ∈ C b [a, b). We also recall that a family {µ j } ⊂ M C [a, b) is said to be uniformly bounded if sup j µ j < ∞, and {µ j } is said to be tight if for each ε > 0 there exists a compact K ε ⊂ [a, b) such that sup j |µ j |([a, b) \ K ε ) < ε. (These definitions are taken from [7].) In the sequel, the notation µ n w −→ µ denotes weak convergence of measures. (d) Suppose that lim x↑b w λ (x) = 0 for all λ > 0. If {µ n } is a sequence of measures belonging to M + [a, b) whose L-transforms are such that µ n (λ) − −−− → n→∞ f (λ) pointwise in λ ≥ 0 (4.4) for some real-valued function f which is continuous at a neighborhood of zero, then µ n w −→ µ for some measure µ ∈ M + [a, b) such that µ ≡ f . Proof. (a) Let us prove the second statement, which implies the first. Set C = sup j µ j . Fix λ 0 ≥ 0 and ε > 0. By the tightness assumption, we can choose β ∈ (a, b) such that |µ j |(β, b) < ε for all j. Since the family {w (·) (x)} x∈(a,β] is equicontinuous on [0, ∞) (this follows easily from the power series representation of w (·) (x), cf. proof of Lemma 2.1), we can choose δ > 0 such that |λ − λ 0 | < δ =⇒ |w λ (x) − w λ0 (x)| < ε for all a < x ≤ β. Consequently, µ j (λ) − µ j (λ 0 ) = (a,b) w λ (x) − w λ0 (x) µ j (dx) ≤ (β,b) w λ (x) − w λ0 (x) |µ j |(dx) + (a,β] w λ (x) − w λ0 (x) |µ j |(dx) ≤ 2ε + Cε = (C + 2)ε for all j, provided that |λ − λ 0 | < δ, which means that { µ j } is equicontinuous at λ 0 . (b) Let µ ∈ M C [a, b) be such that µ(λ) = 0 for all λ ≥ 0. We need to show that µ is the zero measure. 
For each h ∈ C 4 c,0 , by (4.2) we have (T µ h)(x) = [0,∞) (F h)(λ) w λ (x) µ(λ) ρ L (dλ) = 0. Since h ∈ C 4 c,0 , Theorem 3.1 assures that lim x↓a (T y h)(x) = h(y) for y ≥ 0; therefore, by dominated convergence (which is applicable because T y h ∞ ≤ h ∞ < ∞), 0 = lim x↓a (T µ h)(x) = lim x↓a [a,b) (T y h)(x) µ(dy) = [a,b) h(y) µ(dy) This shows that [a,b) h(y) µ(dy) = 0 for all h ∈ C 4 c,0 and, consequently, µ is the zero measure. (c) Since w λ (·) is continuous and bounded, the pointwise convergence µ n (λ) → µ(λ) follows from the definition of weak convergence of measures. By Prokhorov's theorem [7, Theorem 8.6.2], {µ n } is tight and uniformly bounded, thus (by part (i)) { µ n } is equicontinuous on [0, ∞). Invoking [31, Lemma 15.22], we conclude that the convergence µ n → µ is uniform for λ in compact sets. (d) We only need to show that the sequence {µ n } is tight and uniformly bounded. Indeed, if {µ n } is tight and uniformly bounded, then Prokhorov's theorem yields that for any subsequence {µ n k } there exists a further subsequence {µ n k j } and a measure µ ∈ M + [a, b) such that µ n k j w −→ µ. Then, due to part (iii) and to (4.4), we have µ(λ) = f (λ) for all λ ≥ 0, which implies (by part (ii)) that all such subsequences have the same weak limit; consequently, the sequence µ n itself converges weakly to µ. The uniform boundedness of {µ n } follows immediately from the fact that µ n (0) = µ n [a, b) converges. To prove the tightness, take ε > 0. Since f is continuous at a neighborhood of zero, we have 1 δ 2δ 0 f (0) − f (λ) dλ −→ 0 as δ ↓ 0; therefore, we can choose δ > 0 such that 1 δ 2δ 0 f (0) − f (λ) dλ < ε. Next we observe that, due to the assumption that lim x↑b w λ (x) = 0 for all λ > 0, we have 2δ 0 1 − w λ (x) dλ −→ 2δ as x ↑ b, meaning that we can pick β ∈ (a, b) such that 2δ 0 1 − w λ (x) dλ ≥ δ for all β < x < b. 
By our choice of β and Fubini's theorem, µ n β, b) = 1 δ [β,b) δ µ n (dx) ≤ 1 δ [β,b) 2δ 0 1 − w λ (x) dλ µ n (dx) ≤ 1 δ [a,b) 2δ 0 1 − w λ (x) dλ µ n (dx) = 1 δ 2δ 0 µ n (0) − µ n (λ) dλ. Hence, using the dominated convergence theorem, lim sup n→∞ µ n [β, b) ≤ 1 δ lim sup n→∞ 2δ 0 µ n (0) − µ n (λ) dλ = 1 δ 2δ 0 lim n→∞ µ n (0) − µ n (λ) dλ = 1 δ 2δ 0 f (0) − f (λ) dλ < ε due to the choice of δ. Since ε is arbitrary, we conclude that {µ n } is tight, as desired. The product formula We saw in the previous section that the hyperbolic maximum principle allows us to introduce a convolution measure algebra associated with the Sturm-Liouville operator. The next aims are to develop harmonic analysis on L p spaces and to study notions such as the continuity of the convolution or the divisibility of measures. However, this requires a fundamental tool, namely the trivialization property δ x * δ y = δ x · δ y for the L-transform or, which is the same, the product formula for its kernel. Theorem 5.1 (Product formula for w λ ). The product w λ (x) w λ (y) admits the integral representation w λ (x) w λ (y) = [a,b) w λ (ξ) (δ x * δ y )(dξ), x, y ∈ [a, b), λ ∈ C. (5.1) Here we present the proof only in the special (nondegenerate) case γ(a) > −∞. The proof of the general case is longer and relies on a different regularization argument; the details are given in [48]. f n (x, y) = w n λ (x) w n λ (y) = w λ (x) w λ (y), x, y ∈ [0, n 2 ]. It thus follows from Proposition 4.1 that is integrable near a, so that we may assume that γ(a) = 0 (otherwise, replace the interior point c by the endpoint a in the definition of the function γ). Applying the first part of the proof to the transformed operator ℓ = − 1 A d dξ (A d dξ ) defined via (2.4), we find that w λ (x) w λ (y) = [0,∞) w λ (ξ) (δ x * δ y )(dξ) for x, y ∈ [0, ∞), where w λ (ξ) := w λ (γ −1 (ξ)) and * is the convolution associated with ℓ. 
We can rewrite this as w λ (x) w λ (y) = [0,∞) w λ (ξ) ν x,y (dξ), x, y ∈ [0, n 2 ]w λ (x) w λ (y) = [a,b) w λ (ξ) γ −1 (δ γ(x) * δ γ(y) ) (dξ), x, y ∈ [a, b), λ ∈ C where the measure in the right hand side is the pushforward of the measure δ γ(x) * δ γ(y) under the map ξ → γ −1 (ξ). But one can easily check that the convolutions * and * are connected by δ x * δ y = γ −1 (δ γ(x) * δ γ(y) ) (this is a simple consequence of the definition of the convolution and the relation between the operators ℓ and ℓ), so we are done. If lim x↑b p(x)r(x) = ∞ holds (cf. Remark 4.7.III), then the following properties also hold: (c) The mapping (µ, ν) → µ * ν is continuous in the weak topology. (d) If h ∈ C b [a, b), then T µ h ∈ C b [a, b) for all µ ∈ M C [a, b). (e) If h ∈ C 0 [a, b), then T µ h ∈ C 0 [a, b) for all µ ∈ M C [a, b). Proof. (a) Using (5.1), we compute µ * ν(λ) = [a,b) w λ (x) (µ * ν)(dx) = [a,b) [a,b) [a,b) w λ (ξ) (δ x * δ y )(dξ) µ(dx)ν(dy) = [a,b) [a,b) w λ (x) w λ (y) µ(dx)ν(dy) = µ(λ) ν(λ), λ ≥ 0. This proves the "only if" part, and the converse follows from the uniqueness property in Proposition 4.6(b). (b) Due to Proposition 4.3, it only remains to prove that (µ * ν)[a, b) = 1 (µ, ν ∈ P[a, b)). But this follows at once from part (a): (µ * ν)[a, b) = µ * ν(0) = µ(0)· ν(0) = µ[a, b)·ν[a, b) = 1. (c) Since δ x * δ y (λ) = w λ (x)w λ (y), Proposition 4.6(d) yields that (x, y) → δ x * δ y is continuous in the weak topology. Therefore, for h ∈ C b [a, b) and µ n , ν n ∈ M C [a, b) with µ n w −→ µ and ν n w −→ ν we have lim n [a,b) h(ξ)(µ n * ν n )(dξ) = lim n [a,b) [a,b) [a,b) h(ξ) (δ x * δ y )(dξ) µ n (dx)ν n (dy) = [a,b) [a,b) [a,b) h(ξ) (δ x * δ y )(dξ) µ(dx)ν(dy) = [a,b) h(ξ)(µ * ν)(dξ) due to the continuity of the function in parenthesis. (d) Since (T µ h)(x) = [a,b) h(ξ) (δ x * µ)(dξ), this follows immediately from part (c) (e) It remains to show that (T µ h)(x) → 0 as x ↑ b. 
Since w_λ(x) µ̂(λ) → 0 as x ↑ b (λ > 0), it follows from Remark 4.7.II that δ_x * µ converges vaguely to 0 as x ↑ b, where 0 denotes the zero measure; this means that for each h ∈ C_0[a, b) we have (T_µ h)(x) = ∫_{[a,b)} h(ξ)(δ_x * µ)(dξ) → ∫_{[a,b)} h(ξ) 0(dξ) = 0 as x ↑ b, showing that T_µ h ∈ C_0[a, b). Harmonic analysis on L^p spaces For the remainder of this work, the coefficients of ℓ will be assumed to satisfy lim_{x↑b} p(x)r(x) = ∞ (cf. Remark 4.7.III), and Assumption MP continues to be in place. In this section, we turn our attention to the basic mapping properties of the L-translation and convolution on the Lebesgue spaces L^p(r) (1 ≤ p ≤ ∞). The first result, whose proof depends on the continuity of the mapping (µ, ν) → µ * ν, ensures that the L-translation defines a linear contraction on L^p(r): Proposition 6.1. Let 1 ≤ p ≤ ∞ and µ ∈ M_+[a, b). The L-translation (T_µ h)(x) = ∫_{[a,b)} h(ξ) (δ_x * µ)(dξ) is, for each h ∈ L^p(r), a Borel measurable function of x, and we have ‖T_µ h‖_p ≤ ‖µ‖ · ‖h‖_p for all h ∈ L^p(r) (6.1) (consequently, T_µ L^p(r) ⊂ L^p(r)). Proof. It suffices to prove the result for nonnegative h ∈ L^p(r), 1 ≤ p ≤ ∞. The map ν → µ * ν is weakly continuous (Corollary 5.2(c)) and takes M_+[a, b) into itself. According to [27, Section 2.3], this implies that, for each Borel measurable h ≥ 0, the function x → (T_µ h)(x) is Borel measurable. It follows that ∫_{[a,b)} g(x)(µ * r)(dx) := ∫_a^b (T_µ g)(x) r(x)dx (g ∈ C_c[a, b)) defines a positive Borel measure µ * r. For a ≤ c_1 < c_2 < b, let 1_{[c_1,c_2)} be the indicator function of [c_1, c_2), let h_n ∈ C^4_{c,0} be a sequence of nonnegative functions such that h_n → 1_{[c_1,c_2)} pointwise, and write C = {g ∈ C_c^∞(a, b) | 0 ≤ g ≤ 1}.
We compute (µ * r)[c 1 , c 2 ) = lim n [a,b) h n (x)(µ * r)(dx) = lim n sup g∈C b a (T µ h n )(x) g(x) r(x)dx = lim n sup g∈C [0,∞) (F h n )(λ) (F g)(λ) µ(λ) ρ L (dλ) = lim n sup g∈C b a h n (x) (T µ g)(x) r(x)dx ≤ µ ·lim n [a,b) h n (x) r(x)dx = µ · [c1,c2) r(x)dx where the third and fourth equalities follow from (4.2) and a change of order of integration, and the inequality holds because T µ g ∞ ≤ µ · g ∞ ≤ µ . Therefore, T µ h 1 = h L1([a,b),µ * r) ≤ µ · h 1 for each Borel measurable h ≥ 0. Since δ x * µ ∈ M + [a, b), Hölder's inequality yields that T µ h p ≤ µ 1/q · T µ |h| p 1/p 1 ≤ µ · h p for 1 < p < ∞. Finally, if h ∈ L ∞ (r), h ≥ 0 then h = h b + h 0 , where 0 ≤ h b ≤ h ∞ and h 0 = 0 Lebesgue-almost everywhere. Since T µ h 0 1 ≤ µ · h 0 1 = 0, we have T y h 0 = 0 Lebesgue-almost everywhere, and therefore T y h ∞ = T y h b ∞ ≤ µ · h ∞ . It is natural to define the L-convolution of functions so that the fundamental identity F (h * g) = (F h)·(F g) holds (where F denotes the L-transform (2.6)): Definition 6.2. Let h, g : [a, b) −→ C. If the integral (h * g)(x) = b a (T y h)(x) g(y) r(y)dy = b a [a,b) h(ξ) (δ x * δ y )(dξ) g(y) r(y)dy exists for almost every x ∈ [a, b), then we call it the L-convolution of the functions h and g. Proposition 6.3. If h ∈ C 4 c,0 and g ∈ L 1 (r), then F (h * g) (λ) = (F h)(λ)(F g)(λ) for all λ ≥ 0. Proof. For h ∈ C 4 c,0 and g ∈ L 1 (r) we have F (h * g) (λ) = b a b a (T ξ h)(x)g(ξ) r(ξ)dξ w λ (x)r(x)dx = b a F (T ξ h) (λ) g(ξ)r(ξ)dξ = (F h)(λ) b a g(ξ)w λ (ξ)r(ξ)dξ = (F h)(λ)(F g)(λ) where we have used Fubini's theorem and the identity (4.3). Proposition 6.4 (Young convolution inequality). Let p 1 , p 2 ∈ [1, ∞] such that 1 p1 + 1 p2 ≥ 1. For h ∈ L p1 (r) and g ∈ L p2 (r), the L-convolution h * g is well-defined and, for s ∈ [1, ∞] defined by 1 s = 1 p1 + 1 p2 −1, it satisfies h * g s ≤ h p1 g p2 (in particular, h * g ∈ L s (r)). 
Consequently, the L-convolution is a continuous bilinear operator from L p1 (r) × L p2 (r) into L s (r). The proof is given for completeness; it is analogous to that of the Young inequality for the ordinary convolution. Proof. Define 1 t1 = 1 p1 − 1 s and 1 t2 = 1 p2 − 1 s . Observe that |(T x h)(y)| |g(y)| ≤ |(T x h)(y)| p1/t1 |g(y)| p2/t2 |(T x h)(y)| p1 |g(y)| p2 1/s . Since 1 s + 1 t1 + 1 t2 = 1,1/t2 b a |(T x h)(y)| p1 |g(y)| p2 r(y)dy 1/s = h p1/t1 p1 g p2/t2 p2 b a |(T x h)(y)| p1 |g(y)| p2 r(y)dy 1/s . Using again (6.1) we conclude that h * g s ≤ h p1/t1 p1 g p2/t2 p2 h p1/s p1 g p2/s p2 = h p1 g p2 . A consequence of the Young convolution inequality is that the fundamental identity F (h * g) (λ) = (F h)(λ)(F g)(λ) (Proposition 6.3) extends, by continuity, to h ∈ L 1 (r) ∪ L 2 (r) and g ∈ L 1 (r). Another consequence is the Banach algebra property of the space L 1 (r): Corollary 6.5. The Banach space L 1 (r), equipped with the convolution multiplication h · g ≡ h * g, is a commutative Banach algebra without identity element. Proof. The Young convolution inequality shows that the L-convolution defines a binary operation on L 1 (r) for which the norm is submultiplicative. The commutativity and associativity of the L-convolution are a consequence of the identity F (h * g) = (F h)·(F g). Suppose now that there exists e ∈ L 1 (r) such that h * e = h for all h ∈ L 1 (r). Then (F h)(λ)(F e)(λ) = F (h * e) (λ) = (F h)(λ) for all h ∈ L 1 (r) and λ ≥ 0. Clearly, this implies that (F e)(λ) = 1 for all λ ≥ 0. But we know that δ a ≡ 1, so it follows from Proposition 4.6(b) that e(x)r(x)dx = δ a (dx), which is absurd. This shows that the Banach algebra has no identity element. 
7 Applications to probability theory Infinite divisibility of measures and the Lévy-Khintchine representation The set P_id of L-infinitely divisible measures (or L-infinitely divisible distributions) is defined in the obvious way: P_id = {µ ∈ P[a, b) | for all n ∈ N there exists ν_n ∈ P[a, b) such that µ = ν_n^{*n}}, where ν_n^{*n} denotes the n-fold L-convolution of ν_n with itself. It is a simple exercise to show that the L-transform of µ ∈ P_id is of the form µ̂(λ) = e^{−ψ_µ(λ)}, where ψ_µ is continuous, nonnegative and ψ_µ(0) = 0. The function ψ_µ is called the L-exponent of µ ∈ P_id. As we will see, the exponents of L-infinitely divisible measures admit a representation which is analogous to the well-known Lévy-Khintchine formula for infinitely divisible measures with respect to the ordinary Fourier transform. In the present context, the relevant notions of Poisson and Gaussian measures are defined as follows: Definition 7.1. Let µ ∈ M_+[a, b). The measure e(µ) ∈ P[a, b) defined by e(µ) = e^{−‖µ‖} Σ_{k=0}^∞ µ^{*k}/k! (the infinite sum converging in the weak topology) is said to be the L-compound Poisson measure associated with µ. The L-transform of e(µ) can be easily deduced using Corollary 5.2(a): ê(µ)(λ) = e^{−‖µ‖} Σ_{k=0}^∞ µ̂(λ)^k/k! = exp(µ̂(λ) − ‖µ‖). Since e(µ_1 + µ_2) = e(µ_1) * e(µ_2) (µ_1, µ_2 ∈ M_+[a, b)), every L-compound Poisson measure belongs to P_id. To motivate the following definition, we observe that it follows from classical results in probability theory (see e.g. [31, Theorem 16.17] and [36, §III.1]) that an infinitely divisible probability measure on R^d is Gaussian if and only if it has no nontrivial divisors of the form e(ν), where ν is a finite positive measure on R^d and e(ν) denotes the (ordinary) compound Poisson measure associated with ν. Definition 7.2. A measure µ ∈ P_id is called an L-Gaussian measure if µ = e(ν) * ϑ (ν ∈ M_+[a, b), ϑ ∈ P_id) =⇒ e(ν) = δ_a.
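Spelling out a computation implicit in Definition 7.1: since ‖µ‖ = µ[a, b) and µ̂(λ) = ∫ w_λ dµ for µ ∈ M_+[a, b), the L-exponent of an L-compound Poisson measure is precisely an integral term of the form ∫ (1 − w_λ) dµ, i.e. a "jump part" of the type appearing in the Lévy-Khintchine representation (7.1):

```latex
% L-exponent of the L-compound Poisson measure e(\mu):
\widehat{e(\mu)}(\lambda)
 = \exp\bigl(\widehat{\mu}(\lambda) - \|\mu\|\bigr)
 = \exp\Bigl(-\int_{[a,b)} \bigl(1 - w_\lambda(x)\bigr)\,\mu(dx)\Bigr),
% hence
\psi_{e(\mu)}(\lambda)
 = \int_{[a,b)} \bigl(1 - w_\lambda(x)\bigr)\,\mu(dx),
\qquad \psi_{e(\mu)}(0) = 0 .
```

(Nonnegativity of ψ_{e(µ)} follows from |w_λ(·)| ≤ 1, consistent with the general form of L-exponents described above.)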
We are now ready to state the analogue of the Lévy-Khintchine representation for infinite divisibility with respect to the L-convolution.

Theorem 7.3 (Lévy-Khintchine type formula). The L-exponent of a measure µ ∈ P_id can be represented in the form

ψ_µ(λ) = ψ_α(λ) + ∫_(a,b) (1 − w_λ(x)) ν(dx)     (7.1)

where ν is a σ-finite measure on (a, b) which is finite on the complement of any neighbourhood of a and such that ∫_(a,b) (1 − w_λ(x)) ν(dx) < ∞, and α is an L-Gaussian measure with L-exponent ψ_α(λ). Conversely, each function of the form (7.1) is an L-exponent of some µ ∈ P_id.

Proof. We only give a sketch of the proof, and refer to [56] for details. Let µ ∈ P_id, let b > a_1 > a_2 > ... with lim a_n = a, and let I_n = [a, a_n), J_n = [a_n, b). Consider the set Q of all divisors of µ of the form e(π) such that π(I_1) = 0. One can prove that the set D(P) of all divisors (with respect to the L-convolution) of measures ν ∈ P is relatively compact whenever P ⊂ P[a, b) is relatively compact (see [57, Corollary 1]); using this fact, it can be shown that

sup_{e(π) ∈ Q} ∫_[a,b) (1 − w_λ(x)) π(dx) < ∞

and, consequently, there exists a divisor µ_1 = e(π_1) ∈ Q such that π_1(J_1) is maximal among all elements of Q. Write µ = µ_1 * α_1 (α_1 ∈ P_id). Applying the same reasoning to α_1 with I_1 replaced by I_2, we get α_1 = µ_2 * α_2 = e(π_2) * α_2. If we perform this successively, we get µ = α_n * β_n, where β_n = µ_1 * µ_2 * ... * µ_n and µ_k = e(π_k) with π_k(I_k) = 0 and π_k(J_k) having the specified maximality property. The sequences {α_n} and {β_n} are relatively compact; letting α and β be limit points, we have µ = α * β (α, β ∈ P_id). Suppose, by contradiction, that α is not L-Gaussian, and let e(η), with e(η) ≠ δ_a, be a divisor of α. Clearly η(J_k) > 0 for some k; given that each α_n divides α_{n−1}, we have α_k = e(η) * ν (ν ∈ P_id).
If we let η̃ be the restriction of η to the interval J_k, then

α_{k−1} = e(π_k + η̃) * e(η − η̃) * ν,

which is absurd (because (π_k + η̃)(J_k) > π_k(J_k), contradicting the maximality property which defines π_k). To determine the L-exponent of β, note that β_n = e(Π_n) is the L-compound Poisson measure associated with Π_n := Σ_{k=1}^n π_k, thus

ψ_{β_n}(λ) = ∫_(a,b) (1 − w_λ(x)) Π_n(dx).

Since {Π_n} is an increasing sequence of measures and each e(Π_n) divides µ, there exists a σ-finite measure ν such that

ψ_β(λ) = lim_n ∫_(a,b) (1 − w_λ(x)) Π_n(dx) = ∫_(a,b) (1 − w_λ(x)) ν(dx) < ∞

(µ ∈ P_id ensures the finiteness of the integral); from the relative compactness of D({µ}) it is possible to conclude that ν(J_k) < ∞ for all k.

For the converse, let ν_n be the restriction of ν to the interval J_n defined as above. It is verified without difficulty that the right-hand side of (7.1) is continuous at zero, hence by Proposition 4.6(d) α * e(ν_n) → µ ∈ P[a, b) weakly, and µ ∈ P_id because P_id is closed under weak convergence of measures.

Convolution semigroups and their contraction properties

Definition 7.4. A family {µ_t}_{t≥0} ⊂ P[a, b) is called an L-convolution semigroup if it satisfies the conditions

• µ_s * µ_t = µ_{s+t} for all s, t ≥ 0;
• µ_0 = δ_a;
• µ_t → δ_a weakly as t ↓ 0.

A direct consequence of this definition is that

{µ_t} ↦ µ_1 ∈ P_id     (7.2)

defines a one-to-one correspondence between the set of L-convolution semigroups and the set of L-infinitely divisible measures. Indeed, if {µ_t} is an L-convolution semigroup, then it is clear that each µ_t is L-infinitely divisible; and if µ ∈ P_id has exponent ψ_µ(λ), then µ̂_t(λ) = exp(−t ψ_µ(λ)) defines the unique L-convolution semigroup such that µ_1 = µ (the proof of this is analogous to that for the classical convolution, cf. [1, Theorem 29.6]).

Proposition 7.5. Let {µ_t} be an L-convolution semigroup.
Then

(T_t h)(x) := (T_{µ_t} h)(x) = ∫_[a,b) h(ξ) (δ_x * µ_t)(dξ)

defines a strongly continuous Markovian contraction semigroup {T_t}_{t≥0} on C_0[a, b) and on the spaces L^p(r) (1 ≤ p < ∞), i.e., the following properties hold:

(i) T_t T_s = T_{t+s} for all t, s ≥ 0;
(ii) T_t C_0[a, b) ⊂ C_0[a, b) for all t ≥ 0;
(ii') T_t L^p(r) ⊂ L^p(r) for all t ≥ 0 (1 ≤ p < ∞);
(iii) T_t 1 = 1 for all t ≥ 0, and if h ∈ C_b[a, b) satisfies 0 ≤ h ≤ 1, then 0 ≤ T_t h ≤ 1;
(iv) lim_{t↓0} ||T_t h − h||_∞ = 0 for each h ∈ C_0[a, b);
(iv') lim_{t↓0} ||T_t h − h||_p = 0 for each h ∈ L^p(r) (1 ≤ p < ∞).

Moreover, {T_t} is translation-invariant: T_t T_ν f = T_ν T_t f for all t ≥ 0 and ν ∈ M_C[a, b).

Proof. Parts (ii), (ii') and (iii) follow at once from Corollary 5.2 and Proposition 6.1. Concerning part (i) and the translation invariance property, notice that by (4.3) we have

F(T_µ(T_ν h)) = µ̂ · F(T_ν h) = µ̂ · ν̂ · F h = (µ * ν)^ · F h = F(T_{µ*ν} h)  (h ∈ C^4_{c,0}),

so that T_µ(T_ν h) = T_{µ*ν} h, first for h ∈ C^4_{c,0} and then, by continuity, for h ∈ C_0[a, b) and h ∈ L^p(r) (1 ≤ p < ∞).

To prove part (iv) we just need to show that lim_{t↓0} (T_t h)(x) = h(x) for all h ∈ C_0[a, b) and x ∈ [a, b), because it is well-known from the theory of Feller semigroups that for a semigroup satisfying (ii) and (iii) this weak continuity property implies the strong continuity of the semigroup (see e.g. [9, Lemma 1.4]). But for h ∈ C_0[a, b) and x ∈ [a, b) we clearly have

lim_{t↓0} [(T_t h)(x) − h(x)] = lim_{t↓0} ∫_[a,b) [(T_y h)(x) − h(x)] µ_t(dy) = ∫_[a,b) [(T_y h)(x) − h(x)] δ_a(dy) = 0,

showing that (iv) holds. For part (iv'), let h ∈ L^p(r), ε > 0, and choose g ∈ C_c^∞(a, b) such that ||h − g||_p ≤ ε. Then it follows from (6.1) and part (iv) that

lim sup_{t↓0} ||T_t h − h||_p ≤ lim sup_{t↓0} ( ||T_t h − T_t g||_p + ||h − g||_p + ||T_t g − g||_p ) ≤ 2ε + C · lim sup_{t↓0} ||T_t g − g||_∞ = 2ε,

where C = ( ∫_{supp(g)} r(x)dx )^{1/p} (C < ∞ because the support supp(g) ⊂ (a, b) is compact). Since ε is arbitrary, (iv') holds.

The result for the space C_0[a, b) means that {T_t} is an L-translation-invariant conservative Feller semigroup. This semigroup is also symmetric with respect to the measure r(x)dx, that is,

∫_a^b (T_t h)(x) g(x) r(x)dx = ∫_a^b h(x) (T_t g)(x) r(x)dx  for h, g ∈ C_c[a, b).

Any such symmetric Feller semigroup extends to a strongly continuous Markovian contraction semigroup {T_t^{(p)}}_{t≥0} on L^p(r), 1 ≤ p < ∞ [9, Lemma 1.45]. However, the conclusion of Proposition 7.5 is stronger: it also states that the integral with respect to the Feller transition function is well-defined for all h ∈ ∪_{1≤p<∞} L^p(r) and, accordingly, the extensions T_t^{(p)} are also given by h ↦ (T_{µ_t} h)(x) = ∫_[a,b) h(ξ) (δ_x * µ_t)(dξ).

On the Hilbert space L²(r), we can take advantage of the L-transform to obtain a characterization of the generator of the L²-Markovian semigroup T_t^{(2)} ≡ T_t : L²(r) → L²(r):

Proposition 7.6. Let {µ_t} be an L-convolution semigroup with exponent ψ. Then the infinitesimal generator (A^{(2)}, D_{A^{(2)}}) of the L²-Markovian semigroup {T_t^{(2)}} is given by

F(A^{(2)} h) = −ψ · (F h),  h ∈ D_{A^{(2)}},

where

D_{A^{(2)}} = { h ∈ L²(r) : ∫_[0,∞) ψ(λ)² |(F h)(λ)|² ρ_L(dλ) < ∞ }.

Proof. We give a proof which follows closely that of the corresponding result for the ordinary convolution, as given in [4, Theorem 12.16]. Let h ∈ D_{A^{(2)}}, so that L²-lim_{t↓0} (1/t)(T_t h − h) = A^{(2)} h ∈ L²(r). Recalling that (by (4.3)) F(T_t h) = µ̂_t · (F h) = e^{−tψ} · (F h) for all h ∈ L²(r), we see that

L²-lim_{t↓0} (1/t)(e^{−tψ} − 1) · (F h) = F(A^{(2)} h).

The convergence holds almost everywhere along a sequence {t_n}_{n∈N} such that t_n → 0, so we conclude that F(A^{(2)} h) = −ψ · (F h) ∈ L²(R; ρ_L).
Conversely, if we let h ∈ L²(r) with −ψ · (F h) ∈ L²(R; ρ_L), then we have

L²-lim_{t↓0} (1/t)[F(T_t h) − F h] = −ψ · (F h) ∈ L²(R; ρ_L),

and the isometry gives that the limit L²-lim_{t↓0} (1/t)(T_t h − h) exists in L²(r), meaning that h ∈ D_{A^{(2)}}.

Additive and Lévy processes

Definition 7.7. An [a, b)-valued Markov chain {S_n}_{n∈N₀} is said to be L-additive if there exist measures µ_n ∈ P[a, b) such that

P[S_n ∈ B | S_{n−1} = x] = (µ_n * δ_x)(B),  n ∈ N, a ≤ x < b, B a Borel subset of [a, b).     (7.3)

If µ_n = µ for all n, then {S_n} is said to be an L-random walk.

An explicit construction can be given for L-additive Markov chains, based on the following lemma:

Lemma 7.8. There exists a Borel measurable function Φ : [a, b) × [a, b) × [0, 1] → [a, b) such that

(δ_x * δ_y)(B) = m{Φ(x, y, ·) ∈ B},  x, y ∈ [a, b), B a Borel subset of [a, b),

where m denotes Lebesgue measure on [0, 1].

Proof. Let Φ(x, y, ξ) = max{ a, sup{ z ∈ [a, b) : (δ_x * δ_y)[a, z] < ξ } }. Using the continuity of the L-convolution, one can show that Φ is Borel measurable, see [6, Theorem 7.1.3]. It is straightforward that

m{Φ(x, y, ·) ∈ [a, c]} = m{ ξ ∈ [0, 1] : (δ_x * δ_y)[a, c] ≥ ξ } = (δ_x * δ_y)[a, c].

Let X_1, U_1, X_2, U_2, ... be a sequence of independent random variables (on a given probability space (Ω, A, π)) where the X_n have distribution P_{X_n} = µ_n ∈ P[a, b) and each of the (auxiliary) random variables U_n has the uniform distribution on [0, 1]. Set

S_0 = a,  S_n = S_{n−1} ⊕_{U_n} X_n     (7.4)

where X ⊕_U Y := Φ(X, Y, U). Then we have P_{S_n} = P_{S_{n−1}} * µ_n (n ∈ N₀) and, consequently, {S_n}_{n∈N₀} is an L-additive Markov chain satisfying (7.3). The identity P_{S_n} = P_{S_{n−1}} * µ_n is easily checked:

P_{S_n}(B) = P[Φ(S_{n−1}, X_n, U_n) ∈ B] = ∫_[a,b) ∫_[a,b) m{Φ(x, y, ·) ∈ B} P_{S_{n−1}}(dx) P_{X_n}(dy)
= ∫_[a,b) ∫_[a,b) (δ_x * δ_y)(B) P_{S_{n−1}}(dx) P_{X_n}(dy) = (P_{S_{n−1}} * µ_n)(B).

We now define the continuous-time analogue of L-random walks:

Definition 7.9.
An [a, b)-valued Markov process Y = {Y_t}_{t≥0} is said to be an L-Lévy process if there exists an L-convolution semigroup {µ_t}_{t≥0} such that the transition probabilities of Y are given by

P[Y_t ∈ B | Y_s = x] = (µ_{t−s} * δ_x)(B),  0 ≤ s ≤ t, a ≤ x < b, B a Borel subset of [a, b).

The notion of an L-Lévy process coincides with that of a Feller process associated with the Feller semigroup T_t f = T_{µ_t} f. Consequently, the general connection between Feller semigroups and Feller processes (see e.g. [9, Section 1.2]) ensures that for each (initial) distribution ν ∈ P[a, b) and each L-convolution semigroup {µ_t}_{t≥0} there exists an L-Lévy process Y associated with {µ_t}_{t≥0} and such that P_{Y_0} = ν. Any L-Lévy process has the following properties:

• It is stochastically continuous: Y_s → Y_t in probability as s → t, for each t ≥ 0;
• It has a càdlàg modification: there exists an L-Lévy process {Ỹ_t} with a.s. right-continuous paths and satisfying P[Ỹ_t = Y_t] = 1 for all t ≥ 0.

(These properties hold for all Feller processes, cf. [9, Section 1.2].)

An analogue of the well-known theorem on approximation of Lévy processes by triangular arrays holds for L-Lévy processes (below the notation →_d stands for convergence in distribution):

Proposition 7.10. Let X be an [a, b)-valued random variable. The following assertions are equivalent:

(i) X = Y_1 for some L-Lévy process Y = {Y_t}_{t≥0};
(ii) The distribution of X is L-infinitely divisible;
(iii) S^n_{m_n} →_d X for some sequence of L-random walks S^1, S^2, ... (with S^j_0 = a) and some integers m_n → ∞.

Proof. The equivalence between (i) and (ii) is a restatement of the one-to-one correspondence (7.2) between L-infinitely divisible measures and L-convolution semigroups. It is obvious that (i) implies (iii): simply let m_n = n and let S^n be the random walk whose step distribution is the law of Y_{1/n}. Suppose that (iii) holds and let π_n, µ be the distributions of the steps of S^n and of X, respectively.
Choose ε > 0 small enough so that µ(λ) > C ε > 0 for λ ∈ [0, ε], where C ε > 0 is a constant. By (iii) and Proposition 4.6(c), π n (λ) mn → µ(λ) uniformly on compacts, which implies that π n (λ) → 1 for all λ ∈ [0, ε] and, therefore, by Proposition 4.6(d) π n w −→ δ a . Now let k ∈ N be arbitrary. Since π n w −→ δ a , we can assume that each m n is a multiple of k. Write ν n = π * (mn/k) n , so that ν * k Proof. For t ≥ 0, a ≤ x < b let us write p t,x (dy) ≡ P x [X t ∈ dy]. Recall from Lemma 2.9 that p t,x (dy) ≡ p(t, x, y)r(y)dy = [0,∞) e −tλ w λ (x) w λ (y) ρ L (dλ) r(y)dy, t > 0, a < x < b where the integral converges absolutely. Consequently, by Proposition 2.6, p t,x (λ) = e −tλ w λ (x), t ≥ 0, a ≤ x < b (the weak continuity of p t,x justifies that the equality also holds for t = 0 and for x = a). This shows that p t,x = p t,a * δ x where p t,a (λ) = e −tλ . It is clear from the properties of the L-transform that {p t,a } t≥0 is an L-convolution semigroup; therefore, X is an L-Lévy process. An L-convolution semigroup {µ t } t≥0 such that µ 1 is an L-Gaussian measure is said to be an L-Gaussian convolution semigroup, and an L-Lévy process associated with an L-Gaussian convolution semigroup is called an L-Gaussian process. It actually turns out that the diffusion X generated by (L (b) , D (iv) Y has a modification whose paths are a.s. continuous. If any of these conditions hold then the C b -generator of Y is a local operator, i.e., (Gh)(x) = (Gg)(x) whenever h, g ∈ D(G) and h = g on some neighbourhood of x ∈ [a, b). Proof. (i) =⇒ (ii): Let {t n } n∈N be a sequence such that t n → 0 as n → ∞, and let ν n = e 1 tn µ tn . We have lim n→∞ ν n (λ) = lim n→∞ exp 1 t n µ 1 (λ) tn − 1 = µ 1 (λ), λ > 0 (7.5) and therefore, by Proposition 4.6(d), ν n w −→ µ 1 as n → ∞. From this it follows, cf. [56], that if π n denotes the restriction of 1 tn µ tn to [a, b) \ V a , then {π n } is relatively compact; if π is a limit point, then e(π) is a divisor of µ 1 . 
Since µ 1 is Gaussian, e(π) = δ a , hence π must be the zero measure, showing that (ii) holds. (ii) =⇒ (i): As in (7.5), µ 1 (λ) = lim n→∞ exp 1 t n [a,b) w λ (x) − 1 µ tn (dx) = lim n→∞ exp 1 t n Va w λ (x) − 1 µ tn (dx) , λ > 0 where the second equality is due to (ii), noting that 1 tn [a,b)\Va (w λ (x) − 1)µ tn (dx) ≤ 2 t µ tn [a, b) \ V a . Given that ν n = e 1 tn µ tn w −→ µ 1 , we have (again, see [56]) µ 1 (λ) = exp (a,b) w λ (x) − 1 η(dx) , λ > 0 for some σ-finite measure η on (a, b) which, by the above, vanishes on the complement of any neighbourhood of the point a. Therefore, µ 1 is Gaussian. (ii) ⇐⇒ (iii): To prove the nontrivial direction, assume that (ii) holds, and fix x ∈ (a, b). Let V x be a neighbourhood of the point x and write E x = [a, b) \ V x . Pick a function h ∈ C 4 c,0 such that 0 ≤ h ≤ 1, h = 0 on E x and h = 1 on some smaller neighbourhood U x ⊂ V x of the point x. We begin by showing that lim y↓a 1 − (T x h)(y) 1 − w λ (y) = 0 for each λ > 0. (7.6) Indeed, it follows from Theorem 3.1 that lim y↓a (T x h)(y) = 1, lim y↓a ∂ [1] y (T x h)(y) = 0 and ℓ y (T x h)(y) = [0,∞) λ (F h)(λ) w λ (x) w λ (y) ρ L (dλ) = T x ℓ(h) (y) − −− → y↓a ℓ(h)(x) = 0, hence using L'Hôpital's rule twice we find that lim y↓a 1−(T x h)(y) 1−w λ (y) = lim y↓a ℓy(T x h)(y) λw λ (y) = 0 (λ > 0). By (7.6), for each λ > 0 there exists a λ > a such that (T x 1 Ex )(y) ≤ T x (1 − h) (y) ≤ 1 − w λ (x) for all y ∈ [a, a λ ) (here 1 Ex denotes the indicator function of E x ). We then estimate 1 t (µ t * δ x )(E x ) = 1 t [a,b) (T x 1 Ex )(y)µ t (dy) ≤ 1 t [a,a λ ) 1 − w λ (y) µ t (dy) + 1 t µ t [a λ , b) ≤ 1 t [a,b) 1 − w λ (y) µ t (dy) + 1 t µ t [a λ , b) = 1 t 1 − µ t (λ) + 1 t µ t [a λ , b). Given that we are assuming that (ii) holds and, by the L-semigroup property, lim t↓0 1 t 1 − µ t (λ) = lim t↓0 1 t 1 − µ 1 (λ) t = − log µ 1 (λ), the above inequality gives lim sup t↓0 1 t (µ t * δ x )(E x ) ≤ − log µ 1 (λ). This holds for arbitrary λ > 0. 
Since the right-hand side is continuous and vanishes for λ = 0, we conclude that lim_{t↓0} (1/t)(µ_t * δ_x)(E_x) = 0, as desired.

(iii) =⇒ (iv): This follows from a general result in the theory of Feller processes [21, Chapter 4, Proposition 2.9] according to which lim_{t↓0} (1/t) P_x[Y_t ∈ [a, b) \ V_x] = 0 is a sufficient condition for a given [a, b)-valued Feller process Y to have continuous paths.

To finish this section, it is worth mentioning that analogues of the classical limit theorems, such as laws of large numbers or central limit theorems, can be established for the L-convolution measure algebra. As in the setting of hypergroup convolution structures (cf. Example 8.5), the solutions {ϕ_k}_{k∈N} of the functional equation

(T_y ϕ_k)(x) = Σ_{j=0}^{k} (k choose j) ϕ_j(x) ϕ_{k−j}(y)  (x, y ∈ [a, b)),  ϕ_0 ≡ 1,

which are called L-moment functions, play a role similar to that of the monomials under the ordinary convolution. For the sake of illustration, let us state some strong laws of large numbers which hold true for the L-convolution: let {S_n} be an L-additive Markov chain constructed as in (7.4), and define the L-moment functions of first and second order by

ϕ_1(x) = κ η_1(x),  ϕ_2(x) = 2[κ η_2(x) + η_1(x)],

respectively, where

κ := lim_{ξ→∞} A′(ξ)/(2A(ξ)) = lim_{x↑b} [(pr)^{1/2}]′(x)/(2r(x))

and the η_j are given by (2.2). Then:

7.13.I. If {r_n}_{n∈N} is a sequence of positive numbers such that lim_n r_n = ∞ and Σ_{n=1}^∞ (1/r_n) ( E[ϕ_2(X_n)] − E[ϕ_1(X_n)]² ) < ∞, then lim_n (1/√r_n) ( ϕ_1(S_n) − E[ϕ_1(S_n)] ) = 0 π-a.s.

7.13.II. If {S_n} is an L-random walk such that E[ϕ_2(X_1)^{θ/2}] < ∞ for some 1 ≤ θ < 2, then E[ϕ_1(X_1)] < ∞ and lim_n (1/n^{1/θ}) ( ϕ_1(S_n) − n E[ϕ_1(X_1)] ) = 0 π-a.s.

7.13.III. Suppose that ϕ_1 ≡ 0. If {r_n}_{n∈N} is a sequence of positive numbers such that lim_n r_n = ∞ and Σ_{n=1}^∞ (1/r_n) E[ϕ_2(X_n)] < ∞, then lim_n (1/r_n) ϕ_2(S_n) = 0 π-a.s.

7.13.IV. Suppose that ϕ_1 ≡ 0. If {S_n} is an L-random walk such that E[ϕ_2(X_1)^θ] < ∞ for some 0 < θ < 1, then lim_n (1/n^{1/θ}) ϕ_2(S_n) = 0 π-a.s.
The above assertions can be proved exactly as in the hypergroup framework: the reader is referred to [63, Section 7].

8 Examples

We begin with two simple examples where the Sturm-Liouville operator is regular and nondegenerate, and the kernel of the L-transform can be written in terms of elementary functions.

Example 8.1 (Cosine Fourier transform). Consider the Sturm-Liouville operator

ℓ = −d²/dx²,  0 < x < ∞,

which is obtained by setting p = r = 1 and (a, b) = (0, ∞). This operator trivially satisfies Assumption MP. Since the solution of the Sturm-Liouville boundary value problem (2.1) is w_λ(x) = cos(τx) (where λ = τ²), the L-transform is simply the cosine Fourier transform

(F h)(τ) = ∫_0^∞ h(x) cos(τx) dx.

By elementary trigonometric identities,

w_τ(x) w_τ(y) = ½ [w_τ(|x − y|) + w_τ(x + y)],

hence the L-convolution is given by

δ_x * δ_y = ½ (δ_{|x−y|} + δ_{x+y}),  x, y ≥ 0.

In other words, * is (up to identification) the ordinary convolution of symmetric measures.

Example 8.2. If we let p(x) = r(x) = (1 + x)² and (a, b) = (0, ∞), we obtain the differential operator

ℓ = −d²/dx² − (2/(1 + x)) d/dx,  0 < x < ∞,

which satisfies Assumption MP with η(x) = 2/(1 + x). The function

w_λ(x) = (1/(1 + x)) [cos(τx) + (1/τ) sin(τx)] for τ > 0, and w_λ(x) = 1 for τ = 0  (λ = τ²),

is the solution of the boundary value problem (2.1), thus the L-transform can be expressed as a sum of cosine and sine Fourier transforms. A straightforward computation [63, Example 4.10] shows that the product formula w_λ(x) w_λ(y) = ∫_[a,b) w_λ d(δ_x * δ_y) holds for δ_x * δ_y defined by

(δ_x * δ_y)(dξ) = (1/(2(1 + x)(1 + y))) [ (1 + |x − y|) δ_{|x−y|}(dξ) + (1 + x + y) δ_{x+y}(dξ) + (1 + ξ) 1_{[|x−y|,x+y]}(ξ) dξ ]     (8.1)

and therefore (by the uniqueness property, Proposition 4.6(b)) the L-convolution is given by (8.1). This example, which was introduced in [63, Example 4.10], illustrates that, in general, convolutions associated with regular Sturm-Liouville operators have both a discrete and an absolutely continuous component.
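The construction (7.4) is easy to simulate in the setting of Example 8.1: since δ_x * δ_y = ½(δ_{|x−y|} + δ_{x+y}), the recursion S_n = S_{n−1} ⊕_{U_n} X_n amounts to adding or reflecting with probability ½ each, and Corollary 5.2(a) predicts E[cos(τ S_n)] = Π_k E[cos(τ X_k)]. A minimal Monte Carlo sketch; the parameters and the uniform step law are arbitrary illustrative choices:

```python
import math, random

# L-random walk (7.4) for the cosine convolution of Example 8.1:
# delta_x * delta_y = (delta_{|x-y|} + delta_{x+y})/2, so sampling one step
# means "add or reflect", each with probability 1/2.
# Corollary 5.2(a) then gives E[cos(tau*S_n)] = prod_k E[cos(tau*X_k)].

random.seed(0)
tau, n, trials = 1.0, 3, 200_000

acc = 0.0
for _ in range(trials):
    s = 0.0                       # start at the left endpoint a = 0
    for _ in range(n):
        x = random.random()       # step X ~ Uniform[0, 1]
        s = s + x if random.random() < 0.5 else abs(s - x)
    acc += math.cos(tau * s)

mc = acc / trials
target = (math.sin(tau) / tau) ** n   # E[cos(tau*X)] = sin(tau)/tau for U[0,1]
```

The Monte Carlo average agrees with the transform product up to the usual O(1/√trials) sampling error.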
Next we present the chief example of a convolution associated with a singular Sturm-Liouville operator: Example 8.3 (Hankel transform). Let α ≥ − 1 2 . The Bessel operator ℓ = − d dx 2 − 2α + 1 x d dx , 0 < x < ∞ has coefficients p(x) = r(x) = x 2α+1 . Clearly, Assumption MP holds with η = 0. Here the kernel of the L-transform is w λ (x) = J α (τ x) := 2 α Γ(α + 1)(τ x) −α J α (τ x) (λ = τ 2 ) where J α is the Bessel function of the first kind (this is easily checked using the basic properties of the Bessel function, cf. [42,Chapter 10]). The Sturm-Liouville type transform associated with the Bessel operator is the Hankel transform, (F h)(τ ) = ∞ 0 h(x) J α (τ x) x 2α+1 dx. It follows from classical integration formulae for the Bessel function [58, p. 411 ] that J α (τ x) J α (τ y) = ∞ 0 J α (τ ξ) (δ x * α δ y )(dξ), where (δ x * α δ y )(dξ) = 2 1−2α Γ(α + 1) √ π Γ(α + 1 2 ) (xyξ) −2α (ξ 2 − (x − y) 2 )((x + y) 2 − ξ 2 ) α−1/2 1 [|x−y|,x+y] (ξ) r(ξ)dξ for x, y > 0; this convolution is known as the Hankel convolution [25,13] or Kingman convolution [30,54]. This example has motivated the development of the theory of generalized translation and convolution operators back since the pioneering work of Delsarte [17]. It plays a special role in the context of the Sturm-Liouville hypergroups in Example 8.5 below; in particular, it appears as the limit distribution in central limit theorems on hypergroups [6,Section 7.5]. Moreover, since the diffusion (L-Lévy) process generated by ℓ is the Bessel process -a fundamental continuous-time stochastic process [8], which in the case α = d 2 − 1 (d ∈ N) can be defined as the radial part of a d-dimensional Brownian motion -the Hankel convolution is a useful tool for the study of the Bessel process, cf. e.g. [44,55]. The Jacobi operator provides another example of a singular Sturm-Liouville operator whose the product formula and convolution can be written in terms of standard special functions. Example 8.4 (Jacobi transform). 
The coefficients p(x) = r(x) = (sinh x) 2α+1 (cosh x) 2β+1 (α ≥ β ≥ − 1 2 , α = 1 2 ) give rise to the Jacobi operator ℓ = − d dx 2 − [(2α + 1) coth x + (2β + 1) tanh x] d dx , 0 < x < ∞. As in the previous example, Assumption MP holds with η = 0. The so-called Jacobi function w λ (x) = φ (α,β) τ (x) := 2 F 1 1 2 (σ − iτ ), 1 2 (σ + iτ ); α + 1; −(sinh x) 2 (σ = α + β + 1, λ = τ 2 + σ 2 ) where 2 F 1 denotes the hypergeometric function [42,Chapter 15], can be shown to be the unique solution of the Sturm-Liouville problem (2.1). The associated integral transform is the (Fourier-)Jacobi transform, (F h)(τ ) = ∞ 0 h(x) φ (α,β) τ (x) (sinh x) 2α+1 (cosh x) 2β+1 dx (this transformation is also known as Olevskii transform, index hypergeometric transform or, in the case α = β, generalized Mehler-Fock transform [62]). By a deep result of Koornwinder [22,32], the product formula φ (α,β) τ (x) φ (α,β) τ (y) = ∞ 0 φ (α,β) τ d(δ x * α,β δ y ) holds for the Jacobi convolution, defined by (δ x * α,β δ y )(dξ) = 2 −2σ Γ(α + 1)(cosh x cosh y cosh ξ) α−β−1 √ π Γ(α + 1 2 )(sinh x sinh y sinh ξ) 2α × × (1 − Z 2 ) α−1/2 2 F 1 α + β, α − β; α + 1 2 ; 1 2 (1 − Z) 1 [|x−y|,x+y] (ξ)r(ξ)dξ where Z := (cosh x) 2 +(cosh y) 2 +(cosh ξ) 2 −1 2 cosh x cosh y cosh ξ . For half-integer values of the parameters α, β, the Jacobi transform and convolution have various group theoretic interpretations; in particular, they are related with harmonic analysis on rank one Riemannian symmetric spaces [32]. Moreover, a remarkable property of the Jacobi transform is that it admits a positive dual convolution structure, that is, there exists a family {θ τ1,τ2 } of finite positive measures such that the dual product formula φ (α,β) τ1 (x) φ (α,β) τ2 (x) = ∞ 0 φ (α,β) τ3 (x) θ τ1,τ2 (dτ 3 ) holds, and this permits the construction of a generalized convolution which trivializes the inverse Jacobi transform [3]. 
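For half-integer parameters the kernels in Examples 8.3 and 8.4 reduce to elementary functions. In the Hankel case with α = 1/2 one has J̃_{1/2}(z) = sin(z)/z, and the convolution density of Example 8.3 simplifies (a routine computation, assumed here) to ξ/(2xy) dξ on [|x − y|, x + y], which makes the product formula easy to verify numerically:

```python
import math

# Hankel product formula (Example 8.3) for alpha = 1/2, where the kernel is
# elementary: Jt(z) := 2^a Gamma(a+1) z^{-a} J_a(z) = sin(z)/z for a = 1/2,
# and the convolution density reduces to xi/(2xy) dxi on [|x-y|, x+y].
# We check  Jt(tau*x) Jt(tau*y) = int_{|x-y|}^{x+y} Jt(tau*xi) * xi/(2xy) dxi.

def jt(z):
    return 1.0 if z == 0.0 else math.sin(z) / z

def rhs(tau, x, y, n=20000):      # composite trapezoid rule
    lo, hi = abs(x - y), x + y
    step = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        xi = lo + i * step
        w = 0.5 if i in (0, n) else 1.0
        total += w * jt(tau * xi) * xi / (2.0 * x * y)
    return total * step

checks = [(jt(tau * x) * jt(tau * y), rhs(tau, x, y))
          for tau, x, y in [(1.0, 0.7, 1.9), (2.5, 1.2, 0.4), (0.0, 1.0, 2.0)]]
# The tau = 0 entry checks that the density has total mass 1.
```

The τ = 0 case confirms that δ_x *_α δ_y is a probability measure, as guaranteed by Corollary 5.2(b).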
All the examples presented so far belong to the class of Sturm-Liouville hypergroup convolutions which was introduced by Zeuner [63] as follows: ℓ = − d 2 dx 2 − A ′ (x) A(x) d dx , 0 < x < ∞, where the function A satisfies the following conditions: SL0 A ∈ C[0, ∞) ∩ C 1 (0, ∞) and A(x) > 0 for x > 0. SL1 One of the following assertions holds: SL2 There exists η ∈ C 1 [0, ∞) such that η ≥ 0, φ η ≥ 0 and the functions φ η , ψ η are both decreasing on (0, ∞) (φ η , ψ η are defined as in Assumption MP). The last condition ensures that A satisfies Assumption MP, hence this is a particular case of the general family of Sturm-Liouville operators considered in the previous sections. It was proved by Zeuner [63] that the convolution measure algebra (M C [0, ∞), * ) is a commutative hypergroup with identity involution; this means that the Banach algebra property of Proposition 4.3 and properties (b)-(c) of Corollary 5.2 hold, as well as the following axioms: • (x, y) → supp(δ x * δ y ) is continuous from [0, ∞) × [0, ∞) into the space of compact subsets of [0, ∞) (endowed with the Michael topology, see [27]); • 0 ∈ supp(δ x * δ y ) if and only if x = y. Observe that the Sturm-Liouville operator ℓ = − d 2 dx 2 − A ′ A d dx is either singular or regular, depending on whether the function A satisfies condition SL1.1 or SL1.2. In any event, the associated hyperbolic equation ℓ x f = ℓ y f is uniformly hyperbolic on [0, ∞) 2 . The construction of the product formula and convolution presented in the previous sections generalizes that of Zeuner because it is also applicable to parabolically degenerate operators. The next example shows that the two hypergroup axioms on the (compact) support of δ x * δ y are generally false for operators associated with degenerate hyperbolic equations: Example 8.6 (Index Whittaker transform). 
The choice p(x) = x 2−2α e −1/x and r(x) = x −2α e −1/x , with α < 1 2 , leads to the normalized Whittaker operator ℓ = −x 2 d 2 dx 2 − (1 + 2(1 − α)x) d dx , 0 < x < ∞. The standard form of this differential operator (Remark 2.4) is ℓ = − d 2 dz 2 − (e −z + 1 − 2α) d dz , where z = log x ∈ R, and it is apparent that Assumption MP holds with η = 0. As pointed out in Section 3, the fact that the operator ℓ is defined on the whole real line means that the hyperbolic partial differential equation associated with the normalized Whittaker operator has a non-removable parabolic degeneracy at the initial line. The unique solution of the boundary value problem (2.1) turns out to be given by w λ (x) = W α,iτ (x) := x α e 1 2x W α,iτ ( 1 x ) λ = τ 2 + ( 1 2 − α) 2 where W α,iτ is the Whittaker function of the second kind of parameters α and iτ [42,Chapter 13]. The eigenfunction expansion of the normalized Whittaker operator yields the index Whittaker transform [50,49] (F h)(τ ) = ∞ 0 h(x)W α,iτ (x)x −2α e −1/x dx. The product formula for the kernel W α,iτ has recently been established by the authors [46,47] using techniques from classical analysis and known facts in the theory of special functions; it is given by W α,iτ (x)W α,iτ (y) = ∞ 0 W α,iτ d(δ x * α δ y ), where * α is the Whittaker convolution, defined by (δ x * α δ y )(dξ) = 2 −1−α √ π (xyξ) − 1 2 +α exp 1 x + 1 y + 1 ξ − (x + y + ξ) 2 8xyξ D 2α x + y + ξ √ 2xyξ r(ξ)dξ for x, y > 0, with D µ denoting the parabolic cylinder function [20,Chapter VIII]. Notice in particular that supp(δ x * α δ y ) = [0, ∞) for every x, y > 0. The particular case α = 0 is worthy of special mention, because in this case the index Whittaker transform reduces to (F h)(τ ) = π −1/2 ∞ 0 h(x)K iτ ( 1 2x )x −1/2 e − 1 2x dx, which is (a normalized form of) the Kontorovich-Lebedev transform; here K iτ is the modified Bessel function of the second kind with parameter iτ [42,Chapter 10]. 
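Since probability measures are closed under L-convolution (Corollary 5.2(b)), the measure δ_x *_α δ_y above must have total mass 1. For α = 0 the parabolic cylinder function reduces to D_0(z) = e^{−z²/4}, so the density becomes elementary and the normalization can be checked by quadrature; the truncation and grid below are arbitrary illustrative choices:

```python
import math

# Mass check for the Whittaker convolution with alpha = 0 (the
# Kontorovich-Lebedev case).  Using D_0(z) = exp(-z^2/4), the density of
# delta_x *_0 delta_y reduces to
#   (2*sqrt(pi*x*y*xi))^{-1} * exp(1/x + 1/y - (x+y+xi)^2/(4*x*y*xi)),
# which must integrate to 1 over (0, inf) by Corollary 5.2(b).

def density(xi, x, y):
    expo = 1.0 / x + 1.0 / y - (x + y + xi) ** 2 / (4.0 * x * y * xi)
    return math.exp(expo) / (2.0 * math.sqrt(math.pi * x * y * xi))

def mass(x, y, lo=1e-9, hi=120.0, n=120_000):   # composite trapezoid rule
    step = (hi - lo) / n
    total = 0.5 * (density(lo, x, y) + density(hi, x, y))
    for i in range(1, n):
        total += density(lo + i * step, x, y)
    return total * step

masses = [mass(1.0, 1.0), mass(0.5, 2.0), mass(3.0, 0.8)]
```

Note that, unlike in Examples 8.1 to 8.4, the support of the density is all of (0, ∞), reflecting the loss of the compact-support hypergroup axiom for this degenerate operator.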
The Kontorovich-Lebedev transform plays a central role in the theory of index type integral transforms [61]. The Whittaker convolution of parameter α = 0, which can be written in the simplified form

(δ_x *₀ δ_y)(dξ) = (1/(2√(π x y ξ))) exp( 1/x + 1/y − (x + y + ξ)²/(4xyξ) ) dξ,

is identical (up to an elementary change of variables) to the Kontorovich-Lebedev convolution, which was introduced by Kakichev in [29] and has been extensively studied, cf. [61] and references therein.

Our final example illustrates that the (degenerate) hyperbolic equation approach allows us to generalize the results on the Whittaker product formula and convolution to a much larger class of degenerate operators:

Example 8.7. Let ζ ∈ C¹(0, ∞) be a nonnegative decreasing function such that ∫_1^∞ ζ(y) dy/y = ∞, and let κ > 0. The differential expression

ℓ = −x² d²/dx² − (κ + x(1 + ζ(x))) d/dx,  0 < x < ∞,

is a particular case of (1.4), obtained by considering p(x) = x e^{−κ/x + I_ζ(x)} and r(x) = (1/x) e^{−κ/x + I_ζ(x)}, where I_ζ(x) = ∫_1^x ζ(y) dy/y. (If κ = 1 and ζ(x) ≡ 1 − 2α > 0, we recover the normalized Whittaker operator from Example 8.6.) The change of variable z = log x ∈ R transforms ℓ into the standard form ℓ = −d²/dz² − (A′(z)/A(z)) d/dz, where A′(z)/A(z) = κ e^{−z} + ζ(e^z). It is clear that ℓ satisfies Assumption MP with η = 0, and the additional assumption lim_{x↑b} p(x)r(x) = ∞ holds because I_ζ(∞) = ∞. Therefore, all the results in the previous sections hold for the Sturm-Liouville operator ℓ. This shows that the class of Sturm-Liouville operators for which one can construct a positivity-preserving convolution structure includes irregular operators which are simultaneously degenerate (in the sense that the associated hyperbolic equation is parabolic at the initial line) and singular (in the sense that the first order coefficient is unbounded near the left endpoint).

Proposition 3.8. Suppose Assumption MP holds, and let m ∈ N.
If h ∈ D and h ≥ 0, then the function f m given by(3.10) is such thatf m (x, y) ≥ 0 for x ≥ y > a m . (3.17) If, in addition, h ≤ C (where C is a constant), then f m (x, y) ≤ C for x ≥ y > a m .Proof. It follows from Proposition 3.3 that the function u m (x, y):= f m (γ −1 (x), γ −1 (y)) is a solution of the Cauchy problem ( ℓ x u m )(x, y) = ( ℓ y u m )(x, y), x, y >ã m (3.18) u m (x,ã m ) = h(γ −1 (x)), x >ã m (3.19) (∂ y u m )(x,ã m ) = 0, x >ã m(3.20) whereã m = γ(a m ). Clearly, u m satisfies the inequalities (3.16) for arbitrary x 0 ≥ y 0 ≥ã m (here c =ã m ). By Theorem 3.7, u m (x 0 , y 0 ) ≥ 0 for all x 0 ≥ y 0 >ã m ; consequently, (3.17) holds. The proof of the last statement is straightforward: if we have h ≤ C, then u m (x, y) = C − u m (x, y) is a solution of (3.18) with initial conditions u m (x,ã m ) = C − h(γ −1 (x)) ≥ 0 and (3.20), thus the reasoning of the previous paragraph yields that C − u m ≥ 0 for x ≥ y >ã m . The previous result gives the positivity-preservingness for the solution of the nondegenerate Cauchy problem (3.11). The extension of this property to the possibly degenerate problem (3.1) is an immediate consequence of the pointwise convergence result of Corollary 3.4: Corollary 3.9. Suppose Assumption MP holds. If h ∈ D coincides with the solution (3.2) of the hyperbolic Cauchy problem. Having this in mind, let us first confirm that the solution of the hyperbolic Cauchy problem can be represented as an integral with respect to a family of positive measures: Proposition 4.1. Fix x, y ∈ [a, b). There exists a subprobability measure ν x,y ∈ M + [a, b) such that, for all initial conditions h ∈ C 4 c,0 , the solution (3.2) of the hyperbolic Cauchy problem (3.1) can be written as f h (x, y) = Remark 4. 4 . 4Given a measure µ ∈ M C [a, b), it is natural to define the L-translation by µ as Proposition 4. 6 . 6The L-transform µ of µ ∈ M C [a, b) has the following properties:(a) µ is continuous on [0, ∞). 
Moreover, if a family of measures {µ j } ⊂ M C [a, b) is tight and uniformly bounded, then { µ j } is equicontinuous on [0, ∞). (b) Each measure µ ∈ M C [a, b) is uniquely determined by µ. In particular, each f ∈ L 1 (r) is uniquely determined by F f ≡ µ f , where µ f ∈ M C [a, b) is defined by µ f (dx) = f (x)r(x)dx. (c ) )If {µ n } is a sequence of measures belonging to M + [a, b), µ ∈ M + [a, b), for λ in compact sets. Remark 4.7. I. Parts (c) and (d) of the proposition above show that (whenever lim x↑b w λ (x) = 0 for all λ > 0) the L-transform possesses the following important property: the L-transform is a topological homeomorphism between P[a, b) with the weak topology and the set P of L-transforms of probability measures with the topology of uniform convergence in compact sets.II. Recall that, by definition [2, §30], the measures µ n converge vaguely to µ if lim n [a,b) g(ξ)µ n (dξ) = [a,b) g(ξ)µ(dξ) for all g ∈ C 0 [a, b). Much like weak convergence, vague convergence of measures can be formulated via the L-transform, provided that lim x↑b w λ (x) = 0 for all λ > 0. Indeed, using v −→ to denote vague convergence of measures, we have: II.1 If {µ n } ⊂ M + [a, b), µ ∈ M + [a, b), and µ n v −→ µ, then lim µ n (λ) = µ(λ) pointwise for each λ > 0; II.2 If {µ n } ⊂ M + [a, b), {µ n } is uniformly bounded and lim µ n (λ) = f (λ) pointwise in λ > 0 for some function f ∈ B b (0, ∞), then µ n v −→ µ for some measure µ ∈ M + [a, b) such that µ ≡ f . (The first part is trivial; the second follows from the reasoning in the first paragraph of the proof of (d) in the proposition above, together with the fact that any uniformly bounded sequence of positive measures contains a vaguely convergent subsequence [2, p. 213].) III. Concerning the additional assumption in the above remarks, one can state: a necessary and sufficient condition for the condition lim x↑b w λ (x) = 0 (λ > 0) to hold is that lim x↑b p(x)r(x) = ∞. 
This fact can be proved using the transformation into the standard form (Remark 2.4) and known results on the asymptotic behavior of solutions of the Sturm-Liouville equation −u″ − (A′/A)u′ = λu (see [23, proof of Lemma 3.7]).

Proof of Theorem 5.1 for the case γ(a) > −∞. Assume first that ℓ = −(1/A)(d/dx)(A(d/dx)), 0 < x < ∞, and that Assumption MP holds with a = γ(a) = 0. Fix λ ∈ C, and let {w^n_λ} be functions such that w^n_λ(x) = w_λ(x) for x ∈ [0, n] and w^n_λ(x) = 0 for x ≥ n + 1. Let f_n(x, y) be the unique solution of the hyperbolic Cauchy problem (3.1) with initial condition h(x) = w^n_λ(x). Since the family of characteristics for the hyperbolic equation (ℓ_x u)(x, y) = (ℓ_y u)(x, y) is x ± y = const., the solution f_n(x, y) depends only on the values of the initial condition on the interval [|x − y|, x + y]. Observing that the function w^n_λ(x) w^n_λ(y) is a solution of the hyperbolic equation (ℓ_x u)(x, y) = (ℓ_y u)(x, y) on the square (x, y) ∈ [0, n]², we deduce that

    w_λ(x) w_λ(y) = ∫_{[0,∞)} w_λ(ξ) ν_{x,y}(dξ) for x + y ≤ n

(note that supp(ν_{x,y}) = [|x − y|, x + y] because of the domain of dependence of the hyperbolic equation). Since n is arbitrary, the identity holds for all x, y ∈ [0, ∞), proving that the theorem holds for operators of the form ℓ = −(1/A)(d/dx)(A(d/dx)), 0 < x < ∞. Now, in the general case of an operator ℓ of the form (1.4), note that γ(a) > −∞ means that √(r(y)/p(y)) is integrable near the endpoint a …

Corollary 5.2. Let µ, ν, π ∈ M_C[a, b).

(a) We have π = µ ∗ ν if and only if π̂(λ) = µ̂(λ) ν̂(λ) for all λ ≥ 0.

(b) Probability measures are closed under L-convolution: if µ, ν ∈ P[a, b), then µ ∗ ν ∈ P[a, b).

By relative compactness of D({π_n^{∗m_n}}) (see the proof of Theorem 7.3), the sequence {ν_n}_{n∈N} has a weakly convergent subsequence, say ν_{n_j} →_w µ_k as j → ∞, and from this it clearly follows that µ_k^{∗k} = µ. Consequently, (ii) holds.

As one would expect, the diffusion process generated by the Sturm-Liouville operator (1.4) (cf. Lemma 2.8) is an L-Lévy process:

Proposition 7.11.
The irreducible diffusion process X generated by (L^(b), D^(b)_L) is an L-Lévy process.

… is an L-Gaussian process. This is a consequence of the following characterization of L-Gaussian measures:

Proposition 7.12. Let Y = {Y_t}_{t≥0} be an L-Lévy process, let {µ_t}_{t≥0} be the associated L-convolution semigroup and let (G, D(G)) be the C_b-generator of the process Y. The following conditions are equivalent:

(i) µ_1 is a Gaussian measure;
(ii) lim_{t↓0} (1/t) µ_t([a, b) \ V_a) = 0 for every neighbourhood V_a of the point a;
(iii) lim_{t↓0} (1/t) (µ_t ∗ δ_x)([a, b) \ V_x) = 0 for every x ∈ [a, b) and every neighbourhood V_x of the point x;

Example 8.5 (Sturm-Liouville hypergroups). Consider a Sturm-Liouville operator on the positive half-line with coefficients p = r = A, where either

SL1.1 A(0) = 0 and A′(x)/A(x) = α₀/x + α₁(x) for x in a neighbourhood of 0, where α₀ > 0 and α₁ ∈ C^∞(R) is an odd function; or
SL1.2 A(0) > 0 and A ∈ C¹[0, ∞).

… is a sufficient condition for a given [a, b)-valued Feller process Y to have continuous paths.

(iv) ⟹ (iii): This is a consequence of Ray's theorem for one-dimensional Markov processes, which is stated and proved in [26, Theorem 5.2.1]. Finally, it is well known that Markov processes with continuous paths have local generators (see e.g. [26, Theorem 5.1.1]), thus the last assertion holds.

Example 8.1 (Cosine Fourier transform). Consider the Sturm-Liouville operator …

Acknowledgements

The first and third authors were partly supported by CMUP (UID/MAT/00144/2019), which is funded by Fundação para a Ciência e a Tecnologia (FCT) (Portugal) with national (MCTES) and European structural funds through the programs FEDER, under the partnership agreement PT2020, and Project STRIDE (NORTE-01-0145-FEDER-000033), funded by ERDF / NORTE 2020. The first author was also supported by the grant PD/BD/135281/2017, under the FCT PhD Programme UC|UP MATH PhD Program.
The second author was partly supported by the project CEMAPRE (UID/MULTI/00491/2013), financed by FCT/MCTES through national funds.

References

[1] H. Bauer, Probability Theory, Walter de Gruyter, Berlin (1996).
[2] H. Bauer, Measure and Integration Theory, Walter de Gruyter, Berlin (2001).
[3] N. Ben Salem, Convolution semigroups and central limit theorem associated with a dual convolution structure, J. Theor. Probab. 7, no. 2, pp. 417-436 (1994).
[4] C. Berg, G. Forst, Potential Theory on Locally Compact Abelian Groups, Springer, Berlin (1975).
[5] A. V. Bitsadze, Equations of the Mixed Type, Pergamon Press, Oxford (1964).
[6] W. R. Bloom, H. Heyer, Harmonic Analysis of Probability Measures on Hypergroups, Walter de Gruyter, Berlin (1994).
[7] V. I. Bogachev, Measure Theory, Vol. II, Springer, Berlin (2007).
[8] A. N. Borodin, P. Salminen, Handbook of Brownian Motion: Facts and Formulae, Springer, Basel (2002).
[9] B. Böttcher, R. Schilling, J. Wang, Lévy-Type Processes: Construction, Approximation and Sample Path Properties, Springer Lecture Notes in Mathematics, vol. 2099 (vol. III of the "Lévy Matters" subseries), Springer, Berlin (2014).
[10] R. W. Carroll, Transmutation theory and applications, North-Holland, Amsterdam (1985).
[11] H. Chebli, Opérateurs de translation généralisée et semi-groupes de convolution, in: Théorie du Potentiel et Analyse Harmonique (J. Faraut, editor), Springer, Berlin, pp. 35-59 (1974).
[12] H. Chebli, Sturm-Liouville Hypergroups, in: Applications of hypergroups and related measure algebras: A joint summer research conference on applications of hypergroups and related measure algebras, July 31-August 6, 1993, Seattle, WA, American Mathematical Society, Providence RI, pp. 71-88 (1995).
[13] F. M. Cholewinski, A Hankel convolution complex inversion theory, Mem. Amer. Math. Soc. no. 58, American Mathematical Society (1965).
[14] D. L. Cohn, Measure Theory, Second Edition, Birkhäuser, New York (2013).
[15] W. C. Connett, C. Markett, A. L. Schwartz, Convolution and hypergroup structures associated with a class of Sturm-Liouville systems, Trans. Amer. Math. Soc. 332, no. 1, pp. 365-390 (1992).
[16] R. Courant, Methods of Mathematical Physics, Vol. II: Partial Differential Equations, Wiley, New York (1962).
[17] J. Delsarte, Sur une extension de la formule de Taylor, J. Math. Pures Appl. 17, pp. 213-231 (1938).
[18] N. Dunford, J. T. Schwartz, Linear Operators, Part II: Spectral Theory, Wiley, New York (1963).
[19] J. Eckhardt, G. Teschl, Sturm-Liouville operators with measure-valued coefficients, J. Anal. Math. 120, pp. 151-224 (2013).
[20] A. Erdélyi, W. Magnus, F. Oberhettinger, F. G. Tricomi, Higher Transcendental Functions, Vol. II, McGraw-Hill, New York (1953).
[21] S. N. Ethier, T. G. Kurtz, Markov Processes: Characterization and Convergence, Wiley, New York (1986).
[22] M. Flensted-Jensen, T. H. Koornwinder, The convolution structure for Jacobi function expansions, Ark. Mat. 11, pp. 245-262 (1973).
[23] F. Fruchtl, Sturm-Liouville hypergroups and asymptotics, Monatsh. Math. 186, no. 1, pp. 11-36 (2018).
[24] M. Fukushima, On general boundary conditions for one-dimensional diffusions with symmetry, J. Math. Soc. Japan 66, no. 1, pp. 289-316 (2014).
[25] I. I. Hirschman, Variation diminishing Hankel transforms, J. Analyse Math. 8, pp. 307-336 (1960).
[26] K. Itô, Essentials of Stochastic Processes, American Mathematical Society, Providence RI (2006).
[27] R. I. Jewett, Spaces with an Abstract Convolution of Measures, Adv. Math. 18, no. 1, pp. 1-101 (1975).
[28] I. S. Kac, The existence of spectral functions of generalized second order differential systems with boundary conditions at the singular end, Amer. Math. Soc. Transl. (2) 62, pp. 204-262 (1967).
[29] V. A. Kakichev, On the convolution for integral transforms, Izv. Vyssh. Uchebn. Zaved. Mat. 2, pp. 53-62 (1967) (in Russian).
[30] J. F. C. Kingman, Random walks with spherical symmetry, Acta Math. 109, pp. 11-53 (1963).
[31] A. Klenke, Probability Theory: A Comprehensive Course, Second Edition, Springer, London (2014).
[32] T. H. Koornwinder, Jacobi functions and analysis on noncompact semisimple Lie groups, in: Special functions: group theoretical aspects and applications (R. A. Askey, T. H. Koornwinder, W. Schempp, editors), Reidel, Dordrecht, pp. 1-85 (1984).
[33] B. M. Levitan, Die Verallgemeinerung der Operation der Verschiebung im Zusammenhang mit fastperiodischen Funktionen, Mat. Sb. 7, no. 49, pp. 449-478 (1940).
[34] B. M. Levitan, On a class of solutions of the Kolmogorov-Smoluchowski equation, Vestnik Leningrad. Univ. 15, no. 7, pp. 81-115 (1960) (in Russian).
[35] V. Linetsky, The spectral decomposition of the option value, Int. J. Theor. Appl. Finance 7, no. 3, pp. 337-384 (2004).
[36] J. V. Linnik, I. V. Ostrovskiǐ, Decomposition of Random Variables and Vectors, American Mathematical Society, Providence RI (1977).
[37] G. L. Litvinov, Hypergroups and hypergroup algebras, J. Soviet Math. 38, no. 2, pp. 1734-1761 (1987).
[38] N. K. Mamadaliev, On representation of a solution to a modified Cauchy problem, Sib. Math. J. 41, no. 5, pp. 889-899 (2000).
[39] P. Mandl, Analytical Treatment of One-dimensional Markov Processes, Springer, Berlin (1968).
[40] H. McKean, Elementary solutions for certain parabolic partial differential equations, Trans. Amer. Math. Soc. 82, pp. 519-548 (1956).
[41] M. A. Naimark, Linear Differential Operators, Part II: Linear differential operators in Hilbert space, Frederick Ungar Publishing Co., New York (1968).
[42] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, C. W. Clark (editors), NIST Handbook of Mathematical Functions, Cambridge University Press, Cambridge (2010).
[43] E. V. Radkevich, Equations with nonnegative characteristic form II, J. Math. Sci. 158, pp. 453-604 (2009).
[44] C. Rentzsch, M. Voit, Lévy Processes on Commutative Hypergroups, in: Probability on Algebraic Structures: AMS Special Session on Probability on Algebraic Structures, March 12-13, 1999, Gainesville, Florida, American Mathematical Society, Providence RI, pp. 83-105 (2000).
[45] M. Rösler, Convolution algebras which are not necessarily positivity-preserving, in: Applications of hypergroups and related measure algebras: A joint summer research conference on applications of hypergroups and related measure algebras, July 31-August 6, 1993, Seattle, WA, American Mathematical Society, Providence RI, pp. 71-88 (1995).
[46] R. Sousa, M. Guerra, S. Yakubovich, On the product formula and convolution associated with the index Whittaker transform, preprint, arXiv:1802.06657 (2018).
[47] R. Sousa, M. Guerra, S. Yakubovich, Lévy processes with respect to the index Whittaker convolution, preprint, arXiv:1805.03051 (2018).
[48] R. Sousa, M. Guerra, S. Yakubovich, Sturm-Liouville hypergroups without the compactness axiom, preprint (2019).
[49] R. Sousa, S. Yakubovich, The spectral expansion approach to index transforms and connections with the theory of diffusion processes, Commun. Pure Appl. Anal. 17, no. 6, pp. 2351-2378 (2018).
[50] H. M. Srivastava, Y. V. Vasil'ev, S. Yakubovich, A class of index transforms with Whittaker's function as the kernel, Quart. J. Math. Oxford 49, no. 2, pp. 375-394 (1998).
[51] G. Teschl, Mathematical Methods in Quantum Mechanics, with Applications to Schrödinger Operators, Second Edition, American Mathematical Society, Providence RI (2014).
[52] E. C. Titchmarsh, Eigenfunction Expansions Associated with Second-Order Differential Equations, Second Edition, Oxford University Press, Oxford (1962).
[53] K. Urbanik, Generalized convolutions, Studia Math. 23, pp. 217-245 (1964).
[54] K. Urbanik, Analytical Methods in Probability Theory, in: Transactions of the Tenth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, Vol. A, Reidel, Dordrecht, pp. 151-163 (1988).
[55] N. Van Thu, S. Ogawa, M. Yamazato, A convolution approach to multivariate Bessel processes, in: Stochastic processes and applications to mathematical finance, Proceedings of the 6th Ritsumeikan International Symposium, World Scientific, Singapore, pp. 233-244 (2007).
[56] V. E. Volkovich, Infinitely divisible distributions in algebras with stochastic convolution, J. Soviet Math. 40, no. 4, pp. 459-467 (1988).
[57] V. E. Volkovich, On Symmetric Stochastic Convolutions, J. Theor. Probab. 5, no. 3, pp. 417-430 (1992).
[58] G. N. Watson, A Treatise on the Theory of Bessel Functions, Second Edition, Cambridge University Press, Cambridge (1944).
[59] J. Weidmann, Spectral Theory of Ordinary Differential Operators, Springer, Berlin (1987).
[60] H. Weinberger, A maximum property of Cauchy's problem, Ann. Math. 64, no. 2, pp. 505-513 (1956).
[61] S. Yakubovich, Index Transforms, World Scientific, Singapore (1996).
[62] S. Yakubovich, On the Plancherel theorem for the Olevskii transform, Acta Math. Vietnam. 31, no. 3, pp. 249-260 (2006).
[63] H. Zeuner, Moment functions and laws of large numbers on hypergroups, Math. Z. 211, no. 1, pp. 369-407 (1992).
DOI: 10.22323/1.139.0268
arXiv: 1111.3897
Absolute X-distribution and self-duality

Andrei Alexandru (The George Washington University, Washington, DC, USA)
Ivan Horváth (University of Kentucky, Lexington, KY, USA; [email protected])
*Speaker.

XXIX International Symposium on Lattice Field Theory, July 10-16, 2011, Squaw Valley, Lake Tahoe, California
16 Nov 2011

Various models of QCD vacuum predict that it is dominated by excitations that are predominantly self-dual or anti-self-dual. In this work we look at the tendency for self-duality in the case of pure-glue SU(3) gauge theory using the overlap-based definition of the field-strength tensor. To gauge this property, we use the absolute X-distribution method which is designed to quantify the dynamical tendency for polarization for arbitrary random variables that can be decomposed in a pair of orthogonal subspaces.

1. Motivation

Various models of QCD vacuum use semi-classical arguments to describe the mechanism responsible for confinement or chiral-symmetry breaking. The semi-classical arguments start by expanding the QCD partition function around extremal points of the action, i.e.,

    ⟨Ω| e^{−Hτ} |Ω⟩ ≈ e^{−S_cl} ∫ Dx(τ) exp[ −(1/2) δx (δ²S/δx²)|_{x_cl} δx + ⋯ ].   (1.1)

The first task is then to find the extremal points of the action and then take into account gaussian fluctuations around these extrema. The action for pure-glue QCD can be expressed in terms of the self-dual and anti-self-dual components of the field strength tensor,

    S = (1/4g²) ∫ d⁴x F^a_{µν} F^a_{µν} = (1/4g²) ∫ d⁴x [ ±F^a_{µν} F̃^a_{µν} + (1/2)(F^a_{µν} ∓ F̃^a_{µν})² ].   (1.2)

The integral of the F^a_{µν} F̃^a_{µν} term is a boundary term that is related to the topological charge of the configuration. If we keep the boundary values fixed, the integral is minimized when the quantity in the parenthesis vanishes. This happens when the field is self-dual, F^a_{µν} = F̃^a_{µν}, or anti-self-dual, F^a_{µν} = −F̃^a_{µν}.
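To make the decomposition in Eq. (1.2) concrete, the following NumPy sketch (an illustration, not code from the paper) checks the underlying algebraic identity for a single generic antisymmetric tensor F_{µν} in Euclidean signature, with the color index suppressed:

```python
import numpy as np
from itertools import permutations

# 4D Levi-Civita symbol eps[mu, nu, alpha, beta]
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    inversions = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    eps[perm] = (-1) ** inversions

def dual(F):
    """Dual field strength: Ftilde_{mu nu} = (1/2) eps_{mu nu alpha beta} F_{alpha beta}."""
    return 0.5 * np.einsum('mnab,ab->mn', eps, F)

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
F = A - A.T                  # a generic antisymmetric "field strength" at one point
Ftilde = dual(F)

# In Euclidean signature the dual is an involution and preserves the norm:
assert np.allclose(dual(Ftilde), F)
assert np.isclose(np.sum(F * F), np.sum(Ftilde * Ftilde))

# The decomposition of the action density, Eq. (1.2) (upper sign):
lhs = np.sum(F * F)
rhs = np.sum(F * Ftilde) + 0.5 * np.sum((F - Ftilde) ** 2)
assert np.isclose(lhs, rhs)

# Self-dual / anti-self-dual projections, cf. Eq. (3.4) below:
FS = 0.5 * (F + Ftilde)
FA = 0.5 * (F - Ftilde)
assert np.allclose(dual(FS), FS) and np.allclose(dual(FA), -FA)
```

The identity holds pointwise because in Euclidean space the duality operation is an involution that preserves F_{µν}F_{µν}, so F_{µν}F̃_{µν} + (1/2)(F_{µν} − F̃_{µν})² = F_{µν}F_{µν}.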
A more sophisticated analysis leads to the conclusion that all the extremal points of the classical action that are not saddle points satisfy this condition [1]. It is then natural to expect that if the QCD vacuum is correctly described by a semi-classical model, the field strength in a typical lattice QCD ensemble will exhibit a high degree of self-duality. To gauge this tendency we decompose the field strength at every point on the lattice into its self-dual components and analyze their polarization properties. To do this, we use the method of absolute X-distribution, designed to analyze the dynamical aspects of polarization [2]. A more detailed account of this work is given in Ref. [3].

2. Dynamical polarization

We start by reviewing the method of absolute X-distribution. A first version of this approach was introduced in a study of the local chirality of the low-lying eigenmodes of the Dirac operator [4]. In general, for an arbitrary observable that can be split into two components, Q = Q₁ + Q₂, we say that Q is polarized when it tends to be aligned with either one of the components. More precisely, if we look at the magnitudes of the components, q_i = ‖Q_i‖, we tend to think that the observable Q is polarized when the probability distribution P_b(q₁, q₂), with support in the positive quadrant of the q₁q₂-plane, is peaked in the vicinity of the q₁, q₂ axes. The raw distribution P_b(q₁, q₂) is difficult to characterize. A more direct measure is offered by the induced distribution of the polarization angle. In Fig. 1 we plot the raw distribution of chirality components as determined in a previous study [2] and the corresponding polarization angle distribution (the curve indicated by α = 1), which we call the X-distribution. We see that the X-distribution tends to be concentrated towards the middle of the graph, suggesting an antipolarization tendency. A more careful analysis reveals that the conclusions based on this method can be misleading.
The X-distribution is determined by the choice of parametrization for the angles measured in the q₁q₂-plane. The definition we used to plot Fig. 1 is

    x = (4/π) arctan(‖Q₂‖/‖Q₁‖) − 1.   (2.1)

We will refer to this choice as the reference polarization [4]. However, this choice is not unique, and alternative definitions were used in various studies. Using t ≡ ‖Q₂‖/‖Q₁‖, one class of valid angle variables is given by a generalization of the above definition,

    x̃ = (4/π) arctan(t^α) − 1,   (2.2)

where α > 0 is an arbitrary parameter [2]. For α = 1 the angle parameter x̃ is the reference polarization defined above, while the definition based on x̃ with α = 2 was used in a study of self-duality in pure gauge QCD [5]. In the right panel of Fig. 1 we compare the X-distribution for the ensemble shown in the left panel, measured using the reference polarization and the polarization defined by x̃ with α = 4. The qualitative behavior of the distribution changes dramatically, while the dynamics producing the original distribution is unchanged. It is clear then that conclusions based on X-distributions alone cannot be trusted.

To address this problem we define the absolute X-distribution, a measure of the pair correlation induced by the underlying dynamics [2, 6]. The basic idea is to compare the correlated distribution P_b(q₁, q₂) with a similar distribution where the components are statistically independent, to isolate the effect of the dynamics. The uncorrelated distribution is constructed from the marginal distributions

    P₁(q₁) = ∫ dq₂ P_b(q₁, q₂),  P₂(q₂) = ∫ dq₁ P_b(q₁, q₂).   (2.3)

For our application, symmetry guarantees that P₁ = P₂. The uncorrelated distribution is P_u(q₁, q₂) ≡ P₁(q₁) P₂(q₂). We define an angle variable that has constant angular density for the uncorrelated distribution; this is the absolute polarization. The histogram of this angle variable for the uncorrelated distribution is flat.
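As a toy illustration of these definitions (not code from the paper, and with arbitrary half-normal marginals standing in for the measured ‖Q_i‖), the following NumPy sketch shows that the naive X-distribution changes shape with the parameter α of Eq. (2.2), while an empirical stand-in for the absolute polarization, built by mapping the angle through the cumulative distribution of the uncorrelated reference, yields a flat histogram and a correlation coefficient C_A ≈ 0 for statistically independent components:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical stand-ins for the component magnitudes q_i = ||Q_i||:
# statistically independent samples, i.e. no dynamical correlation.
q1 = np.abs(rng.normal(size=n))
q2 = np.abs(rng.normal(size=n))

def x_alpha(q1, q2, alpha=1.0):
    """Polarization angle of Eq. (2.2); alpha = 1 gives the reference polarization (2.1)."""
    t = q2 / q1
    return (4.0 / np.pi) * np.arctan(t**alpha) - 1.0

# The shape of the naive X-distribution depends on alpha even though the
# underlying statistics is unchanged:
h1, _ = np.histogram(x_alpha(q1, q2, alpha=1.0), bins=20, range=(-1, 1), density=True)
h4, _ = np.histogram(x_alpha(q1, q2, alpha=4.0), bins=20, range=(-1, 1), density=True)

# Empirical absolute polarization: map the angle through the cumulative
# distribution of the uncorrelated reference (built here by randomly
# re-pairing the components), rescaled to [-1, 1].  For the uncorrelated
# distribution itself the resulting histogram is flat by construction.
ref = np.sort(x_alpha(q1, rng.permutation(q2)))

def absolute_x(x, ref):
    return 2.0 * np.searchsorted(ref, x) / len(ref) - 1.0

x_abs = absolute_x(x_alpha(q1, q2), ref)
Gamma = np.mean(np.abs(x_abs))   # estimate of Eq. (2.4)
C_A = 2.0 * Gamma - 1.0          # close to 0 for uncorrelated components
```

Here the re-paired sample plays the role of P_u = P₁P₂; for real data one would construct P_u from the measured marginals of Eq. (2.3).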
In our figures this is indicated by a horizontal dashed line. The X-distribution in terms of the absolute polarization is the absolute X-distribution.

In the left panel of Fig. 2 we present the X-distribution for the reference polarization for both the correlated distribution, P_b, and the uncorrelated one, P_u, for the ensemble presented in Fig. 1. Notice that these two distributions are almost identical, indicating that there is little dynamical correlation. In the right panel we plot the absolute polarization histogram, which is almost flat. There is a small enhancement towards the edges, indicating that the dynamics induces a slight polarization. This is consistent with the plots in the right panel, where we see that the uncorrelated distribution is more prominent towards the center of the histogram.

Based on the absolute polarization distribution, P_A(x), we construct a more compact measure of the polarization tendency, the correlation coefficient C_A = 2Γ − 1, where

    Γ = ∫_{−1}^{1} dx P_A(x) |x|.   (2.4)

The coefficient Γ measures the probability that a sample drawn from the distribution P_b is more polarized than one drawn from P_u. When we have no dynamical correlation this probability is 0.5; the correlation coefficient is scaled such that C_A = 0 in this case.

3. Field strength definition

In this study, we will use a definition of the field strength based on the overlap operator. Compared to the ultra-local definitions, the overlap definition is less susceptible to ultra-violet fluctuations, so no arbitrary link smearing or cooling is needed. Moreover, this definition provides a natural expansion in terms of eigenmodes of the Dirac operator, which allows us to define a smoothed version of the field strength tensor controlled by the value of the eigenvalue cutoff. If we denote by S_F = ψ̄ D(x, y) ψ the fermionic contribution to the action in the overlap formulation, it is easy to show that tr_s σ_{µν} D(x, x) has the same quantum numbers as the field strength F_{µν} [7].
Here tr s denotes the trace over the spinor index. It was shown by explicit calculation that on smooth fields in the limit a → 0 these definitions agree [8,9], tr s σ µν D(x, x) = c T F µν (x) + O(a 4 ) . (3.1) Above, c T is a constant that depends on the kernel used to define the overlap operator. The lattice version of the field strength operator used in this study is F ov µν (x) ≡ 1 c T tr s σ µν D(x, x) = − 1 c T tr s σ µν [2ρ − D(x, x)] ,(3.2) where 2ρ is the largest eigenvalue of D, the eigenvalue associated with the zero modes' partners. We used the fact that tr s σ µν = 0 to cast the definition in a form useful for eigenmode expansion. Using the expansion in terms of the eigenmodes of the Dirac operator, we define the smoothed version of the field strength [2] F Λ µν (x) ≡ − 1 c T ∑ |λ |<Λa tr s σ µν (2ρ − λ )ψ λ (x)ψ λ (x) † . (3.3) This definition has the property that lim Λ→∞ F Λ = F and that the contribution of the largest eigenmodes is suppressed. The self-dual and anti-self-dual parts of the field strength are defined using the dual of the field strengthF µ,ν = 1 2 ε µναβ F αβ F S = 1 2 (F +F) F A = 1 2 (F −F) . (3.4) Numerical results For our study we used a set of pure-glue ensembles generated using Iwasaki action [10]. The parameters for these ensembles are presented in Table 1. To study the continuum limit we have a set of 5 ensembles with the same volume. To determine the finite volume effects we also generated one ensemble with a larger volume. In Fig. 3 we plot the histogram for the absolute polarization for all ensembles with volume (1.32 fm) 4 . We find a small tendency for polarization that decreases as we make the lattice spacing smaller. To understand whether this tendency survives the continuum limit, we compute the correlation coefficient and fit it with a quadratic polynomial in a. As we can see from the right panel Table 1: The size and lattice spacing for the ensembles used in this study. 
Figure 3: Left: absolute X-distribution for self-duality components. Note that the y-scale is magnified to better show the difference between different lattice spacings. Right: the correlation coefficient as a function of the lattice spacing and its continuum limit extrapolation. Error bars are present in these plots but they are smaller than the symbol size. of Fig. 3 the polynomial fits the data well. The coefficient remains positive in the continuum limit, indicating a very small tendency for polarization. The probability that the sample drawn from the correlated distribution is more polarized than one drawn from the uncorrelated distribution is 51% compared to 50% when the dynamics would produce no correlation. To gauge the size of the finite volume effects, we compute the absolute polarization on two ensembles with the same lattice spacings but different volumes. Referring to Table 1, these are ensembles E 4 and E 6 . In the left panel of Fig. 4 we compare the absolute polarizations on these two ensembles. We find no difference between the two histograms and we conclude that the finite volume effects are negligible. We also computed a set of eigenmodes of the overlap Dirac operators on ensembles E 2 , E 3 and E 4 and used them to compute the smoothed field strength operator F Λ . To study the continuum limit, a consistent definition of the smoothed operator sums over all modes smaller than a physical 100 % 50 % 10 % Figure 5: Left: X-distribution for self-duality components of a smooth field strength based on the low-lying modes of the chirally-improved Dirac operator [5]. The curved marked with 100% is the relevant one for our comparison. Right: absolute X-distribution P A and X-distribution P r based on two different polarization variables (see Eq. 2.2) for ensemble E 2 . cutoff. We set the cutoff Λ = 1000 MeV and found that the behavior of the absolute X-distribution is similar to the full version of the operator. In the right panel of Fig. 
4 we compare the correlation coefficient with the one computed using the full operator. We find that, while the values of the correlation coefficient are slightly different, the qualitative behavior remains the same. We conclude our discussion with a comparison with a similar work by Gattringer [5], who studied the self-duality polarization using a smoothed field-strength operator. This operator was constructed using an eigenmode expansion of the chirally-improved Dirac operator. In Ref. [5] it was found that the self-duality exhibits a strong polarization (see the left panel of Fig. 5), supporting a model of the vacuum dominated by topological "lumps". In contrast, we only find a mild dynamical tendency for polarization. This is seen in the right panel of Fig. 5, where we plot the absolute X-distribution of ensemble $E_2$, which is similar to the ensemble used in Ref. [5]. The discrepancy is due to the fact that Ref. [5] uses a polarization measure dominated by kinematical effects. To show this, in the right panel of Fig. 5 we also plot the X-distribution measured using the reference polarization, α = 1, and the polarization angle used in Ref. [5], α = 2. To better compare our results, for these plots we used, as in the referenced study, a smoothed $F^\Lambda$ constructed using the same number of modes. We see then that, when using the same angle definition, our results are consistent with those of Ref. [5]. However, using another valid angle parametrization produces qualitatively different results due to kinematical effects. We conclude that the strong polarization observed in Ref. [5] is mainly due to the specific choice of angle variable rather than to the underlying dynamics.

Conclusions

In this work we studied the dynamical polarization properties of the self-duality components induced by pure-glue QCD dynamics. We found a very mild polarization tendency that survives in the continuum limit. This result has negligible finite-volume corrections.
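The X-distribution construction compared above can be sketched numerically. The snippet below is a toy illustration, not the analysis code of Ref. [2]: it assumes a simple polarization coordinate x = (4/π)·arctan(|b|/|a|) − 1 for a pair of components (a, b), and contrasts a mildly correlated sample with an uncorrelated reference obtained by destroying the pairing (all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def x_coordinate(a, b):
    """Toy polarization coordinate in [-1, 1] for component pairs (a, b):
    x = -1 when all weight sits in a, x = +1 when all weight sits in b."""
    return 4.0 / np.pi * np.arctan(np.abs(b) / np.abs(a)) - 1.0

n = 100_000
# "Correlated" sample: b partly follows a, mimicking a dynamical tendency.
a = rng.normal(size=n)
b = 0.3 * a + rng.normal(scale=0.95, size=n)
x_corr = x_coordinate(a, b)

# Uncorrelated reference: identical marginals, pairing destroyed by shuffling.
x_ref = x_coordinate(a, rng.permutation(b))

# Histograms playing the role of the X-distributions compared in Fig. 2.
hist_corr, edges = np.histogram(x_corr, bins=20, range=(-1, 1), density=True)
hist_ref, _ = np.histogram(x_ref, bins=20, range=(-1, 1), density=True)
```

Comparing `hist_corr` against `hist_ref` isolates the dynamical tendency from the purely kinematical shape of the distribution, which is the role the uncorrelated reference plays in the discussion above.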
The self-duality tendency is very small, making it unlikely that the vacuum fluctuations are well described by semi-classical models. Our findings are at variance with the results of a previous study [5]. We conclude that the discrepancy is the result of kinematical effects.

Figure 1: Left: sample pair distribution generated by chirality components of the lowest eigenmodes of ensemble $E_1$, from [2]. Right: the associated X-distribution, i.e., the induced distribution of the polarization angle.

Figure 2: X-distribution using the reference polarization for the correlated and uncorrelated distributions (left) and the absolute X-distribution (right).

Figure 4: Left: absolute X-distribution for ensemble $E_4$ (circles) and $E_6$ (crosses), which have the same lattice spacing but different volume. Right: correlation coefficient for the smoothed field strength (diamonds) compared to the full version (circles). Error bars are included in both plots.

Acknowledgments: Andrei Alexandru is supported in part under DOE grant DE-FG02-95ER-40907. The computational resources for this project were provided in part by the George Washington University IMPACT initiative. Ivan Horváth acknowledges the warm hospitality of the BNL Theory Group, during which part of this work has been completed.

References

[1] T. Schafer and E. V. Shuryak, Instantons in QCD, Rev. Mod. Phys. 70 (1998) 323-426, [hep-ph/9610451].
[2] A. Alexandru, T. Draper, I. Horváth, and T. Streuer, The Analysis of Space-Time Structure in QCD Vacuum II: Dynamics of Polarization and Absolute X-Distribution, Annals of Physics 326 (2011) 1941-1971, [arXiv:1009.4451].
[3] A. Alexandru and I. Horváth, How Self-Dual is QCD?, arXiv:1110.2762.
[4] I. Horváth, N. Isgur, J. McCune, and H. B. Thacker, Evidence against instanton dominance of topological charge fluctuations in QCD, Phys. Rev. D65 (2002) 014502, [hep-lat/0102003].
[5] C. Gattringer, Testing the self-duality of topological lumps in SU(3) lattice gauge theory, Phys. Rev. Lett. 88 (2002) 221601, [hep-lat/0202002].
[6] T. Draper, A. Alexandru, Y. Chen, S.-J. Dong, I. Horváth, et al., Improved measure of local chirality, Nucl. Phys. Proc. Suppl. 140 (2005) 623-625, [hep-lat/0408006].
[7] I. Horváth, A Framework for Systematic Study of QCD Vacuum Structure II: Coherent Lattice QCD, hep-lat/0607031.
[8] K. Liu, A. Alexandru, and I. Horváth, Gauge field strength tensor from the overlap Dirac operator, Phys. Lett. B659 (2008) 773-782, [hep-lat/0703010].
[9] A. Alexandru, I. Horváth, and K.-F. Liu, Classical Limits of Scalar and Tensor Gauge Operators Based on the Overlap Dirac Matrix, Phys. Rev. D78 (2008) 085002, [arXiv:0803.2744].
[10] CP-PACS Collaboration, M. Okamoto et al., Equation of state for pure SU(3) gauge theory with renormalization group improved action, Phys. Rev. D60 (1999) 094510, [hep-lat/9905005].
Title: Spatial deconvolution of spectropolarimetric data: an application to quiet Sun magnetic elements

Authors: C. Quintero Noda, A. Asensio Ramos, D. Orozco Suárez, and B. Ruiz Cobo
Affiliations: Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain; Departamento de Astrofísica, Univ. de La Laguna, E-38205 La Laguna, Tenerife, Spain

Abstract: Context. One of the difficulties in extracting reliable information about the thermodynamical and magnetic properties of solar plasmas from spectropolarimetric observations is the presence of light dispersed inside the instruments, known as stray light. Aims. We aim to analyze quiet Sun observations after the spatial deconvolution of the data. We examine the validity of the deconvolution process with noisy data as we analyze the physical properties of quiet Sun magnetic elements. Methods. We used a regularization method that decouples the Stokes inversion from the deconvolution process, so that large maps can be quickly inverted without much additional computational burden. We applied the method on Hinode quiet Sun spectropolarimetric data. We examined the spatial and polarimetric properties of the deconvolved profiles, comparing them with the original data. After that, we inverted the Stokes profiles using the Stokes Inversion based on Response functions (SIR) code, which allows us to obtain the optical depth dependence of the atmospheric physical parameters. Results. The deconvolution process increases the contrast of continuum images and makes the magnetic structures sharper. The deconvolved Stokes I profiles reveal the presence of the Zeeman splitting while the Stokes V profiles significantly change their amplitude. The area and amplitude asymmetries of these profiles increase in absolute value after the deconvolution process. We inverted the original Stokes profiles from a magnetic element and found that the magnetic field intensity reproduces the overall behavior of theoretical magnetic flux tubes, that is, the magnetic field lines are vertical in the center of the structure and start to fan when we move far away from the center of the magnetic element. The magnetic field vector inferred from the deconvolved Stokes profiles also mimics a magnetic flux tube, but in this case we found stronger field strengths, and the gradients along the line-of-sight are larger for the magnetic field intensity and for its inclination. Moreover, the discontinuity between the magnetic and non-magnetic environment in the flux tube gets sharper. Conclusions. The deconvolution process used in this paper reveals information that the smearing induced by the point spread function (PSF) of the telescope hides. Additionally, the deconvolution is done with a low computational load, making it appealing for its use in the analysis of large data sets.

DOI: 10.1051/0004-6361/201425414
arXiv: 1505.03219 (https://arxiv.org/pdf/1505.03219v1.pdf)
Semantic Scholar Corpus ID: 55439584
Astronomy & Astrophysics manuscript no. network, c ESO 2015
13 May 2015; May 14, 2015
Received / Accepted

Keywords: methods: data analysis – methods: statistical – techniques: polarimetric – techniques: spectroscopic – Sun: magnetic fields – Sun: photosphere

Introduction

Observations of the Sun from the Earth are always limited by the presence of the atmosphere, which strongly disturbs the images. A solution to this problem is to place the telescopes in space satellites, which produce observations without any (or limited) atmospheric aberrations. Recent examples of these atmospheric-free observations are the Hinode mission (Kosugi et al.
2007), especially the spectropolarimeter (SP, Lites et al. 2013) of the solar optical telescope (SOT, Tsuneta et al. 2008), and the vector magnetograph IMaX (Martínez Pillet et al. 2011) on board the Sunrise balloon. Although the images from space are not affected by atmospheric seeing, the optical properties of the instruments still limit the observations. In the case of diffraction-limited observations, the point spread function (PSF) establishes the maximum allowed spatial resolution, defined as the distance between two nearby structures that can be properly distinguished. In space observations, the central core of the PSF is typically dominated by the Airy disk, which is a consequence of a physical limitation imposed by diffraction. Even in a diffraction-limited instrument, real PSFs typically have the shape of the Airy pattern, with very extended tails. These tails do not limit the spatial resolution but induce a dispersion of the light from different parts of the image, leading to what is commonly termed stray light or dispersed light. This effect means that the light observed at a given spatial location in the focal plane is a combination of the light emitted at relatively distant spatial locations in the object. Therefore, the contrast of the object (defined as the pixel-to-pixel variation of the illumination normalized to the average illumination) measured in the focal plane is typically smaller than the contrast in the original object. The presence of stray light is important both for imaging instruments and for slit spectropolarimeters. A first successful attempt to correct for this effect in imaging instruments was carried out by Martinez Pillet (1992), where an analytical PSF with long tails was proposed and the image was deconvolved from it following a least-squares approach.
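The contrast definition just given (pixel-to-pixel variation normalized to the average illumination) can be made concrete with a short sketch. The image and the blur kernel below are synthetic stand-ins, not Hinode data:

```python
import numpy as np

def rms_contrast(img):
    """Pixel-to-pixel variation of the illumination normalized to its average."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()

rng = np.random.default_rng(1)
obj = 1.0 + 0.1 * rng.standard_normal((64, 64))   # synthetic "object"

# Periodic 3x3 box blur via FFT: a crude stand-in for a PSF with extended tails.
kernel = np.zeros_like(obj)
kernel[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(kernel)))
# Smearing lowers the measured contrast while preserving the mean illumination.
```

Because the kernel sums to one, the mean illumination is unchanged; only the pixel-to-pixel variation, and hence the contrast, decreases, which is exactly the degradation attributed to stray light above.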
Another method could be to consider the stray light contamination as the sum of two components: a spectrally dispersed component and a parasitic component of the spectrally undispersed light caused by scattering inside the spectrograph (Beck et al. 2011). In addition, we can also find in the literature the multi-object multi-frame blind deconvolution technique (van Noort et al. 2005), which has been used to correct for all the perturbing effects of the atmosphere and the instrument. The novelty of the latter method is that a very general functional form for the PSF is proposed and blindly estimated from the observations, together with the corrected images. The case of slit spectrographs is more complicated because the images are not immediately available. Instead, an image is constructed by adding the different scanning steps of the slit at different times. Therefore, one has to make some assumptions about the stability of the object so that the reconstructed images can be used for deconvolution. Furthermore, given that the spectral resolution of slit spectropolarimeters is much larger than that of imaging instruments, the number of monochromatic images is much larger. This makes the application of any deconvolution scheme a much more computationally heavy task. For this reason, it has been customary to postpone the treatment of dispersed light to the inversion phase of the spectropolarimetric data. A physical model (atmospheric model + radiative transfer) is proposed to explain the observed Stokes profiles, and an ad-hoc contamination is linearly added to account for the stray light. It is possible to find different ways of computing this contamination in the literature: from local approaches that compute the stray light in a box of N × N pixels around the pixel of interest (Orozco Suárez et al. 2007) to global approaches that use an average Stokes I profile in the whole field of view.
Global approaches are preferred to local ones for different reasons (Asensio Ramos & Manso Sainz 2011), essentially because the use of local approaches somehow makes the inversion process uncontrollable. One way to proceed when carrying out an inversion of spectropolarimetric data is to simultaneously do the inversion and the image deconvolution. The first effort in this direction was carried out by van Noort (2012), in which a standard inversion code for the Stokes parameters (Frutiger et al. 2000) was modified to simultaneously take the presence of the spatial coupling induced by the PSF into account. This represents the first realistic approach to a full inversion of the Stokes profiles without an ad-hoc treatment of the stray light. The approach followed by van Noort (2012) is computationally complex. The reason is that the inversion of the Stokes profiles is carried out simultaneously with the spatial deconvolution, using a Levenberg-Marquardt algorithm, without a distinction between the two processes. This algorithm needs to compute and invert a Hessian matrix that is very large. To minimize the computational load, Ruiz Cobo & Asensio Ramos (2013) used a simplified approach in which the inversion is carried out in two steps: first, the spectropolarimetric data are deconvolved from the known PSF using a regularization based on a Karhunen-Loève transformation, or principal component analysis (PCA; see Loève 1955), and then inverted using standard Stokes inversion codes. In this paper, we explain the technique (only briefly presented in Ruiz Cobo & Asensio Ramos 2013) in detail, and use it to analyze spectropolarimetric observations of a quiet Sun magnetic element obtained with Hinode/SP. The magnetic field of the solar surface is structured over a wide range of scales. The largest structures are the sunspots, which can reach Mm sizes. Moving to smaller structures, we find pores, plages, faculae, or small magnetic elements that can reach a size of 100 km.
Early spectropolarimetric observations revealed many of the fundamental properties of these structures, finding, for example, that the magnetic field intensity of these magnetic elements should be of the order of kG values (see, for instance, Stenflo 1973). Combined with basic magnetohydrodynamical (MHD) theory, this deduction led to the development of the thin flux tube model (Steiner et al. 1998; Spruit & Zweibel 1979; Parker 1976) as a fundamental element of the structure of the photosphere. Over time, the design of solar telescopes has improved, providing new observational data with which to analyze the apparent size, brightness, field structure, dynamics, and evolution of these elements. The works of Berger & Title (1996), Keller (1992), and Muller & Keil (1983), together with the reviews of Solanki et al. (2006) and de Wijn et al. (2009), roughly provide a complete picture of our current knowledge. All of these studies generally support the idea that virtually all of the small-scale structure in active and quiet network regions is composed of filamentary flux tubes of kG magnetic field strength. In the present work, we explain in detail the spatial deconvolution technique employed for the first time in Ruiz Cobo & Asensio Ramos (2013). In addition, we also apply this method to quiet Sun Hinode/SP data for the first time, aiming to take advantage of the spatial deconvolution process to analyze the physical properties of quiet Sun magnetic elements.

Observations and data analysis

Observations

The polarimetric data we used were acquired with the spectropolarimeter (SP; Lites et al. 2013) on board the Hinode spacecraft (Kosugi et al. 2007). We selected a data set with a field of view of 82′′ × 164′′ recorded at disk center on April 21, 2007 (see Figure 1). The SP instrument measures the Stokes vector of the Fe i line pair at 630 nm with a spectral and spatial sampling of 2.15 pm pixel⁻¹ and 0.16′′, respectively.
The exposure time is 12.8 s per slit position, making it possible to achieve a noise level of 7.0 × 10⁻⁴ I_c in Stokes V and 7.2 × 10⁻⁴ I_c in Stokes Q and U. Here I_c refers to the mean continuum intensity in the granulation. This data set displays a signal-to-noise ratio 12.8/4.8 times higher than that of the normal Hinode/SP maps, in which the exposure time is 4.8 s. To calibrate the spectra, we averaged the intensity profile from the whole map and compared it with the Fourier transform spectrometer spectral atlas (Kurucz et al. 1984; Brault & Neckel 1987) once it was convolved with the spectral PSF of Hinode. A similar calibration was done by Cabrera Solana et al. (2007) (see Eq. 1). We found a difference between the intensity of both profiles, which can be interpreted as parasitic light inside the instrument, defined as the "veil" in Ruiz Cobo & Asensio Ramos (2013). In the present observation, the estimated value of this veil is C = 0.0357, referred to the continuum signal I_c. The data are finally corrected as $I_{\rm final}(\lambda) = (I_{\rm or}(\lambda) - C)/(I_c - C)$; that is, we subtracted this value from the continuum intensity before normalization. Since we aim to analyze strong magnetic elements in the quiet Sun (see the black and white structures in Figure 1), we selected one isolated magnetic structure with strong longitudinal field signals, far from the edges of the map (see the red square), to study its properties in detail.

Deconvolution

We face the problem of correcting two-dimensional spectropolarimetric data for the perturbation introduced by the PSF of the Hinode solar optical telescope. We obtained the two-dimensional data by scanning a slit on the surface of the Sun and recording the information of the four Stokes profiles (I, Q, U, V) at each point along the slit for a set of discrete wavelength points around the 630 nm Fe i doublet. As a consequence, the data can be considered to be four three-dimensional cubes of images.
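Returning briefly to the calibration step: the veil correction described above amounts to subtracting a constant stray-light level and renormalizing the continuum. A minimal sketch, assuming (as the text indicates) that C is given as a fraction of the mean granulation continuum I_c; all names are illustrative:

```python
import numpy as np

def remove_veil(intensity, i_c, veil_fraction=0.0357):
    """Correct I_final = (I_or - C) / (I_c - C), with C = veil_fraction * I_c."""
    C = veil_fraction * i_c
    return (intensity - C) / (i_c - C)

# Toy line profile in units of the continuum intensity, with I_c = 1:
profile = np.array([1.0, 0.6, 0.35, 0.6, 1.0])
corrected = remove_veil(profile, i_c=1.0)
# The continuum maps to exactly 1, while the line core becomes slightly deeper.
```

Removing the additive veil deepens spectral lines relative to the continuum, which is why this correction must be applied before any quantitative interpretation of line depths.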
We use the notation I(λ), Q(λ), U(λ), and V(λ) to refer to observed images at a certain wavelength λ. In practice, given the scanning process, these are not strictly speaking images, because each column of the image is taken at a different time. In general, in the standard image-formation paradigm, the observed image I (for simplicity we focus on Stokes I, but the same expressions apply to any Stokes parameter given the linear character of the convolution operator) that one obtains in the detector after degradation by the atmosphere and the optical devices of the telescope at a given wavelength can be written as
$$I = O * P + N, \qquad (1)$$
where P is the PSF of the atmosphere + telescope in the image of interest, while O is the original unperturbed image that one would obtain with a perfect instrument without diffraction and without any atmospheric perturbation. The operator * is the standard convolution operator, and the quantity N is the noise contribution in the image formation produced at the camera. We assume that we are not in the low-illumination regime and that N follows a Gaussian distribution with zero mean and a diagonal covariance matrix with equal variance $\sigma_N^2$. The previous expression can be applied to individual monochromatic images, with potentially different PSFs for each wavelength. For simplicity, we make the assumption that the PSF is wavelength-independent, which turns out to be a very good approximation given the wavelength ranges that we are dealing with (less than 2.5 Å in the Hinode/SP case). The specific PSF that we consider is described in van Noort (2012) and obtained from the pupil specified by Suematsu et al. (2008), which takes the entrance pupil of the telescope and the presence of a spider into account. Under the presence of uncertainties induced by the noise, any deconvolution must be treated in a statistical framework. Consequently, we only have access to the distribution of reconstructed images.
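As a concrete toy illustration of the image-formation model of Eq. (1), the sketch below convolves a synthetic object with a Gaussian kernel standing in for the instrumental PSF and adds Gaussian camera noise; the PSF shape, sizes, and noise level are all illustrative, not the Hinode values:

```python
import numpy as np

rng = np.random.default_rng(2)

def convolve2d_fft(image, psf):
    """Circular convolution O * P via the FFT; psf is centered and normalized."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))

n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))   # toy Gaussian core, sigma = 2 px
psf /= psf.sum()

obj = 1.0 + 0.2 * np.sin(2.0 * np.pi * xx / 8.0)  # unperturbed "object" O
sigma_n = 1e-3                                    # camera noise level
observed = convolve2d_fft(obj, psf) + rng.normal(scale=sigma_n, size=obj.shape)  # I = O*P + N
```

The blur attenuates the spatial frequencies of the object while the kernel normalization preserves the mean illumination, reproducing the contrast loss discussed in the Introduction.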
Using the Bayes theorem, the posterior distribution p(O|I, P), which describes the probability of the restored image given the observed image and information about the image-forming system, is given by
$$p(O|I, P) = \frac{p(I|O, P)\, p(O)}{p(I)}, \qquad (2)$$
where p(I|O, P) is the likelihood or, in other words, the probability that an observed image I has been obtained given an original image O and the PSF. The quantity p(O), also named the prior, encodes all the a priori statistical information we have about the original images (i.e., degree of smoothness, presence of large gradients, etc.). This prior contains the a priori statistical information of the whole field of view of the image. This statistical information could be, for example, that Stokes I is a function that is strictly positive, or that the magnetic field has spatial correlation between pixels. Finally, p(I) is a normalization constant (termed the evidence) that is independent of the unperturbed image. Under the assumption of uncorrelated Gaussian noise in every pixel of the image, the likelihood can be written as
$$p(I|O, P) = \prod_{k=1}^{N} \exp\left[-\frac{\left(I_k - (O * P)_k\right)^2}{2\sigma_N^2}\right], \qquad (3)$$
where the product is taken over all N pixels of the image, $I_k$ represents the k-th pixel of the observed image (in lexicographic order), and $(O * P)_k$ is the k-th pixel (in lexicographic order) of the original image convolved with the PSF. The previous formalism allows us to obtain the maximum a-posteriori (MAP) image, i.e., the image that maximizes the posterior distribution. If, in addition, we assume that the prior over restored images is flat (all images are equally probable), the MAP solution is equal to the maximum-likelihood solution. Assuming that the prior p(O) is flat is equivalent to not limiting the deconvolution process with any a priori statistical restriction. This solution can be found by taking the derivative of the previous Gaussian likelihood with respect to the original image O. The resulting equation can be solved iteratively with an algorithm known as the Gaussian version of the Richardson-Lucy (RL) algorithm (Richardson 1972; Lucy 1974):
$$O_{\rm new} = O_{\rm old} + \left[I - O_{\rm old} * P\right] \otimes P, \qquad (4)$$
where the symbol ⊗ represents image correlation (which can be written in terms of the convolution operator). The previous iterative scheme does not guarantee positivity of the images even when the observed images are positive, and thus positivity somehow has to be enforced for Stokes I. Note, however, that for the other Stokes parameters the pixel values can be positive and negative. Additionally, since the RL deconvolution is a maximum-likelihood algorithm, it is sensitive to over-reconstruction produced by the presence of noise. The most notable effect is the appearance of high-frequency structures in the reconstructed image. To avoid this problem, it is customary to stop the iterative scheme before these artifacts appear.

Regularization

The most straightforward way to deconvolve two-dimensional spectropolarimetric data is to deconvolve the monochromatic images of the Stokes profiles in a way similar to what is done with imaging data (e.g., van Noort et al. 2005). This approach presents two drawbacks. First, the number of deconvolutions one has to carry out is large. For instance, the spectropolarimetric data of Hinode SOT/SP contain 112 wavelength points. Second, many of these wavelengths contain practically no information in Stokes Q, U, and V. This is the case of the continuum wavelengths where, unless strong velocity fields are present, the polarimetric signal is expected to be zero. Therefore, one ends up in the difficult situation of having to deconvolve very noisy images. The nature of the RL algorithm then induces an exponential increase of the spatially high-frequency noise, making the final images useless. In general, and as a consequence of the smoothing introduced by the PSF, some information is irremediably lost.
This unavoidably transforms the deconvolution process into an ill-posed problem. In particular, a set of solutions with potentially diverging power in the high spatial frequencies are perfectly compatible with the observations. Standard spatial deconvolution techniques solve this dilemma with ad-hoc spatial filtering methods that avoid the divergence of high frequencies during the deconvolution process. Typical methods include setting a hard or soft threshold on the resulting modulation transfer function that avoids the appearance of high frequencies in the resulting image. We pursue a regularized deconvolution. Contrary to the typical procedure in image deconvolution, the regularization that we propose acts on the spectral dimension and not on the spatial dimensions. We assume that the unperturbed Stokes profiles at each pixel can be written as a linear combination of the elements of a complete orthonormal basis formed by the eigenfunctions $\{\phi_i(\lambda)\}$. Consequently, any of the unperturbed Stokes profiles can be written as
$$O(\lambda) = \sum_{i=1}^{N_\lambda} \omega_i\, \phi_i(\lambda), \qquad (5)$$
where $N_\lambda$ is the number of wavelength points along the spectral dimension. If only a few elements of the eigenbasis are enough to reproduce the unperturbed Stokes profiles, it is advisable to truncate the previous sum and take only the first $N \ll N_\lambda$ eigenfunctions into account. Therefore, the unperturbed data are now described by a set of images $\omega_i$ (that we term projected images), which are built by projecting the Stokes profiles of each pixel onto the orthonormal basis functions. Given that we have assumed that the monochromatic PSF is wavelength-independent, the observed perturbed Stokes profiles are obtained after applying Eq. (1), where we have used the fact that the convolution operator only acts on the spatial dimensions.
Because of the presence of noise, we can find the original unperturbed Stokes profiles by computing the projection of the previous equation onto the orthonormal basis functions:
$$I(\lambda) = \sum_{i=1}^{N_\lambda} (\omega_i * P)\, \phi_i(\lambda) + N, \qquad (6)$$
$$\langle I(\lambda), \phi_k(\lambda)\rangle = \sum_{i=1}^{N_\lambda} (\omega_i * P)\, \langle \phi_i(\lambda), \phi_k(\lambda)\rangle + N, \qquad (7)$$
where $\langle\cdot,\cdot\rangle$ indicates the dot product of the two functions. The noise term still maintains the same statistical properties because the basis is orthonormal, which allows us to simplify the previous expression, leading to
$$\langle I(\lambda), \phi_k(\lambda)\rangle = \omega_k * P + N. \qquad (8)$$
Consequently, the regularization process we used implies that we have to deconvolve the projected images (associated with the basis functions $\phi_k(\lambda)$) from the PSF and reconstruct the unperturbed image using Eq. (5). This deconvolution is done using the RL iteration of Eq. (4). The previous approach is valid for any set of orthonormal functions that one utilizes to explain the Stokes profiles (e.g., del Toro Iniesta & López Ariste 2003). However, the basis obtained after PCA is ideal in our case, because the PCA decomposition transformation is defined so that the first principal component accounts for as much of the variability in the data as possible, and each additional principal component in turn explains the largest remaining variability in the data under the orthogonality constraint. Therefore, working with PCA-projected images, we find that the real signal present in each pixel only appears associated with the first few elements of the basis set, while the remaining elements are used to explain the noise. Consequently, the influence of noise is largely minimized if we only focus on the maps of low-order coefficients. This is a huge advantage with respect to the wavelength-by-wavelength deconvolution. The procedure starts by building the M × $N_\lambda$ matrix of measurements, where the Stokes profiles (with the mean Stokes profile subtracted) are placed as the rows of a matrix for each one of the M observed pixels.
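The construction of this measurement matrix and of the projected images ω_i can be sketched with synthetic profiles; the SVD diagonalization and the truncation to the first few eigenprofiles follow the procedure described in the text, while all sizes, shapes, and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data set: M pixels, each with an N_lambda-point profile that is a random
# mixture of two underlying spectral shapes plus weak Gaussian noise.
n_lambda, m_pix = 56, 500
lam = np.linspace(-1.0, 1.0, n_lambda)
shapes = np.stack([np.exp(-lam**2 / 0.05), lam * np.exp(-lam**2 / 0.05)])
profiles = rng.normal(size=(m_pix, 2)) @ shapes \
    + 1e-3 * rng.standard_normal((m_pix, n_lambda))

mean_profile = profiles.mean(axis=0)
# PCA via SVD of the mean-subtracted M x N_lambda measurement matrix.
_, _, vt = np.linalg.svd(profiles - mean_profile, full_matrices=False)
n_keep = 4
basis = vt[:n_keep]                            # orthonormal eigenprofiles phi_i

omega = (profiles - mean_profile) @ basis.T    # projected images omega_i
# Each omega[:, i] map would now be deconvolved from the PSF (Eq. 8); the
# profiles are then rebuilt from the deconvolved coefficients via Eq. (5):
rebuilt = mean_profile + omega @ basis
```

Only `n_keep` coefficient maps need to be deconvolved instead of all `n_lambda` monochromatic images, which is the computational advantage the text emphasizes; the discarded components carry mostly noise.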
This matrix (or equivalently its covariance matrix) is diagonalized using the singular value decomposition (e.g., Press et al. 1986). The eigenvectors obtained after the diagonalization form a basis that is efficient in reproducing the observed Stokes profiles, and only a few of them are needed. In principle, and according to Eq. (5), we should use the PCA eigenvectors obtained with the original Stokes profiles. Since we do not have access to those profiles, we have used the observed Stokes profiles to build this database. Unless the original profiles are radically different from those observed, we expect the eigenbasis to be efficient in reproducing the original Stokes profiles as well.

Fig. 3. Comparison between the observed and deconvolved continuum maps, first column; and the observed and deconvolved Fe i 6302.5 Å magnetograms, second column. The observed region corresponds to the red box in Figure 1. Finally, the four colored squares indicate the location of the Stokes profiles we examine in detail later.

The first eight eigenvectors for the four Stokes profiles computed using all the pixels of Fig. 1 are shown in Fig. 2. For the quiet Sun, contrary to the case of an active region, the noise contribution appears in the first PCA eigenvectors (see the 7th or 8th eigenvector in the linear polarization profiles). This property is a consequence of the predominance of low-signal polarization profiles, which need to be carefully taken into account. For our analysis, we only selected the first eight families of eigenvectors for Stokes I and V, and the first four families of eigenvectors for Stokes Q and U.

Comparison with other approaches

In the case of the inversion strategy presented by van Noort (2012), the regularization is done through the selection of the number and position of nodes that describe the physical models.
This has the advantage that the physical interpretation of the filtering process is easy: the method eliminates the high frequencies in the Stokes profiles that need perturbations in the depth stratification of the physical magnitudes with more than 3 nodes in depth. Obviously, the number of nodes can be changed at will, but then the inversion of the whole map has to be repeated. Our filtering procedure consists, essentially, of a very similar indirect suppression of high frequencies in the image by a filtering of the spectral features. However, since we use an empirical complete basis set that, in principle, reconstructs all the profiles in the field of view down to the noise level, we do not eliminate any important spectral feature that is already present in the data. Additionally, given the effective separation between the spatial deconvolution and the nonlinear inversion of the Stokes profiles, the resulting code is computationally simpler. It has the advantage that any of the existing inversion codes, such as SIR (Ruiz Cobo & del Toro Iniesta 1992), NICOLE (Socas-Navarro et al. 2014), Spinor (Frutiger et al. 2000), or Helix+ (Lagg et al. 2004), can be used directly. The only addition is a first step in which one has to carry out the spatial deconvolution with a code that we provide for free for the community at the web address indicated in the conclusions.

Properties of the Stokes profiles

We mainly focus on the analysis of the magnetic patch enclosed inside the red box of Fig. 1. This magnetic element displays the strongest polarization signals and it is one of the largest structures of the map. It shows an almost circular shape of longitudinal fields that surrounds some granules.

Spatial properties

The first property we study is the changes induced by the deconvolution process in the continuum and magnetogram images. We plotted, in the first row of Fig.
3, the original continuum map and the Fe i 6302.5 Å magnetogram (calculated as the difference between the Stokes V profiles at ±86 mÅ from the rest center), while in the second row we show the same magnitudes after the deconvolution process. We see a clear enhancement in the contrast of the continuum features. The contrast, defined as the standard deviation of the brightness in the continuum normalized to its average value, increases from 7.6% in the original map to 11.9% in the deconvolved map.

Fig. 4. From top to bottom, each row corresponds to Stokes I, Q, U, and V, respectively. Inside each panel we plotted the original profile in black and the corresponding deconvolved profile in red. In addition, we marked each panel with a colored square to indicate its position on Figure 3.

The other noticeable difference is the increase in the sharpness of the structure, together with a slight decrease in its size. This effect is probably better observed in the magnetogram, where the structure becomes smaller in the deconvolved data, indicating that the smearing produced by the stray-light contamination affects the polarization Stokes profiles. In addition, the deconvolution process also reveals an enhancement of the Stokes V signals in the interior of the magnetic element: the deconvolved Stokes V amplitude sometimes reaches twice its original value. Finally, we observed that a reversal in the Stokes V polarity appears at the edges of some structures (e.g., see the small isolated black patch close to the yellow square at coordinates 3.″5, 8.″7 in Fig. 3). These ringing structures may be similar to those reported by Buehler et al. (2015), who found a magnetic field reversal related to magnetic patches of opposite polarity and magnetic field intensities below 300 G. However, we believe that, in our case, the ringing effects might be generated by the deconvolution procedure itself.
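The two diagnostics quoted above (the RMS continuum contrast and the ±86 mÅ magnetogram) are simple to compute. A sketch with our own hypothetical helper names, assuming wavelengths given in Å and using the nearest available wavelength samples:

```python
import numpy as np

def continuum_contrast(cmap):
    """RMS contrast: standard deviation of the continuum brightness
    normalized to its average value (quoted as 7.6% -> 11.9% in the text)."""
    return cmap.std() / cmap.mean()

def magnetogram(stokes_v, wav, line_center, offset=0.086):
    """Difference of Stokes V at -offset and +offset (in Angstrom)
    from the line center, e.g. +/-86 mA around Fe i 6302.5 A."""
    i_blue = np.argmin(np.abs(wav - (line_center - offset)))
    i_red = np.argmin(np.abs(wav - (line_center + offset)))
    return stokes_v[..., i_blue] - stokes_v[..., i_red]
```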
We are currently studying spatial regularization techniques (such as total variation regularization) to efficiently suppress these effects.

Stokes profiles

The Stokes profiles found inside the magnetic elements with strong longitudinal fields (see the white and black patches in Fig. 3) are almost identical in the different regions. The Stokes V profiles are nearly antisymmetric with high amplitudes, between 6-15% (normalized to I_c), in the interior of each patch, and their amplitude decreases as we move toward the edge of the magnetic patch. Linear polarization signals, on the contrary, are negligible in these magnetic elements. Finally, Stokes I profiles display a slightly asymmetric red wing that indicates the presence of velocity gradients along the line of sight (LOS) in the center of the magnetic element. To show the effect of the deconvolution process on the Stokes profiles, we have selected four different pixels that display strong Stokes V signals. The positions of these pixels are indicated with colored squares in Fig. 3. The corresponding original (black) and deconvolved (red) Stokes profiles are shown in Fig. 4. The Stokes I profile displays major changes, typically presenting a large increase (for granules) or decrease (for intergranules) of the continuum signal with respect to the original signal. Most important, the deconvolved profiles show strong changes in the core of the line. In fact, we detect the effect of the magnetic field producing the splitting of the σ-components. We believe that this effect is real (and not an artifact of the deconvolution) and related to the presence of a magnetic field because, in most cases, the splitting is present in the Fe i 6302.5 Å line, which is the most sensitive to the magnetic field, while it is barely visible in the Fe i 6301.5 Å line, which is the least sensitive to the magnetic field.
Concerning the linear polarization signals, they are always below the noise level both in the original and in the deconvolved profiles. Circular polarization signals, on the contrary, display strong Stokes V amplitudes, reaching more than 10% of I_c in the original profiles. The deconvolution process affects the Stokes V profiles in, at least, two different ways: it slightly changes the Stokes area asymmetries and it abruptly changes their profile amplitude. As we mentioned before, we find cases where, depending on the surroundings of each pixel, the amplitude can reach up to twice its original value.

Analysis of Stokes profiles asymmetries

Area and sometimes amplitude asymmetries are related to the correlation between velocity and magnetic field gradients along the line of sight (Illing et al. 1975). From the results of the previous sections, it is clear that the Stokes profiles change after the deconvolution process. For this reason, we study how the Stokes V area and amplitude asymmetries change. To analyze them, we follow the definitions used in Martínez Pillet et al. (1997); that is, the area asymmetry is obtained as

δA = s \sum_i V(λ_i) / \sum_i |V(λ_i)|,   (9)

where the sum is extended along the wavelength axis and s is the sign of the Stokes V blue lobe (chosen as +1 if the blue lobe is positive and −1 if the blue lobe is negative). We choose the range of integration of the Stokes V signals from −0.43 Å to 0.43 Å around the Fe i 6302.5 Å line center. Likewise, the amplitude asymmetry is defined as

δa = (a_b − a_r) / (a_b + a_r),   (10)

where a_b and a_r are the unsigned maximum values of the blue and red lobes of Stokes V. We calculate these quantities over the small fragment of the map contained in the red square in Fig. 1, where we only considered pixels that show Stokes V amplitudes higher than 1% of I_c. The results of this study are included in Fig. 5.
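Eqs. (9) and (10) translate directly into code. In this sketch (with our own helper names) the blue and red lobes are simply split at the midpoint of the wavelength window, which is a simplifying assumption; in practice the split would follow the zero-crossing of the profile.

```python
import numpy as np

def area_asymmetry(v):
    """Eq. (9): signed area asymmetry of a Stokes V profile (1-D array).
    s is the sign of the blue lobe (first half of the window)."""
    half = len(v) // 2
    s = 1.0 if v[np.argmax(np.abs(v[:half]))] > 0 else -1.0
    return s * v.sum() / np.abs(v).sum()

def amplitude_asymmetry(v):
    """Eq. (10): (a_b - a_r)/(a_b + a_r) from the unsigned maxima
    of the blue and red lobes."""
    half = len(v) // 2
    a_b = np.abs(v[:half]).max()
    a_r = np.abs(v[half:]).max()
    return (a_b - a_r) / (a_b + a_r)
```

A perfectly antisymmetric Stokes V profile gives zero for both quantities, as expected.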
The first row corresponds to the original data set, while the second row shows the deconvolved data. The top left panel shows that the area asymmetry is positive for the whole magnetic structure, with maximum values close to 10%. This area asymmetry is higher at the edges of the structure and decreases to values roughly compatible with zero as we move to the center of the structure. This effect is also visible in the amplitude asymmetry of the original data (top right panel), which presents values of up to 20% at the edges. The sign of the amplitude asymmetry is also unaltered for the whole structure. The maximum values of the area and amplitude asymmetries do not appreciably change in the deconvolved data (bottom row). However, the magnetic element shows some pixels that have changed the sign of both asymmetries, mainly in the center of the structure and around some edges. Similar changes were found in Asensio Ramos et al. (2012), where the authors concluded that the smearing produced by the PSF induces changes in the Stokes V asymmetries which can be partially recovered using reconstruction techniques based on the phase-diversity procedure. This change of sign, especially that of the area asymmetry, indicates that the deconvolution process allows us to detect a change in the gradient of velocity and magnetic field along the line of sight. In the following section, we study how this information is interpreted by the inversion process to define some relevant properties of the magnetic concentration.

Stokes profiles inversions

To obtain physical information on the atmospheric parameters where the Fe i lines form, we carry out the inversion of the Stokes profiles using the SIR (Stokes Inversion based on Response functions; Ruiz Cobo & del Toro Iniesta 1992) code, which allows us to infer the optical-depth dependence of these atmospheric parameters at each pixel independently. We analyze the magnetic element shown in Fig.
3 with two different approaches, which we describe in detail in the following. The main difference is that, in one case, we set the microturbulence fixed and equal to zero while, in the other case, the microturbulence was set as a free parameter. The microturbulence correction has been applied in low spatial resolution observations to reproduce the properties of the Stokes I line core (for instance, Westendorp Plaza et al. 2001). Although we are using Hinode/SP data, we were not sure whether the spatial resolution of these observations is high enough to avoid the use of the microturbulence as a free parameter in the inversions. Thus, we aimed to analyze the results of the inversions using these two configurations, that is, with and without microturbulence.

Solution 1. No microturbulence

We employed a single magnetic component parameterized by seven nodes in temperature T(τ_500)¹, five in the LOS component of the velocity v_LOS(τ_500), five in the magnetic field intensity B(τ_500), three for the inclination of the magnetic field with respect to the LOS γ(τ_500), and one for the azimuthal angle of the magnetic field in the plane perpendicular to the LOS φ(τ_500). Variables such as micro- and macroturbulence are fixed to zero and not inverted. At each iteration, the synthetic profiles are convolved with the spectral transmission profile of Hinode/SP (Lites et al. 2013). Since each node corresponds to a free parameter during the inversion, our model includes a total of 21 free parameters. The number of nodes in B(τ_500), γ(τ_500), and v_LOS(τ_500) is necessary to reproduce the small area asymmetries of the observed circular polarization profiles (see Fig. 5; Landolfi & Landi Degl'Innocenti 1996), but mainly to reproduce the complex shape displayed by the deconvolved Stokes I profiles.
Given that the inferred physical parameters could depend on the initial atmosphere, we minimize this effect by inverting each individual pixel with 100 different initial atmospheric models. These initial models were constructed by randomly perturbing the temperature stratification of the Harvard-Smithsonian Reference Atmosphere (HSRA) model (Gingerich et al. 1971). The rest of the physical parameters of the initial model (B(τ_500), γ(τ_500), and v_LOS(τ_500)) are extracted from uniform probability distributions and considered to be independent of the optical depth. Out of the 100 solutions obtained for a given pixel, we keep the one that yields the best fit.

Solution 2. Nonzero microturbulence

The second configuration is similar to the first, but in this case we also set the microturbulence as a free parameter. We choose three nodes for the microturbulence, while the number of nodes for the rest of the atmospheric parameters remains the same. Thus, the total number of free parameters for this second configuration is 24. We also follow the same strategy of inverting each pixel with 100 random initializations and choose the one with the smallest χ² value (see Eq. 11) among the different solutions.

Examples

Some examples of the results of the inversion of the Stokes profiles using these two configurations are plotted in Fig. 6. In this figure, we only focus on the deconvolved profiles, although we also inverted the original profiles to compare the results in the following sections. We can see that the resulting fits from the two configurations (red and blue lines) are close to the deconvolved profiles (black lines). However, although the Stokes profiles are well fitted in general, the core of Stokes I is not well reproduced for the Fe i 6302.5 Å line. We believe that a configuration with the possibility of introducing abrupt changes in height in the atmospheric parameters is necessary to fit these profiles.
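The free-parameter bookkeeping for the two configurations (21 and 24 parameters) follows directly from the node counts. A small sketch; the dictionary keys are descriptive labels of ours, not SIR control-file syntax:

```python
# Node counts per atmospheric parameter for Solution 1 (no microturbulence).
nodes_solution1 = {
    "T": 7,        # temperature
    "v_LOS": 5,    # line-of-sight velocity
    "B": 5,        # magnetic field intensity
    "gamma": 3,    # inclination with respect to the LOS
    "phi": 1,      # azimuth in the plane perpendicular to the LOS
}
n_free_1 = sum(nodes_solution1.values())          # 21 free parameters

# Solution 2 adds three nodes for the microturbulence.
nodes_solution2 = {**nodes_solution1, "v_mic": 3}
n_free_2 = sum(nodes_solution2.values())          # 24 free parameters
```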
Neither of the solutions can be discarded from the point of view of standard model comparison. However, the solutions obtained with these two configurations present some physical contradictions that we explain later.

Vertical cut

The results of the inversion of the pixels marked by the vertical solid line in Fig. 5 using the first configuration, with no microturbulence, are shown in Fig. 7. From left to right, we show the depth stratifications of the temperature, LOS velocity, and magnetic field. The first row shows the atmospheric parameters retrieved from the inversion of the original profiles, while the bottom row shows the same results using the deconvolved Stokes profiles.

Fig. 6. Results from the inversion of the deconvolved Stokes profiles presented in Fig. 4 (see the red profiles). We plot the Stokes I profiles in the first row and the Stokes V profiles in the second row. We omit the linear polarization profiles because their signals are always below the noise level. We plot the deconvolved profiles in black, the results from the inversion using the first configuration in red, and the results from the second configuration in blue. We also show in each panel the difference between the deconvolved and the inverted profile using the same color code as the corresponding configuration, and a colored square indicating the position of the pixel on the map of Fig. 3.

Concerning the temperature, we find a strong enhancement between log τ_500 = −1 and log τ_500 = −2 inside the magnetized region. This range is precisely the place of maximum sensitivity of the pair of iron lines at 630 nm. Coincident with the increase in temperature, we also find strong magnetic fields. Additionally, the height of the atmosphere at which the magnetic field is strong increases when moving away from the center of the magnetic structure. This can be taken as an indication that the magnetic field lines are starting to fan out.
This opening of the magnetic lines is due to a decrease of the gas pressure outside the magnetic element. Finally, this increase in the magnetic field and temperature is also consistent with a strong upward motion, while the plasma located outside the central part of the magnetic element (at 0.7″ and 1.6″) shows downflow velocities. The upflow velocities reach values close to 4 km s⁻¹. In the case of the deconvolved data, second row of Fig. 7, the behavior of the physical parameters is slightly different. We find a temperature enhancement in the same region, although the spatial size of the structure is significantly reduced. This is because the original structure is smeared by the spatial PSF and the deconvolution process has partially canceled this smearing. The magnetic field structure of the magnetic concentration is now different, showing a strong increase in the center of the structure. These higher intensity values are needed to reproduce the Zeeman splitting displayed by the deconvolved Stokes I profiles and occur at different heights at different pixels. In the central pixels, we find weak magnetic field values at middle heights (log τ_500 between −1 and −2), and strong values at higher heights. Finally, the velocity of the deconvolved data shows a less smooth solution compared with the original results. If we use the second configuration, that is, we set the microturbulence as a free parameter, we find the results shown in Fig. 8. The panel distribution is the same as in Fig. 7, but we added a column at the rightmost part of the figure that corresponds to the microturbulence results. Focusing on the original data (first row), the leftmost panel shows a minor enhancement of the temperature at the center of the magnetic element as compared with the first solution.
The magnetic field shows essentially the same structure as in Figure 7: concentrated field lines along the atmosphere in the central part of the magnetic element, while these lines fan out as we move away from the central part of the structure. The third panel shows major differences between the results for the velocity along the LOS and those obtained with the first configuration. The LOS velocity in the central part of the magnetic element is close to zero and slightly directed downward, while the rest of the region is slowly moving upward. The changes in the LOS velocity at the central part of the structure are responsible for the inversion of the area asymmetry sign found in the deconvolved data in some pixels (see Figure 5).

Fig. 7. From left to right: temperature, magnetic field, and LOS velocity. The color code for the latter indicates upflowing material in blue, while downflowing material is marked in red. The horizontal axis corresponds to the length of the vertical solid line in Figure 5 and the vertical axis corresponds to the optical depth.

Finally, the microturbulence is large in the central part of the magnetic element and decreases as we move away from the center of the magnetic structure. Its distribution resembles the magnetic field configuration. The second row of Fig. 8 shows the results of the inversion of the deconvolved profiles. As occurred with the first configuration (second row of Figure 7), the spatial size of the structure is smaller in the deconvolved data. Likewise, the magnetic field is more intense in the inversion of the deconvolved data. In addition, we also find that some pixels display weak magnetic field values at middle heights (log τ_500 = [−1, −2]) and strong magnetic values at higher heights.
Finally, the microturbulence also presents similar values between the original (first row) and the deconvolved data (second row), although with the same loss of spatial smoothness between the atmospheric stratifications of adjacent pixels that we found for the LOS velocity.

The accuracy of the solutions

It is clear from the previous discussion that two different solutions provide similarly good fits. Given that both solutions yield different configurations for some physical parameters, it is sensible to compare them using the χ² values to see whether one of the two solutions is preferable. To do that, we use the following definition of the reduced χ²:

χ² = (1/ν) [ \sum_{i=1}^{n_λ} ((I_i^{obs} − I_i^{fit})/σ_I)² + \sum_{i=1}^{n_λ} ((Q_i^{obs} − Q_i^{fit})/σ_Q)² + \sum_{i=1}^{n_λ} ((U_i^{obs} − U_i^{fit})/σ_U)² + \sum_{i=1}^{n_λ} ((V_i^{obs} − V_i^{fit})/σ_V)² ],   (11)

where the superscripts "obs" and "fit" designate the observed profile and the fitted profile. The quantity ν is the number of degrees of freedom, which corresponds to the difference between the number of wavelength points and the number of free parameters. Finally, σ_i² is the noise variance of the original data, with a different value for each Stokes profile: σ_I = 6.1 × 10⁻³, σ_Q = 7.2 × 10⁻⁴, σ_U = 7.2 × 10⁻⁴, σ_V = 7.0 × 10⁻⁴. These values are normalized to the mean continuum signal obtained for the whole map. We show in Fig. 9 the ratio between the reduced χ² values of the solution with microturbulence and the χ² values of the solution without microturbulence. We calculated this value for the inversions of the original and deconvolved data. We can see that they present similar values for the inversion of the original (black line) and the deconvolved (gray line) profiles, as both lines are close to 1. In addition, the standard deviation of the ratio between the reduced χ² values for all the inverted pixels is σ = 0.4 and, consequently, both solutions are equivalent because the ratio values mostly lie inside 1 ± σ.
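Eq. (11) is straightforward to implement. A sketch with our own function name; note that the text's exact convention for counting ν is ambiguous, so here we count every wavelength sample of every Stokes parameter, which is an assumption:

```python
import numpy as np

def reduced_chi2(obs, fit, sigma, n_free):
    """Eq. (11): reduced chi-squared over the four Stokes parameters.
    obs, fit: dicts mapping 'I','Q','U','V' to 1-D wavelength arrays.
    sigma: dict of per-Stokes noise standard deviations.
    n_free: number of free parameters (21 or 24 in the text)."""
    # Degrees of freedom: wavelength points minus free parameters
    # (counting all four Stokes profiles; see the caveat above).
    n_points = sum(len(obs[s]) for s in "IQUV")
    nu = n_points - n_free
    total = sum(np.sum(((obs[s] - fit[s]) / sigma[s]) ** 2) for s in "IQUV")
    return total / nu
```

With the noise values quoted in the text, a fit that misses every Stokes I point by exactly one σ_I contributes n_λ to the sum before division by ν.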
We conclude then that the accuracy of both solutions is similar, although the results for the physical parameters are, in many cases, different (see Martínez González et al. 2006, for a similar conclusion). However, we note that the first configuration is favored when using the Bayesian Information Criterion (BIC; Schwarz 1978), usually applied for approximate model comparison.

Fig. 9. Ratio between the χ² values obtained from the second solution, with microturbulence, and the χ² values of the first solution, without microturbulence. Black designates this ratio obtained from the inversion of the original profiles, while gray depicts the ratio between both configurations when we inverted the deconvolved profiles. These Stokes profiles correspond to the pixels marked with the vertical line in Figure 5.

Discussion and conclusions

The analysis strategy (decoupled deconvolution and inversion) we presented is very straightforward to use once the spatial PSF of the instrument is known. Since this is not the case for Earth-based observations, because of the presence of the fast-changing atmosphere, it is ideal for space-based observatories. We are convinced that the spatial deconvolution of 2D spectropolarimetric data prior to the inversion of the Stokes profiles (or during the inversion, van Noort 2012) is the way to proceed in the near future. The stray-light contamination is correctly treated and one avoids the danger of incurring potential pitfalls (e.g., Asensio Ramos & Manso Sainz 2011). From the technical point of view, carrying out our analysis is straightforward with the standard tools freely available to any researcher. However, we make our IDL (and, in the future, Python) code² available online for everyone to use, and also for the sake of reproducible research. The process of spatial deconvolution produces some changes in the Stokes profiles that we studied in detail.
The first thing we found is an increase of the continuum contrast of quiet Sun regions from 7.6% in the original map to 11.9% in the deconvolved data, accompanied by a reduction of the size of the structures observed at continuum wavelengths and in the magnetogram. In addition, we detect a reversal of the Stokes V polarity at the edges of some of the analyzed magnetic structures. The validity of these polarity reversals has to be studied in future works because they can be an effect of over-reconstruction during the deconvolution process. The next noticeable difference that we found is the appearance of the Zeeman-split σ-components in the Stokes I profiles in regions of strong longitudinal field. The same pixels also display a sizable increase in the Stokes V amplitude. The analysis of the Stokes V area and amplitude asymmetries revealed that the deconvolution process uncovers negative asymmetries in the central regions of magnetic structures. The absolute value of the maximum asymmetry does not appreciably change. We examined in detail a vertical cut crossing a magnetic element. After the inversion of the Stokes profiles, we found good fits using two different inversion configurations. The first did not include the microturbulence as a free parameter. Using this configuration, we obtained enhanced temperatures in the core of the magnetized region, together with strong upflows. The magnetic field intensity displays, in both the original and the deconvolved data, the schematic picture of a magnetic element whose field lines expand with height. This scenario can be found in the literature both in theoretical works (Grossmann-Doerth et al. 2000) and in observational results. The second configuration introduces the microturbulence as a free parameter and the LOS velocity distribution completely changes: the plasma inside the magnetic element is slowly downflowing.
We also found a temperature enhancement at the location of the magnetic element, albeit a less important one. The magnetic structure is similar to the one obtained with the first configuration. Finally, the microturbulence displays low values outside the magnetic structure, although it can reach up to 2 km s⁻¹ inside the magnetic element. Concerning the differences between the inversions of the original and the deconvolved data, we found that the magnetic structure becomes sharper and the smoothness between consecutive pixels is less clear in the deconvolved data. The detected temperature enhancement increases in both cases, while the magnetic field intensity strongly increases due to the necessity of reproducing the Zeeman splitting of the Stokes I profiles. Of the two physical solutions obtained in the inversion of the Stokes profiles, we noticed that the velocity configuration of the second solution (see Fig. 8) is closer to the results of previous studies of solar magnetic elements (see Solanki et al. 2006; de Wijn et al. 2009). However, the presence of enhanced microturbulent velocity inside the magnetic element is not expected in this part of the structure, which is roughly at rest. In addition, we expect the microturbulent velocity to be more important in the outer parts of the magnetic element, where we find the interface between the magnetic and non-magnetic regions and where abrupt changes in the LOS velocity could also be found. It is also interesting to note the lack of coherence of the inferred LOS velocity in the results of the inversion of the deconvolved maps. This makes us think that the inference of enhanced velocities in the central parts of the magnetic structure is partially a consequence of the presence of the PSF. The inversion code fits the broadening of the Stokes I profile with this LOS velocity distribution. The deconvolution process removes this additional broadening and this compensation is no longer needed.
In spite of using different inversion configurations, we could not perfectly fit the Stokes profiles. Of particular interest is the Stokes I line core of the Fe i 6302.5 Å line, which displays the Zeeman splitting but with different intensities in each σ-component (see Fig. 6). If this is not an artifact of the deconvolution process, it indicates the existence of abrupt line-of-sight changes of the physical parameters, probably on short height scales, at the line core formation region that do not affect the Stokes V profiles. These abrupt changes could produce a complex shape of Stokes I, depending on the shapes of the π- and σ-components, while the effect would not be present in the Stokes V profiles. In fact, if the line forms in this complex environment, the presence of microturbulence in the second configuration of our inversion process is justified. Finally, we stress the fact that for the first time we have run the deconvolution method presented in Ruiz Cobo & Asensio Ramos (2013) on quiet Sun observations. We have demonstrated that the strong magnetic elements display enough signal-to-noise ratio to reliably reconstruct the information perturbed by the PSF without introducing many artifacts. We also point out that our approach allows us to pursue a trial-and-error study (unavoidable when inverting weak signals) that is only possible because we have decoupled the spatial deconvolution and the inversion of the Stokes profiles. Although the true nature of quiet Sun magnetic elements is still far from being completely understood, we conclude that the spatial deconvolution of space-based solar observations will help to obtain more accurate results.

Fig. 1. Continuum map, left, and Fe i 6302.5 Å magnetogram, right.
The presence of small and not very common magnetic patches indicates that this map corresponds to a very quiet Sun region. The red square marks the magnetic element studied in detail.

Fig. 2. First eight eigenvectors obtained after the PCA decomposition of the observed Stokes parameters from Figure 1. The corresponding eigenvectors for Stokes I, Q, U, and V are displayed from top to bottom and the order of the eigenvectors increases from left to right.

Fig. 5. Comparison between the original and deconvolved area asymmetry, first column; and the original and deconvolved amplitude asymmetry, second column. The vertical solid line marks the position of the inverted pixels we analyze in the following sections.

Fig. 8. Same as Figure 7, except that we added the microturbulence results in the rightmost panel.

¹ The parameter τ_500 refers to the optical depth evaluated at a wavelength where there are no spectral lines (continuum). In our case this wavelength is 500 nm.
² http://www.iac.es/proyectos/inversion/deconvolution

Acknowledgements. This work has been partially funded by the Spanish Ministry of Economy and Competitiveness through Project No. ESP2013-47349-C6-6. AAR also acknowledges financial support through the Ramón y Cajal fellowship and the AYA2010-18029 (Solar Magnetism and Astrophysical Spectropolarimetry) and Consolider-Ingenio 2010 CSD2009-00038 projects. Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as a domestic partner, and NASA and STFC (UK) as international partners.
It is operated by these agencies in cooperation with ESA and NSC (Norway).

References

Asensio Ramos, A. & Manso Sainz, R. 2011, ApJ, 731, 125
Asensio Ramos, A., Martínez González, M. J., Khomenko, E., & Martínez Pillet, V. 2012, A&A, 539, A42
Beck, C., Rezaei, R., & Fabbian, D. 2011, A&A, 535, A129
Berger, T. E., Rouppe van der Voort, L. H. M., Löfdahl, M. G., et al. 2004, A&A, 428, 613
Berger, T. E. & Title, A. M. 1996, ApJ, 463, 365
Berger, T. E. & Title, A. M. 2001, ApJ, 553, 449
Brault, J. & Neckel, H. 1987, Spectral Atlas of the Solar Absolute Disk-Averaged and Disk-Center Intensity from 3290 to 12 510 Å (unpublished; digital IDL version provided by the KIS software library)
Buehler, D., Lagg, A., Solanki, S. K., & van Noort, M. 2015, ArXiv e-prints
Cabrera Solana, D., Bellot Rubio, L. R., Beck, C., & Del Toro Iniesta, J. C. 2007, A&A, 475, 1067
de Wijn, A. G., Stenflo, J. O., Solanki, S. K., & Tsuneta, S. 2009, Space Sci. Rev., 144, 275
del Toro Iniesta, J. C. & López Ariste, A. 2003, A&A, 412, 875
Domínguez Cerdeña, I., Kneer, F., & Sánchez Almeida, J. 2003, ApJ, 582, L55
Frutiger, C., Solanki, S. K., Fligge, M., & Bruls, J. H. M. J. 2000, A&A, 358, 1109
Gingerich, O., Noyes, R. W., Kalkofen, W., & Cuny, Y. 1971, Sol. Phys., 18, 347
Grossmann-Doerth, U., Schüssler, M., Sigwarth, M., & Steiner, O. 2000, A&A, 357, 351
Illing, R. M. E., Landman, D. A., & Mickey, D. L. 1975, A&A, 41, 183
Keller, C. U. 1992, Nature, 359, 307
Kosugi, T., Matsuzaki, K., Sakao, T., et al. 2007, Sol. Phys., 243, 3
Kurucz, R. L., Furenlid, I., Brault, J., & Testerman, L. 1984, Solar Flux Atlas from 296 to 1300 nm
Lagg, A., Solanki, S. K., Riethmüller, T. L., et al. 2010, ApJ, 723, L164
Lagg, A., Woch, J., Krupp, N., & Solanki, S. K. 2004, A&A, 414, 1109
Landolfi, M. & Landi Degl'Innocenti, E. 1996, Sol. Phys., 164, 191
Lites, B. W., Akin, D. L., Card, G., et al. 2013, Sol. Phys., 283, 579
Loève, M. M. 1955, Probability Theory (Princeton: Van Nostrand Company)
Lucy, L. B. 1974, AJ, 79, 745
Martínez González, M. J., Bellot Rubio, L. R., Solanki, S. K., et al. 2012, ApJ, 758, L40
Martínez González, M. J., Collados, M., & Ruiz Cobo, B. 2006, A&A, 456, 1159
Martínez Pillet, V. 1992, Sol. Phys., 140, 207
Martínez Pillet, V., Del Toro Iniesta, J. C., Álvarez-Herrero, A., et al. 2011, Sol. Phys., 268, 57
Martínez Pillet, V., Lites, B. W., & Skumanich, A. 1997, ApJ, 474, 810
Muller, R. & Keil, S. L. 1983, Sol. Phys., 87, 243
Orozco Suárez, D., Bellot Rubio, L. R., & del Toro Iniesta, J. C. 2007, ApJ, 662, L31
Parker, E. N. 1976, ApJ, 204, 259
Press, W. H., Flannery, B. P., & Teukolsky, S. A. 1986, Numerical Recipes: The Art of Scientific Computing
Rezaei, R., Steiner, O., Wedemeyer-Böhm, S., et al. 2007, A&A, 476, L33
Richardson, W. H. 1972, Journal of the Optical Society of America (1917-1983), 62, 55
Ruiz Cobo, B. & Asensio Ramos, A. 2013, A&A, 549, L4
Ruiz Cobo, B. & del Toro Iniesta, J. C. 1992, ApJ, 398, 375
Schwarz, G. E. 1978, The Annals of Statistics, 6, 461
Socas-Navarro, H., de la Cruz Rodriguez, J., Asensio Ramos, A., Trujillo Bueno, J., & Ruiz Cobo, B. 2014, ArXiv e-prints
Solanki, S. K., Barthol, P., Danilovic, S., et al. 2010, ApJ, 723, L127
Solanki, S. K., Inhester, B., & Schüssler, M. 2006, Reports on Progress in Physics, 69, 563
Spruit, H. C. & Zweibel, E. G. 1979, Sol. Phys., 62, 15
Steiner, O., Grossmann-Doerth, U., Knölker, M., & Schüssler, M. 1998, ApJ, 495, 468
Stenflo, J. O. 1973, Sol. Phys., 32, 41
Suematsu, Y., Tsuneta, S., Ichimoto, K., et al. 2008, Sol. Phys., 249, 197
Tsuneta, S., Ichimoto, K., Katsukawa, Y., et al. 2008, Sol. Phys., 249, 167
van Ballegooijen, A. A., Nisenson, P., Noyes, R. W., et al. 1998, ApJ, 509, 435
van Noort, M. 2012, A&A, 548, A5
van Noort, M., Rouppe van der Voort, L., & Löfdahl, M. G. 2005, Sol. Phys., 228, 191
Viticchié, B., Del Moro, D., Criscuoli, S., & Berrilli, F. 2010, ApJ, 723, 787
Westendorp Plaza, C., del Toro Iniesta, J. C., Ruiz Cobo, B., et al. 2001, ApJ, 547, 1130
Local structure and Fe-vacancy disorder to order crossover in KxFe2−ySe2−zSz
P. Mangelis (1,2), R. J. Koch (2), H. Lei (2), R. B. Neder (3), M. T. McDonnell (4), M. Feygenson (4), C. Petrovic (2), A. Lappas (1), and E. S. Bozin (2)

(1) Institute of Electronic Structure and Laser, Foundation for Research and Technology - Hellas, Vassilika Vouton, 711 10 Heraklion, Greece
(2) Condensed Matter Physics and Materials Science Department, Brookhaven National Laboratory, Upton, NY 11973, USA
(3) Institute of Condensed Matter Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Staudtstr. 3, 91058 Erlangen, Germany
(4) Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
The detailed account of the local atomic structure and structural disorder at 5 K across the phase diagram of KxFe2−ySe2−zSz (0 ≤ z ≤ 2) high temperature superconductor is obtained from neutron total scattering and associated atomic pair distribution function (PDF) approach. Various model independent and model dependent aspects of the analysis reveal a high level of structural complexity on the nanometer length-scale. Evidence is found for considerable disorder in the c-axis stacking of the FeSe1−xSx slabs, presumably associated with substoichiometric potassium intercalation. The faulting does not exhibit any noticeable S-content dependence. The diffraction data do not display observable signs of turbostratic character of the disorder. In contrast to non-intercalated FeSe parent superconductor, substantial Fe-vacancies are present in KxFe2−ySe2−zSz, deemed detrimental for superconductivity when ordered. Our study suggests that distribution of vacancies significantly modifies the effective interatomic potentials which, in turn, affect the nearest neighbor environment, in agreement with observed evolution of the PDF signal. Complementary aspects of the data analysis imply existence of a cross-over like transition at around z = 1 from predominantly vacancydisordered state at the selenium end to more vacancy-ordered state closer to the sulfur end of the diagram. The S-content dependent measures of the local structure are found to correlate well with the evolution of the electronic state, suggesting that local structure may play an important role in the observed electronic properties. The local structural measures do not evolve across the superconducting TC in the Se endmember, and exhibit distinct behavior with S-content for z<1 and z>1. This behavior appears to be anti-correlated with the evolution of the Fe-vacancy distribution with S content, reinforcing the idea of the intimate relationship of Fe vacancies to the local structure and observed properties. 
arXiv:1903.00088v1 [cond-mat.supr-con]
DOI: 10.1103/PhysRevB.100.094108
PDF: https://arxiv.org/pdf/1903.00088v1.pdf
(Dated: March 4, 2019)

INTRODUCTION

The discovery of alkali metal-intercalated AxFe2Se2 (A = K, Rb, Cs, or Tl) superconductors [1,2], with relatively high transition temperatures (TC ≈ 31 K) compared to their non-intercalated FeSe counterpart [3], has placed them at the focus of interest in the condensed matter physics community. However, the exact crystal structure of these materials, and what triggers the superconductivity, remains an unresolved problem.
Importantly, this class of unconventional superconductors combines unique behaviours that are not observed in other iron-based superconducting systems, such as the coexistence of superconductivity and long-range antiferromagnetic (AF) order with a large magnetic moment and a Néel temperature above room temperature [4-7]. A large number of detailed studies based on a wide range of experimental techniques, such as neutron [8] and high-resolution synchrotron X-ray [9] diffraction, Mössbauer spectroscopy [10], Raman scattering [11], scanning tunneling microscopy [12], scanning single-crystal X-ray diffraction [13], and transmission electron microscopy [14], have provided strong evidence for a nanoscale phase separation, invoked to describe the crystal structure as accurately as possible and to explain the coexistence of the superconducting and AF states. The majority insulating AF I4/m Fe-vacancy-ordered A2Fe4Se5 phase is separated from the minority superconducting I4/mmm phase AxFe2Se2 [15]. The latter is characterized by fully occupied Fe atomic sites (Fig. 1). However, the inhomogeneity and complexity of the phase separation in A1−xFe2−ySe2 make it difficult to derive definitive conclusions about the precise nature of the correlation between the atomic structure and the observed physical properties of the system. Our earlier neutron PDF study [16] of the K-intercalated iron selenide and iron sulfide analogues revealed that the two-phase mixture description, combining the insulating Fe-vacancy-ordered K2Fe4Ch5 and the superconducting KxFe2Ch2 (Ch = Se, S) constituents, is on a sub-nanometer scale effectively equivalent to an Fe-vacancy-disordered I4/m K2−xFe4+ySe5 single-phase picture. This comes about because the Fe vacancies have an unconstrained distribution in the disordered superstructure. The question therefore remains as to the exact role that the Fe vacancies play in the emergence or collapse of superconductivity.
Fig. 1. Electronic phase diagram of KxFe2−ySe2−zSz (0 ≤ z ≤ 2), reflecting electronic transport properties as a function of temperature T and composition z. Fe vacancies, as well as the various Fe-Fe interatomic distances discussed in the text, are indicated by arrows.

Until recently, it was believed that the presence of Fe vacancies in the insulating phase [15], irrespective of their ordering, is detrimental for the superconductivity. However, an understanding is emerging that what makes the material non-superconducting is the magnetism-bearing √5 × √5 long-range ordering of the Fe vacancies. This is described by a model with I4/m symmetry that features two crystallographically distinct atomic sites, of which Fe atoms selectively fully occupy one (16i), leaving the other (4d) completely empty. Wu et al. have suggested that the non-superconducting FeSe-based magnetic insulators, which possess such Fe-vacancy-ordered structures, should be considered as parent compounds of these superconductors [17]. Disordering the Fe-vacancy order of the parent magnetic insulating phase, K2Fe4Se5, has therefore been proposed to be the key for the emergence of superconductivity in KxFe2−ySe2 [18]. As revealed by Wang et al., the Fe-vacancy order-to-disorder transition can be achieved by a simple high-temperature annealing [19]. Two X-ray absorption fine structure studies [20,21] provided clear evidence for local disorder in the structure of superconducting KxFe2−ySe2, based on which it was suggested that a non-zero population of Fe atoms at the 4d site is a key structural parameter for the bulk superconductivity [21]. Another recent study, combining high-energy X-ray diffraction and Monte Carlo simulations, also suggests that superconductivity in quenched KxFe2−ySe2 single crystals appears at the Fe-vacancy order-to-disorder boundary [22].
The quenching conditions during sample synthesis also appear to have a significant impact on the superconductivity of FeSe-based compounds [23]. Applying a higher cooling rate in the quenching process of single crystals of RbxFe2−ySe2 was found to yield specimens with a higher TC. An in situ scanning electron microscopy study revealed that, during the cooling to room temperature, the superconducting phase in pseudo-single-crystal KxFe2−ySe2 forms through an imperfect Fe-vacancy disorder-to-order transition, which is deemed responsible for the phase separation [24]. The reported phase diagram across the KxFe2−ySe2−zSz (0 ≤ z ≤ 2) compositional series revealed that the substitution of Se by the isovalent S suppresses the superconducting state and that eventually, for z ≥ 1.6, superconductivity collapses [15]. The sulphide end-member, KxFe2−yS2, exhibits spin-glass behavior at temperatures below 32 K [25]. While it is clear that the suppression of superconductivity across the KxFe2−ySe2−zSz series is connected to structural details, the nature of this connection remains unclear due to a lack of local-structure studies. With this motivation, we have used PDF analysis to probe, on a dense grid of compositions, the local atomic structure that may underlie this notable change in behavior. To that end, we carried out a systematic neutron total scattering study at 5 K, a temperature at which superconductivity is observed in the selenium-dominated part of the phase diagram but is suppressed towards the sulfur end of the series. We utilized neutron PDF analysis combined with simulations based on large atomistic models to explore subtle nanoscale changes in the interatomic distances and the evolution of the vacancy distribution in the Fe layers of the KxFe2−ySe2−zSz system.
Based on these complementary analysis methods, evidence for an Fe-vacancy disorder to order crossover around the z = 1 composition is revealed, reinforcing the idea that the Fe-vacancy distribution is the key structural parameter affecting the properties. Our study further unmasks so far unreported, yet not unexpected, disorder in the stacking of the FeSe1−xSx slabs in this intercalated superconducting system, which reflects an additional component in an already structurally complex material.

METHODS

Sample synthesis - Single crystals of KxFe2−ySe2−zSz (0 ≤ z ≤ 2) were grown by the self-flux method, as described in detail elsewhere [25], and pulverized into fine powders. Samples were thoroughly characterized by X-ray powder diffraction, magnetic susceptibility, and electrical resistivity measurements, as previously reported [15]. Neutron total scattering - Experiments were performed at the NOMAD instrument [26] at the Spallation Neutron Source at Oak Ridge National Laboratory. Eleven finely pulverized samples, 0.5 g each, equally spaced across the Se/S compositional space (∆z = 0.1), were loaded into 6 mm diameter extruded vanadium containers under an inert atmosphere and sealed. Helium was used as the exchange gas. Each sample was mounted in the diffractometer equipped with an Orange cryostat. The instrument was calibrated using a diamond powder standard. Powder diffraction data were collected after thorough equilibration at 5 K, with 2 h of total counting time for each sample. Data correction and reduction followed standard protocols [27]. Neutron PDFs were obtained via sine Fourier transforms of the measured reduced total scattering functions F(Q), where Q is the momentum transfer, over a range from Qmin = 0.5 Å−1 to Qmax = 26 Å−1, using the PDFgetN program [27].
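The sine Fourier transform used here to obtain G(r) from F(Q) can be illustrated with a minimal numerical sketch (NumPy only; PDFgetN additionally performs the instrument-specific corrections, which are omitted here):

```python
import numpy as np

def pdf_from_fq(Q, FQ, r, qmin=0.5, qmax=26.0):
    """G(r) = (2/pi) * integral_{Qmin}^{Qmax} F(Q) sin(Q r) dQ, on a uniform Q grid."""
    mask = (Q >= qmin) & (Q <= qmax)
    Qw, Fw = Q[mask], FQ[mask]
    dQ = Qw[1] - Qw[0]  # uniform grid assumed
    return np.array([(2.0 / np.pi) * np.sum(Fw * np.sin(Qw * ri)) * dQ for ri in r])

# A single-frequency test signal in F(Q) transforms into a PDF peak
# at the matching real-space distance.
Q = np.linspace(0.5, 26.0, 4000)
r = np.linspace(1.0, 5.0, 801)
G = pdf_from_fq(Q, np.sin(2.43 * Q), r)
print(r[np.argmax(G)])  # peak lands near r = 2.43 Å
```

The finite Qmax truncates the integral, which is what produces the familiar termination ripples around sharp PDF peaks.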
PDF peak analysis - Fitting of the PDF data for decomposition into constituent peaks was done with the fityk v 1.3.1 software package [28], using Gaussian functions and a linear baseline function whose intercept term was fixed at zero. F(Q) diffuse signal fitting - The F(Q) data of each sample were fitted with a damped sine function after subtracting a fitted linear background. Structural models - Within the system, two possible structural models have been put forward. The simplest model, with space group I4/mmm, consists of FeCh (Ch = Se/S) slabs featuring an Fe square-planar sublattice. Each Fe is coordinated by four Ch, creating layers of edge-shared FeCh4 tetrahedra, stacked along the c lattice direction and interleaved with K species equidistant from each layer. In this model, the asymmetric unit contains a single Fe at 4d (0.0, 0.5, 0.25), Ch at 4e (0.0, 0.0, z), and K at 2a (0.0, 0.0, 0.0). The relatively simple structure, with only one unique Fe site, gives little flexibility for handling vacancies on the Fe sublattice, which are common in this system [17]. A more complicated model, with space group I4/m, is related to the higher-symmetry I4/mmm model through a rotation in the a−b plane and a √2 increase in the a = b lattice parameters. The increase in unit-cell size doubles the number of atoms in the asymmetric unit, with two symmetrically distinct Fe at 4d (0.0, 0.5, 0.25) and 16i (x, y, z), Ch at 4e (0.0, 0.0, z) and 16i (x, y, z), and K at 2b (0.0, 0.0, 0.5) and 8h (x, y, 0.5). Importantly, within both these symmetries the FeCh4 tetrahedra of adjacent layers are translated in the x and y directions by half unit-cell lengths. This can be contrasted with the related, non-K-intercalated FeCh group of superconductors, where adjacent layers of FeCh4 tetrahedra contain no such relative x and y translation. Small-box refinements - The experimental PDF data were fit with the structural models described over a 10 Å range using the PDFgui software [29].
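Returning to the diffuse-signal fitting described above, a damped sine fit of the high-Q portion of F(Q) can be sketched as follows. The exponential damping form, the parameter names, and the starting values are illustrative assumptions; the exact parameterization used is not specified here.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(Q, A, r0, phi, gamma):
    # A: amplitude; r0: angular frequency in Q (an r-space distance);
    # phi: phase; gamma: assumed exponential damping rate
    return A * np.exp(-gamma * Q) * np.sin(r0 * Q + phi)

# synthetic diffuse signal over the fitted window 12 <= Q <= 25 (1/Å)
Q = np.linspace(12.0, 25.0, 500)
FQ = damped_sine(Q, 0.8, 2.43, 0.3, 0.05)
popt, _ = curve_fit(damped_sine, Q, FQ, p0=(1.0, 2.4, 0.2, 0.08))
print(popt[1])  # recovered angular frequency, close to 2.43
```

The recovered angular frequency maps directly onto a real-space distance, which is how the Q-space oscillation is tied to the position of the first PDF peak.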
Over wider r-ranges the experimental PDF is dominated by the interlayer correlations, and our analysis reveals that these are dominated by appreciable stacking disorder, which is highly non-trivial to handle using conventional PDF approaches and is beyond the scope of this study. Two alternative modeling approaches were attempted in the sub-nanometer regime. The first approach used a mixture of stoichiometric I4/mmm and fully Fe-vacancy-ordered (VO) I4/m phase components. The second approach utilized a vacancy-disordered (VD) version of I4/m. The two approaches were found to yield effectively identical fit qualities and comparable descriptions of the underlying structure [16]. For simplicity, only the VD model with I4/m symmetry was systematically applied to all the data, using a total of 19 fitting parameters. These included the unit-cell parameters (a = b, and c), a scale factor, and a correlated-motion parameter δ1 [30]. Further, the fractional coordinates were refined according to the space-group constraints (9 parameters), and the atomic displacement parameters (ADPs) were set to be isotropic (u11 = u22 = u33) and identical for all atomic species of the same type (3 parameters). The occupancies of the two symmetry-distinct Fe crystallographic sites (4d and 16i) were allowed to vary separately, while the occupancies of potassium atoms in the 2b and 8h sites were constrained to be equal because of the relative insensitivity of the neutron PDF to potassium (3 occupancy parameters). The occupancies of the Ch species were fixed at their nominal values, as refinements did not suggest local segregation to the two symmetry-distinct sites. Refinements were done sequentially, such that for any given composition the PDF refinement was initialized using the converged model of the previous composition. Large-box simulations - Simulated PDFs were computed with the aid of the DISCUS v 5.30.0 software package [31], using a model with space group I4/m.
The global concentration of Fe was fixed at 80%, with 100% occupancy of the 16i site and 0% occupancy of the 4d site. The global concentration of K was fixed at 80%, spread evenly across the 2b and 8h sites. For stacking-fault simulations, the stack module of the DISCUS software package was used, with a layer supercell of 2 × 2 unit cells and a total thickness of 1,500 layers. The powder pattern was computed using the fast Fourier method in DISCUS, with a 0.001 reciprocal length unit (r.l.u.) mesh for powder integration. For simulations of the impact of vacancy disorder, a supercell of 50 × 50 × 1 unit cells was used. Atoms were initialized with random thermal displacements such that their mean-squared displacement across the whole supercell was consistent with the ADP parameters refined from the KxFe2−ySe2−zSz PDF data. Following this, atomic displacement vectors were swapped between atoms of like identity to minimize a total energy composed of pairwise Lennard-Jones (LJ) potentials between nearest-neighbor (NN) Fe-Ch pairs. The LJ potential was constructed such that the equilibrium NN Fe-Ch bond distance was that refined from the KxFe2−ySe2−zSz PDF data. Swaps were always accepted if they decreased the total energy, and conditionally accepted if they increased the total energy, with probability p = exp(−∆E/kT), where ∆E is the change in energy associated with the swap, k is the Boltzmann constant, and T is the temperature, in this case 5 K. The total number of swaps was fixed at 100 times the number of atoms in the supercell.

RESULTS AND DISCUSSION

Neutron diffraction - Neutron diffraction data across the entire composition series show a broad and bi-modal distribution of intensity at the expected location of the (002) peak. An example of this is shown for KxFe2−yS2 in Fig. 2b, where the primary (002) peak is expected at ∼0.95 Å−1 and no feature is expected at ∼0.8 Å−1, based on either the I4/mmm or the I4/m model.
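The vacancy-disorder relaxation described in Methods above rests on two ingredients: an LJ potential whose minimum is pinned at the refined NN bond length, and the swap-acceptance rule p = exp(−∆E/kT). A minimal sketch of both, in generic reduced units (the actual DISCUS runs use the refined bond lengths and ADPs):

```python
import numpy as np

def lennard_jones(r, r_eq, eps=1.0):
    """Pairwise LJ potential whose minimum (depth eps) sits at the target bond length r_eq."""
    sigma = r_eq / 2 ** (1 / 6)  # the LJ minimum lies at 2^(1/6) * sigma
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def accept_swap(delta_E, kT, rng):
    """Accept energy-lowering swaps always; otherwise with probability exp(-dE/kT)."""
    if delta_E <= 0.0:
        return True
    return rng.random() < np.exp(-delta_E / kT)

rng = np.random.default_rng(0)
```

Downhill swaps are always taken, so the displacement field relaxes toward the vacancy-modified potential minima, while the exp(−∆E/kT) term admits occasional uphill moves.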
Such a bi-modal (002) intensity distribution is a feature commonly seen in clay systems with layered structures and variable inter-layer distances [32-34]. DISCUS simulations considering large supercells of the KxFe2−yS2 system with two distinct inter-layer distances, 6.55 Å (93% prevalence) and 8.0 Å (75% prevalence) (see Fig. 2a), reproduce this bi-modal intensity distribution well, as seen in Fig. 2b. Similar DISCUS simulations with a single inter-layer distance but random shifts along the a and b lattice directions (see Fig. 2c) produce a diffraction pattern characteristic of a turbostratic material, where hkl peaks are broadened into indistinct hk bands [34,35]. As can be seen in Fig. 2d, this significantly reduces the number of distinct Bragg peaks and creates a characteristic saw-tooth pattern. In Fig. 2d we again present an example diffraction pattern for KxFe2−yS2, where no such saw-tooth-like features are observed. These simulations demonstrate that the KxFe2−ySe2−zSz series exhibits at least two distinct inter-layer distances, without turbostratic disorder. This anisotropic crystallinity naturally suppresses inter-layer correlations, while intra-layer correlations persist. For this reason, our small-box PDF modelling is limited to the very local structure (r < 10 Å). Reduced total scattering structure functions F(Q) for the KxFe2−ySe2 and KxFe2−yS2 end-members are shown in Fig. 3a and Fig. 3c, respectively, along with that of the related FeSe compound in Fig. 3e. It is clear when viewing this series that Bragg peaks at high Q are suppressed when moving from the sulfur to the selenium end-member. This is especially true when compared to the FeSe compound without K intercalation. Suppression of Bragg peaks is a clear indicator of the presence of disorder, suggesting that disorder increases when moving from the FeSe "parent" non-intercalated phase to KxFe2−yS2, and again when moving to KxFe2−ySe2.
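The two low-Q features can be cross-checked against the two inter-layer distances directly. Assuming the body-centered stacking described in Methods (two FeCh slabs per cell, so c = 2d), the (002) reflection falls at Q = 2πl/c = 2π/d:

```python
import numpy as np

def q_002(d_interlayer):
    """Q position of the (002) reflection for inter-layer spacing d, taking c = 2*d."""
    c = 2.0 * d_interlayer
    return 2.0 * np.pi * 2 / c  # Q = 2*pi*l/c with l = 2

print(round(q_002(6.55), 3))  # 0.959 1/Å
print(round(q_002(8.0), 3))   # 0.785 1/Å
```

The two computed positions, ≈0.96 and ≈0.79 Å−1, line up with the observed primary (002) peak near 0.95 Å−1 and the extra feature near 0.8 Å−1.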
The diffuse signal in F(Q) for the entire KxFe2−ySe2−zSz series, shown in Fig. 4a, also exhibits a sinusoidal behavior at high Q. Fitting with a damped sine function in the range 12 ≤ Q ≤ 25 Å−1 allows us to extract the amplitude and frequency of this oscillation as a function of sulfur content z, shown in Fig. 4d and Fig. 4e. We note that these oscillations are strongest in the selenium end-member and are gradually suppressed with increasing sulfur content, reaching a minimum in the range 1.2 ≤ z ≤ 1.6. Above z = 1.6, the oscillations again increase in amplitude. This suppression and recovery is highlighted when viewing the fitted sine functions, as can be seen in Fig. 4b. This two-regime behavior mirrors that of the superconducting temperature, as shown in Fig. 1f; specifically, superconductivity is suppressed above z = 1.6.

PDF analysis - Interpretation of these sine-like oscillations of the diffuse signal in Q-space is not straightforward. However, the presence of a sinusoidal oscillation in F(Q) should map onto the PDF G(r), as the two are related by a Fourier transform. Specifically, a periodic signal in F(Q) with frequency f and amplitude A should manifest as an enhancement (proportional to A) of the peak in G(r) at r = 2πf. Indeed, this agrees with qualitative assessments of the PDFs. The frequency and relative amplitude of the periodic signal in F(Q) parallel the position and relative sharpness (compared to higher-r peaks) of the first PDF peak. In KxFe2−ySe2, the PDF (Fig. 3b) shows a sharp peak at r ≈ 2.43 Å, followed by relatively broad features at higher r. Conversely, the PDF of KxFe2−yS2, shown in Fig. 3d, shows a peak at r ≈ 2.38 Å which is of comparable breadth to the higher-r peaks. Typically, the observation of r-dependent PDF peak widths is associated with correlated atomic motion [30]. Generally speaking, these relative widths describe the nature of the bonding in the material [36]. The observed PDFs at 5 K are shown in Fig.
5a, and the positions of the first peak, representing Fe-Ch correlations, as well as a higher-r peak, are shown in Fig. 5b and Fig. 5c, respectively. Interestingly, the position of the first (Fe-Ch) peak is nearly unchanged in the range 0.0 ≤ z ≤ 1.0, signifying that the Fe-Ch pair distance is unaffected by the substitution of sulfur for selenium. This is in stark contrast to the typical Vegard's law-type behavior, predicting a linear change in lattice parameter as a function of chemical substitution due to steric effects [37]. Indeed, this linear change is recovered if we consider the position of most higher-r peaks (Fig. 5a and 5c), indicating that the very local structure has a distinct behavior. Fitting the local structure region of the observed PDFs (1.75 ≤ r ≤ 10Å) with a two-phase model incorporating both the vacancy-ordered I4/m model and the I4/mmm model yielded fits that were equivalent to those done with a single-phase vacancy-disordered I4/m model.
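For reference, the Vegard's-law baseline against which the Fe-Ch peak position is compared is simply a linear interpolation between the end-member values. A minimal sketch, using the first-peak positions quoted above (2.43 Å for z = 0 and 2.38 Å for z = 2) as illustrative end points:

```python
def vegard_fe_ch(z, r_se=2.43, r_s=2.38):
    """Vegard-type linear baseline for the Fe-Ch distance across 0 <= z <= 2."""
    return r_se + (r_s - r_se) * z / 2.0

# The measured first-peak position stays near 2.43 A up to z ~ 1, so the
# deviation from this steric baseline grows over the selenium-rich range.
for z in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(z, round(vegard_fe_ch(z), 3))
```

The non-Vegard behavior of the local Fe-Ch distance is then the difference between the observed (nearly constant) peak position and this linear expectation.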
Examples of these fits for the S- and Se-end-members are shown in Fig. 6. Importantly, only the lower symmetry I4/m model supports the existence of two unique Fe-Fe correlation peaks representing inter- and intracluster Fe-Fe correlations (see Fig. 1 and Fig. 6). For these reasons, the single-phase vacancy-disordered I4/m model was preferred and utilized here. The inset in Fig. 6c shows the refined occupancy of the two symmetrically distinct Fe sites, at 4d and 16i. As sulfur is substituted for selenium, the 4d site moves from partially occupied to completely empty, whereas the 16i site moves from partially vacant to completely occupied. The situation where the 4d Fe is completely vacant and the 16i is completely occupied represents the vacancy-ordered phase, associated with suppression of superconductivity [17], whereas a uniformly vacancy-disordered phase is recovered when both sites have near identical occupancies. Thus, we note a cross-over behavior in the vacancy ordering of the local structure. According to our PDF fits, this transition occurs in this system at z = 1.0, or about 50 % sulfur substitution, at 5 K. These results again suggest a two-regime behavior of K x Fe 2−y Se 2−z S z in composition-space, consistent with the electronic properties (Fig. 1f) at 5 K, our analysis of F (Q) (Fig. 4), and our model-independent analysis of the PDF (Fig. 5). While each aspect of our analysis highlights that the K x Fe 2−y Se 2−z S z series exhibits a two-regime behavior at 5 K, it is not immediately apparent how each aspect is connected. Electronic transport (Fig. 1f) measurements suggest that T c is suppressed beginning at z = 1.0, with no evidence of superconductivity above z ≈ 1.5. The structure function F (Q) shows high-Q oscillations which decrease in magnitude and frequency up to z = 1.6 and then recover for z > 1.6 (Figs. 3 and 4).
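The F(Q) → G(r) mapping invoked in this analysis, in which a periodic component of frequency f in F(Q) appears as a G(r) peak at r = 2πf, can be checked with a direct sine Fourier transform. A minimal numerical sketch (the Gaussian damping constant and integration grids are illustrative, not fitted to the data):

```python
import math

R0 = 2.43                                   # expected G(r) peak position: r = 2*pi*f
qs = [0.005 * k for k in range(1, 6001)]    # Q grid, 0 < Q <= 30 inverse Angstrom
fq = [math.sin(R0 * q) * math.exp(-0.005 * q * q) for q in qs]   # damped periodic F(Q)

def g_of_r(r):
    """Sine transform G(r) = (2/pi) * integral F(Q) sin(Q r) dQ (rectangle rule)."""
    dq = qs[1] - qs[0]
    return (2.0 / math.pi) * dq * sum(f * math.sin(q * r) for f, q in zip(fq, qs))

rs = [2.0 + 0.01 * k for k in range(101)]   # scan 2.0 <= r <= 3.0 Angstrom
r_peak = max(rs, key=g_of_r)
print(r_peak)                               # close to R0
```

The recovered peak sits at r = R0, i.e. at 2π times the oscillation frequency, and its height scales with the oscillation amplitude, which is the correspondence used to read the Fe-Ch peak off the high-Q diffuse signal.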
The frequency and magnitude of these F (Q) oscillations correlate with the position and relative sharpness, respectively, of the Fe-Ch NN peak as observed in the PDFs of the K x Fe 2−y Se 2−z S z series at 5 K. Both the F (Q) and G(r) results demonstrate that this Fe-Ch NN peak disobeys Vegard's law up to about z = 1.0 (Fig. 5). The exaggerated sharpness of this Fe-Ch NN peak (relative to higher-r PDF peaks) in the selenium-rich (z < 1.0) regime signifies that the Fe-Ch bond length distribution (BLD) is significantly narrower compared to the Fe-Ch BLD in the sulfur-rich (z > 1.0) regime [16]. This Fe-Ch BLD in K x Fe 2−y Se 2 (Fig. 3b) is also much sharper even than that seen in FeSe, observed under similar conditions (Fig. 3f). This is interesting, as both K x Fe 2−y Se 2 and FeSe are composed locally of the same structural motif of FeSe 4 edge-shared tetrahedra, and thus the observed differences in the two Fe-Se BLDs are not due to simple steric effects. It is then also possible that something other than simple steric differences between sulfur and selenium contributes to the differences in relative Fe-Ch BLDs (peak sharpness) across the K x Fe 2−y Se 2−z S z series. Importantly, the PDF fitting results (Fig. 6) suggest that the two-regime behavior is marked by a crossover at z = 1.0 from a partial (z < 1.0) to complete (z > 1.0) vacancy order within the local structure. While it is expected that vacancy ordering impacts the overall disorder of the structure, it is also possible that it directly impacts the Fe-Ch BLD. To test this hypothesis, we considered two large atomistic models, one with a completely ordered arrangement of vacancies (Fig. 7a) and the other with a uniformly disordered arrangement of vacancies (Fig. 7c). Importantly, these two configurations had identical overall vacancy concentrations (20 %). In these atomic configurations, each chalcogen species can potentially be coordinated by up to four iron species, or up to four vacancies.
For each atomistic model, we quantified the total number of each of the five possible configurations. These results are presented in Fig. 7b,d for the vacancy-ordered and -disordered atomistic model, respectively. For the vacancy-ordered case, the results are as expected: 20 % of chalcogens are fully coordinated by four iron, whereas 80 % of chalcogens are coordinated by three iron and a single vacancy. In the vacancy-ordered case there are no chalcogens coordinated by two or more vacancies. The vacancy-disordered case is more interesting: a significant fraction (∼ 20 %) are severely undercoordinated, with two or more missing iron NN. Moreover, the percentage of fully coordinated chalcogens nearly doubles in the vacancy-disordered case. These configurations are associated with the superconducting I4/mmm phase devoid of vacancies [15]. Surprisingly, the increase in fully coordinated chalcogens is a result of disordering the vacancies in this system. This supports, and possibly explains, the results of previous studies where annealing to induce Fe-vacancy disorder led to superconductivity [17]. Further, if the two configurations exemplified by Fig. 7a,c are energy minimized in the presence of an identical Lennard-Jones (LJ) potential between Fe and Ch NNs, the resulting NN BLDs, shown in Fig. 7e, are different. Specifically, the Fe-Ch BLD of the energy-minimized vacancy-disordered configuration is noticeably sharper than that of the vacancy-ordered configuration. This is surprising, given that the sharper distribution originates from a system with more overall disorder. This demonstrates that the relative arrangement of vacancies has an intrinsic impact on the Fe-Ch BLD, all other things being equal. This makes intuitive sense, as the vacancy-disordered state shows a greater prevalence of severely undercoordinated chalcogen species.
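The coordination statistics above can be reproduced with a small 2D counting sketch: place chalcogens over 2×2 plaquettes of an Fe square sub-lattice and count vacant corners. The ordered pattern below is a √5×√5-type arrangement (20 % of Fe sites vacant, no two vacancies ever sharing a plaquette), which yields exactly the 20 %/80 % split quoted above; the (x + 2y) mod 5 site rule, lattice size, and random seed are illustrative choices, not the authors' DISCUS models.

```python
import random
from collections import Counter

def plaquette_histogram(vacant, L):
    """For each 2x2 Fe plaquette (one chalcogen site), count how many of its
    four Fe corners are vacant (periodic boundary conditions)."""
    counts = Counter()
    for x in range(L):
        for y in range(L):
            n = sum((nx % L, ny % L) in vacant
                    for nx in (x, x + 1) for ny in (y, y + 1))
            counts[n] += 1
    return counts

L = 20                                   # multiple of 5, so the ordered pattern tiles exactly
sites = [(x, y) for x in range(L) for y in range(L)]

# sqrt(5) x sqrt(5)-type vacancy order: 20% of Fe sites vacant, vacancies sqrt(5) apart
ordered = {s for s in sites if (s[0] + 2 * s[1]) % 5 == 0}

# Vacancy-disordered reference: identical concentration, random placement
random.seed(0)
disordered = set(random.sample(sites, len(ordered)))

h_ord = plaquette_histogram(ordered, L)
h_dis = plaquette_histogram(disordered, L)
print(dict(h_ord))   # ordered: 20% of plaquettes with 0 vacancies, 80% with exactly 1
print(dict(h_dis))   # disordered: more fully coordinated chalcogens, plus some with >= 2 vacancies
```

Disordering the same number of vacancies roughly doubles the fully coordinated fraction while creating a population of plaquettes with two or more vacancies, mirroring the Fig. 7b,d histograms.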
It is reasonable to assert that these severely undercoordinated chalcogens will bond more strongly with the fewer iron species they are coordinated by, leading to a relatively narrower Fe-Ch BLD. The result also offers an explanation of the two-regime behavior, specifically that for z < 0.50 (selenium-rich) the local structure is partially vacancy-disordered. This disorder manifests in F (Q) through suppression of Bragg peaks at high-Q (Fig. 3), leading to the observation of a strong sinusoidal intensity oscillation. In G(r) this oscillation manifests as a relatively sharp first peak followed by broader high-r peaks (Fig. 3), corresponding to a well-defined Fe-Ch BLD.

CONCLUSIONS

In summary, the local atomic structure and structural disorder have been characterized across the phase diagram of the K x Fe 2−y Se 2−z S z (0 ≤ z ≤ 2) system, displaying both superconductivity and magnetism, by means of neutron total scattering and associated atomic PDF analysis of 5 K data. Using various model-independent and model-dependent approaches we find a high level of structural complexity on the nanometer length-scale. Through simulations based on large-scale atomistic models we demonstrate that the distribution of vacancies significantly modifies the effective interatomic potentials, observably affecting the nearest-neighbor environment, as corroborated by the features in the data. Complementary aspects of the analysis reveal a crossover-like behavior, with an onset at z ≈ 1, from a predominantly vacancy-disordered state towards the selenium end of the phase diagram, to a more ordered vacancy distribution closer to the sulfur end of the diagram. The evolution of the local structure with sulfur content displays apparent correlation with the changes seen in the electronic state, emphasizing the importance of the local structure for the observed electronic properties.
The results demonstrate the direct impact of the Fe-vacancy distribution on the local structure, and reinforce the idea of its critical influence on superconductivity. Finally, the analysis also highlights the presence of considerable disorder in the c-axis stacking of the FeSe 1−x S x slabs, which has a non-turbostratic character and appears independent of S-content.

FIG. 1. (Color online) Crystallographic models of the atomic structure and electronic properties of KxFe2−ySe2−zSz. (a) A perspective view of a single unit cell in the I4/mmm model. (b) A single FeCh4 tetrahedron and other structural elements discussed in the text. hA marks the anion height. (c) The Fe sub-lattice viewed both along the a − b plane (top) and down the c-axis (bottom) in the I4/mmm model. (d) A perspective view of a single unit cell in the I4/m vacancy-ordered model. (e) The Fe sub-lattice in the I4/m vacancy-ordered model as viewed both along the a − b plane (top) and down the c-axis (bottom). (f) The previously reported [...]

FIG. 2. (Color online) Observation of stacking disorder in the KxFe2−ySe2−zSz system. (a) An atomistic model depicting two distinct inter-layer distances, represented by either a purple or a green double-sided arrow. The red vertical dashed line highlights the regular stacking along the a − b lattice directions. (b) The simulated neutron diffraction pattern (blue profile) of KxFe2−yS2 for the case depicted in (a), where two distinct inter-layer distances are present. Also shown is the experimental diffraction pattern at 5 K (orange profile) featuring the (002) basal reflection at around 0.95Å −1 . The feature at Q ≈ 0.8Å −1 is an irrational basal reflection reflecting the presence of a second, longer inter-layer spacing. (c) An atomistic model depicting turbostratic disorder, or uniform random layer displacements along the a − b lattice directions, highlighted by the vertical red dashed line.
(d) The neutron diffraction pattern simulated for KxFe2−yS2 (blue profile) in the case of turbostratic disorder depicted in (c). This type of disorder reduces the number of observed Bragg reflections, and produces characteristic saw-tooth like features, as discussed in the text. The experimental diffraction pattern (orange profile) collected at 5 K does not display the features that would imply a turbostratic character of the disorder.

FIG. 3. (Color online) Comparison of neutron total scattering derived data for several samples of interest. Reduced total scattering structure functions, F (Q), are shown in the left panels, while corresponding PDFs, G(r), are displayed in the right panels. The panels feature (a), (b) KxFe2−ySe2, (c), (d) KxFe2−yS2, and (e), (f) FeSe. Data for potassium-containing samples were collected at 5 K, while for FeSe at 10 K.

FIG. 4. (Color online) Analysis of the diffuse scattering signal in F (Q) at high-Q in KxFe2−ySe2−zSz. (a) The measured (lighter line) and fitted (heavier line) F (Q) signal. The curves are sequentially offset vertically by 2Å −2 for clarity. (b) The model F (Q) signal over a regime of concentrations where the oscillation amplitude and frequency decrease. (c) The model F (Q) signal over a regime of concentrations where oscillation amplitude and frequency increase. (d) The amplitude of F (Q) diffuse oscillations as a function of sulfur content, z. (e) The frequency of diffuse oscillations as a function of z. Colors represent sulfur content, z, as indicated, and in (b) and (c) are consistent with those in (a).

FIG. 5. (Color online) PDF data of the KxFe2−ySe2−zSz series at 5 K. (a) The experimental PDFs in the local structure range 1.5 ≤ r ≤ 8Å. The sulfur and selenium end-members are shown with a blue and red line, respectively, while intermediate compositions are shown in gray.
Inset is a false color map representation of the same PDF data plotted vs r and z, highlighting the relative lack of change in the position of the first PDF peak at ∼ 2.48Å as a function of sulfur content until approximately z = 1. The peak starts evolving for higher z values. (b) A plot of the peak position of the feature marked by an orange arrow in (a) as a function of sulfur content z. A dashed line is a guide to the eye, included to highlight the nearly linear behavior of this peak position with z. (c) A plot of the Fe-Ch peak position, marked by a blue arrow in (a), as a function of z. Two dashed lines are guides to the eye, highlighting the two-regimes behavior.

FIG. 6. (Color online) Fits of the structural models to the PDF data. The fit to the observed KxFe2−ySe2 PDF data using either: (a) a single-phase I4/m vacancy disordered (VD) model or (b) a two-phase model with I4/m vacancy ordered (VO) and I4/mmm components. Similar one- and two-phase fits are shown in (c) and (d), respectively, for the KxFe2−yS2 PDF data. In various panels arrows point to intracluster and intercluster Fe-Fe PDF peaks, as indicated. See text and Fig. 1 for details. In all cases, fits were over the displayed r-range (1.5 ≤ r ≤ 10Å). PDF data and fits are shown with open circles and solid red lines, respectively, and the difference curves (solid green lines) are shown offset vertically for clarity. Inset in (c) summarizes the single-phase PDF refinement derived evolution with sulfur content of Fe occupancy of the two distinct Fe sites (4d and 16i) in the I4/m VD model. A vertical dashed line represents the two-regimes boundary discussed in the text.

FIG. 7. (Color online) The effect of vacancy ordering on the local FeCh4 structural unit. (a) An atomistic model along the c-axis, showing perfect vacancy (white dots) ordering in the iron (brown dots) sub-lattice.
(b) A histogram enumerating the total number of chalcogen species coordinated by either 0, 1, 2, 3, or 4 iron vacancies in the atomic configuration represented in (a). (c) Identical to (a), except the vacancies are uniformly disordered over the iron sub-lattice. (d) Similar to (b), except pertaining to the atomic configuration in (c). (e) The distribution of Fe-Ch pair distances following energy minimization in the presence of a Lennard-Jones (LJ) potential between Fe-Ch nearest neighbors, shown for both the vacancy-ordered and -disordered cases, represented by the atomistic models in (a) and (c), respectively. LJ potential parameters were identical between the vacancy-ordered and -disordered cases; the differences in the pair-distance distributions are a result of the vacancy configuration only.

* [email protected]
† Present address: Department of Physics, Renmin University, Beijing 100872, China
‡ Present address: Forschungszentrum Jülich, JCNS, D-52425 Jülich, Germany

[1] J. Guo, S. Jin, G. Wang, S. Wang, K. Zhu, T. Zhou, M. He, and X. Chen, Phys. Rev. B 82, 180520 (2010).
[2] F. Ye, S. Chi, W. Bao, X. F. Wang, J. J. Ying, X. H. Chen, H. D. Wang, C. H. Dong, and M. Fang, Phys. Rev. Lett. 107, 137003 (2011).
[3] F.-C. Hsu, J.-Y. Luo, K.-W. Yeh, T.-K. Chen, T.-W. Huang, P. M. Wu, Y.-C. Lee, Y.-L. Huang, Y.-Y. Chu, D.-C. Yan, et al., Proc. Natl. Acad. Sci. USA 105, 14262 (2008).
[4] W. Bao, Q.-Z. Huang, G.-F. Chen, D.-M. Wang, J.-B. He, and Y.-M. Qiu, Chin. Phys. Lett. 28, 086104 (2011).
[5] M. Wang, M. Wang, G. N. Li, Q. Huang, C. H. Li, G. T. Tan, C. L. Zhang, H. Cao, W. Tian, Y. Zhao, et al., Phys. Rev. B 84, 094504 (2011).
[6] D. Louca, K. Park, B. Li, J. Neuefeind, and J. Yan, Sci. Rep. 3, 2047 (2013).
[7] A. Krzton-Maziopa, V. Svitlyk, E. Pomjakushina, R. Puzniak, and K. Conder, J. Phys.: Condens. Matter 28, 293002 (2016).
[8] S. V. Carr, D. Louca, J. Siewenie, Q. Huang, A. Wang, X. Chen, and P. Dai, Phys. Rev. B 89, 134509 (2014).
[9] D. P. Shoemaker, D. Y. Chung, H. Claus, M. C. Francisco, S. Avci, A. Llobet, and M. G. Kanatzidis, Phys. Rev. B 86, 184511 (2012).
[10] V. Ksenofontov, G. Wortmann, S. A. Medvedev, V. Tsurkan, J. Deisenhofer, A. Loidl, and C. Felser, Phys. Rev. B 84, 180508 (2011).
[11] N. Lazarević, M. Abeykoon, P. W. Stephens, H. Lei, E. S. Bozin, C. Petrovic, and Z. V. Popović, Phys. Rev. B 86, 054503 (2012).
[12] W. Li, H. Ding, P. Deng, K. Chang, C. Song, K. He, L. Wang, X. Ma, J.-P. Hu, X. Chen, et al., Nat. Phys. 8, 126 (2012).
[13] A. Ricci, N. Poccia, G. Campi, B. Joseph, G. Arrighetti, L. Barba, M. Reynolds, M. Burghammer, H. Takeya, Y. Mizuguchi, et al., Phys. Rev. B 84, 060511 (2011).
[14] Z.-W. Wang, Z. Wang, Y.-J. Song, C. Ma, Y. Cai, Z. Chen, H.-F. Tian, H.-X. Yang, G.-F. Chen, and J.-Q. Li, J. Phys. Chem. C 116, 17847 (2012).
[15] H. Lei, M. Abeykoon, E. S. Bozin, K. Wang, J. B. Warren, and C. Petrovic, Phys. Rev. Lett. 107, 137002 (2011).
[16] P. Mangelis, H. Lei, M. McDonnell, M. Feygenson, C. Petrovic, E. Bozin, and A. Lappas, Condens. Matter 3, 20 (2018).
[17] M. K. Wu, P. M. Wu, Y. C. Wen, M. J. Wang, P. H. Lin, W. C. Lee, T. K. Chen, and C. C. Chang, J. Phys. D: Appl. Phys. 48, 323001 (2015).
[18] C.-H. Wang, T.-K. Chen, C.-C. Chang, C.-H. Hsu, Y.-C. Lee, M.-J. Wang, P. M. Wu, and M.-K. Wu, EPL 111, 27004 (2015).
[19] C. Wang, T. Chen, C. Chang, Y. Lee, M. Wang, K. Huang, P. Wu, and M. Wu, Physica C 549, 61 (2018).
[20] A. Iadecola, B. Joseph, L. Simonelli, A. Puri, Y. Mizuguchi, H. Takeya, Y. Takano, and N. L. Saini, J. Phys.: Condens. Matter 24, 115701 (2012).
[21] H. Ryu, H. Lei, A. I. Frenkel, and C. Petrovic, Phys. Rev. B 85, 224515 (2012).
[22] C. Duan, J. Yang, Y. Ren, S. M. Thomas, and D. Louca, Phys. Rev. B 97, 184502 (2018).
[23] M. Tanaka, H. Takeya, and Y. Takano, Appl. Phys. Express 10, 023101 (2017).
[24] Y. Liu, Q. Xing, W. E. Straszheim, J. Marshman, P. Pedersen, R. McLaughlin, and T. A. Lograsso, Phys. Rev. B 93, 064509 (2016).
[25] H. Lei, M. Abeykoon, E. S. Bozin, and C. Petrovic, Phys. Rev. B 83, 180503 (2011).
[26] J. Neuefeind, M. Feygenson, J. Carruth, R. Hoffmann, and K. K. Chipley, Nucl. Instrum. Methods Phys. Res. B 287, 68 (2012).
[27] P. F. Peterson, M. Gutmann, T. Proffen, and S. J. L. Billinge, J. Appl. Crystallogr. 33, 1192 (2000).
[28] M. Wojdyr, J. Appl. Crystallogr. 43, 1126 (2010).
[29] C. L. Farrow, P. Juhás, J. Liu, D. Bryndin, E. S. Božin, J. Bloch, T. Proffen, and S. J. L. Billinge, J. Phys.: Condens. Matter 19, 335219 (2007).
[30] I. Jeong, T. Proffen, F. Mohiuddin-Jacobs, and S. J. L. Billinge, J. Phys. Chem. A 103, 921 (1999).
[31] T. Proffen and R. B. Neder, J. Appl. Crystallogr. 32, 838 (1999).
[32] A. Plançon, Clays Clay Miner. 52, 47 (2004).
[33] A. Guinier, X-ray Diffraction in Crystals, Imperfect Crystals, and Amorphous Bodies (Dover, New York, 1994).
[34] V. A. Drits and C. Tchoubar, X-Ray Diffraction by Disordered Lamellar Structures: Theory and Applications to Microdivided Silicates and Carbons (Springer-Verlag, Berlin, 1990).
[35] K. Ufer, G. Roth, R. Kleeberg, H. Stanjek, R. Dohrmann, and J. Bergmann, Z. Kristallogr. 219, 519 (2004).
[36] I. K. Jeong, R. H. Heffner, M. J. Graf, and S. J. L. Billinge, Phys. Rev. B 67, 104301 (2003).
[37] L. Vegard, Z. Phys. 5, 17 (1921).
[ "Interference of multi-mode photon echoes generated in spatially separated solid-state atomic ensembles" ]
[ "M U Staudt \nGroup of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland\n", "M Afzelius \nGroup of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland\n", "H De Riedmatten \nGroup of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland\n", "S R Hastings-Simon \nGroup of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland\n", "C Simon ", "R Ricken \nGroup of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland\n\nAngewandte Physik\nUniversity of Paderborn\n33095PaderbornGermany\n", "H Suche \nAngewandte Physik\nUniversity of Paderborn\n33095PaderbornGermany\n", "W Sohler \nAngewandte Physik\nUniversity of Paderborn\n33095PaderbornGermany\n", "N Gisin \nGroup of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland\n" ]
[ "Group of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland", "Group of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland", "Group of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland", "Group of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland", "Group of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland", "Angewandte Physik\nUniversity of Paderborn\n33095PaderbornGermany", "Angewandte Physik\nUniversity of Paderborn\n33095PaderbornGermany", "Angewandte Physik\nUniversity of Paderborn\n33095PaderbornGermany", "Group of Applied Physics\nUniversity of Geneva\nCH-Geneva\nSwitzerland" ]
High-visibility interference of photon echoes generated in spatially separated solid-state atomic ensembles is demonstrated. The solid state ensembles were LiNbO3 WGs doped with Erbium ions absorbing at 1.53 µm. Bright coherent states of light in several temporal modes (up to 3) are stored and retrieved from the optical memories using two-pulse photon echoes. The stored and retrieved optical pulses, when combined at a beam splitter, show almost perfect interference, which demonstrates both phase preserving storage and indistinguishability of photon echoes from separate optical memories. By measuring interference fringes for different storage times, we also show explicitly that the visibility is not limited by atomic decoherence. These results are relevant for novel quantum repeaters architectures with photon echo based multimode quantum memories.
10.1103/physrevlett.99.173602
[ "https://arxiv.org/pdf/0707.1814v2.pdf" ]
14,483,056
0707.1814
005fcb3d3aa7be3ef2d3074bcae1ffa84232c0ab
(Dated: June 16, 2008)

Broad efforts are currently under way to extend quantum communication to very long distances [1,2,3,4,5,6,7,8,9,10].
Existing quantum communication systems are limited in distance mainly due to exponential transmission losses. One approach to overcome this limitation is the implementation of quantum repeaters [1], which have quantum memories [6,7,8] as the core element. Many proposed quantum repeater protocols, see e.g. [3,4,5], exhibit as a common feature the distribution of entanglement by interference of quantum states of light stored in and released from spatially distant quantum memories (QMs). The storage must be phase preserving, as entanglement between remote QMs could not be created otherwise. To obtain reasonable counting rates in quantum communication systems for distances over 1000 km, some form of multiplexing is likely to be required [4,5]. For instance, the protocol of Ref. [5] predicts a speed-up in the entanglement generation rate by taking advantage of the storage of multiple distinguishable temporal modes (multi-modes) in a single quantum memory. Techniques based on photon echoes [11,12] seem well adapted for the storage of multi-modes, as photons absorbed at different times are emitted at different times. Multi-mode storage of classical pulses has been successfully implemented in swept-carrier photon-echo experiments, where up to 1760 modes have been stored and retrieved [13]. The storage of single photons with high efficiency using a photon-echo type scheme is in principle possible with a technique based on Controlled Reversible Inhomogeneous Broadening (CRIB) [14,15,16], where inhomogeneous rephasing is triggered by an external electric field instead of a strong optical pulse. Atomic ensembles in the solid state using rare-earth-ion-doped materials seem promising for the implementation of a quantum memory, owing to the absence of atomic diffusion and the long coherence times that are possible for optical and hyperfine transitions [17,18].
For the realization of multi-mode QMs using photon echo techniques, long optical coherence times are essential in order to achieve sufficient storage times and high-efficiency storage for several temporal modes [5]. Moreover, there is a wide range of wavelengths available using different rare-earth ions. In particular, the Erbium ion has a transition around 1530 nm, where the transmission loss in optical fibers is minimal. First steps towards photonic quantum storage in rare-earth-ion doped materials have been demonstrated using Electromagnetically Induced Transparency [18] and photon echo approaches [16,19]. Several experiments have shown phase-preserving storage of bright pulses in a single optical memory with photon echo techniques [19,20,21]. In this paper we address the issue of phase preservation in the storage and retrieval of bright coherent states in multiple temporal modes in two different solid-state optical memories, based on the photon echo technique. The atomic ensembles, separated by 7 cm, are implemented with Erbium ions doped into a LiNbO3 waveguide (WG). We also study the influence of atomic decoherence on phase preservation by varying the storage time. The implementation of CRIB requires an efficient three-level lambda system, which has not been demonstrated in Erbium so far. However, the phase preservation for a CRIB-based quantum memory can be investigated in a two-pulse photon echo based memory using bright pulses, as the phase properties of the storage material will not change. To investigate the phase preservation, we use first-order interference of photon echoes generated in two Erbium-doped LiNbO3 WGs placed in the arms of a balanced Mach-Zehnder interferometer (see Fig. 1). Two-pulse photon echoes are generated in the two ensembles by coherent excitation using two bright pulses, and the photon echoes created in each arm interfere at the second beam splitter.

FIG. 1: Experimental setup: two Er3+:LiNbO3 WGs (a photo of one is shown in the inset) are placed in the arms of a fiber-optic interferometer in a region where the interferometer is at 4 K. While both arms have the same length (2.63 m), in one arm the fiber is coiled partially around a piezo-electric element, allowing for controlled phase shifts. The excitation light pulses are sent through the interferometer. The generated echoes interfere at the second coupler. In order to project the polarizations onto one axis we used a polarization controller (PC), a polarizer (P) and a photo detector (PD).

The interference fringe visibility is used as a measure of the phase coherence of the memories. In a two-pulse photon echo experiment using an inhomogeneously broadened two-level atomic system, a first pulse of area Θ (called the data pulse) brings the atoms into a coherent superposition of ground and excited state. A macroscopic dipole moment is produced and decays due to inhomogeneous dephasing. A second pulse, ideally a π pulse (called the read pulse), applied a time t12 after the data pulse, will lead to rephasing, and after a time 2t12 a macroscopic dipole moment is re-established, producing a photon echo. The two-pulse photon echo can be seen as a storage and retrieval of the data pulse, which can also be a sequence of pulses. The time ts = 2t12 is the storage time. The memories used were two Erbium-doped LiNbO3 crystals with single-mode Ti-indiffused optical WGs (for more details see Refs. [19,22]). The atoms were excited on the 4I15/2 → 4I13/2 transition at a wavelength of 1532 nm. This transition is inhomogeneously broadened to 250 GHz due to slightly different local environments seen by each Erbium ion [17]. The WGs had a length of about 20 mm, of which 10 mm were doped.
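The rephasing at t = 2t12 described above can be illustrated with a minimal numerical sketch (not from the paper; the detuning width, atom number and time units below are arbitrary choices): averaging the phase factors of an inhomogeneously broadened ensemble, with each atom's phase reversed by an ideal π pulse at t12, shows the macroscopic dipole re-forming exactly at t = 2t12.

```python
import numpy as np

rng = np.random.default_rng(0)
t12 = 1.0                                  # data-to-read pulse delay (arbitrary units)
deltas = rng.normal(0.0, 20.0, 20_000)     # inhomogeneous detunings of the ensemble

def macroscopic_coherence(t):
    # After an ideal pi read pulse at t12, each atom accumulates phase
    # delta * (t - 2*t12); the ensemble average rephases when t = 2*t12.
    return np.abs(np.mean(np.exp(1j * deltas * (t - 2 * t12))))

times = np.linspace(0.0, 4.0, 401)
signal = np.array([macroscopic_coherence(t) for t in times])
t_echo = times[signal.argmax()]
print(t_echo)  # the echo appears at t = 2*t12
```

The sharp revival of the ensemble coherence at 2t12 is what is detected as the photon echo; away from that instant the detunings scramble the phases and the average collapses.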
WG II had a two times higher Erbium doping concentration than WG I (WG I: 4 × 10^19/cm^3 surface concentration before indiffusion), resulting in a higher absorption in WG II. The optical coherence time is limited by magnetic spin interactions between Erbium ions, but can be considerably increased by applying an external magnetic field [23]. A constant magnetic field of about 0.2 Tesla was applied over WG I and a variable field using a superconducting magnet was applied over WG II, both fields being parallel to the C3 axis of the LiNbO3 crystal. The different magnetic fields and Erbium doping concentrations resulted in different optical coherence times, T2 = 18 µs in WG I and T2 = 6 µs in WG II (at around 0.5 Tesla) at a temperature of about 3 K. The fiber-optic interferometer was placed within a pulse-tube cooler (Vericold). Both arms were installed across a temperature gradient of 300 K, as the couplers had to be placed at ambient temperature. One Er3+-doped LiNbO3 WG was mounted in each arm on the 4 K plate, separated spatially by 7 cm. The photon echo excitation light pulses had durations of tpulse = 15 ns and the pulse sequence was repeated at a frequency of 13.5 Hz. This frequency was found to minimize the phase noise of the interferometer due to vibrations in the pulse-tube cooler. The light pulses were created by gating an external-cavity cw diode laser (Nettest Tunics Plus) with an intensity modulator, before amplification by an EDFA (Erbium Doped Fiber Amplifier). An additional acousto-optical modulator (AA opto-electronics) between the optical amplifier and the input of the pulse-tube cooler, which opened only for the series of pulses, helped to suppress light at all other times even further, thus avoiding spectral hole burning by the EDFA.
The input peak powers were on the order of 60 mW (300 mW) for the data (read) pulses, and the released echoes were on the order of a few µW due to the efficiency of the echo process (about 1%) and losses in the interferometer. In order to obtain high-visibility interference fringes with the retrieved photon echoes, two criteria must be fulfilled. (1) The storage must be phase preserving (i.e. the two echoes must have a fixed phase relation), and (2) the photon echoes must be indistinguishable in order to avoid which-path information. In order to satisfy the second criterion, the echoes from the two ensembles must be detected in the same spatial, polarization and spectral/temporal mode. Moreover, the intensities of the two echoes in front of the detector must be the same. Spatial indistinguishability is ensured by the fact that the interferometer is completely single-mode, owing to the use of single-mode fibers and WGs. The polarization of the photon echoes could in principle be adjusted by using polarization-maintaining fibers or by inserting polarization controllers inside the interferometer. For technical reasons, we chose instead to project the two photon echoes on a common polarization axis, by inserting a polarizer and a polarization controller before the photo-detector (see Fig. 1). Since the generation of photon echoes is a non-linear process, the spectral/temporal modes of the echoes depend strongly on the intensity of the excitation pulses and on the optical depth [24]. Since the two WGs have different absorption depths, the temporal shape could therefore be adjusted by tuning the wavelength of the excitation laser within the absorption profile. Finally, to equalize the intensity of the echoes, we used an ultra-high-precision translation stage system (anp, attocube systems) to change the in- and out-coupling powers.
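The fringe visibilities quoted in the measurements that follow are the standard Michelson definition, V = (Imax − Imin)/(Imax + Imin), extracted from the echo intensity as a function of interferometer phase. A minimal sketch (the fringe data here are idealized, not measured values):

```python
import numpy as np

def visibility(intensities):
    """Michelson fringe visibility V = (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = np.max(intensities), np.min(intensities)
    return (i_max - i_min) / (i_max + i_min)

# Idealized fringe: echo intensity vs interferometer phase with visibility V0.
phase = np.linspace(0.0, 2.0 * np.pi, 101)
V0 = 0.905                           # the average value reported in the text
fringe = 1.0 + V0 * np.cos(phase)    # arbitrary intensity units
print(round(visibility(fringe), 3))  # -> 0.905
```

In the experiment each point of such a fringe is itself an average over many echoes, so residual phase noise of the interferometer (here absent) lowers the extracted V.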
Moreover, by adjusting the magnetic field for WG II one can change the optical coherence time, and thus change the amplitude of the photon echo. In order to characterize the phase noise of the interferometer, we measured interference fringes using a cw off-resonant laser at λ = 1550 nm. While scanning the phase as the pulse-tube was cooling, a visibility close to 92% was obtained, limited by phase noise introduced by the vibrations of the cooling system. By shutting down the cooling system for a short time, we obtained a visibility close to 100%. Thus for all photon echo measurements, which had to be carried out below 4 K, a visibility of 92% was an upper technical limit. We first discuss the experiments carried out on the storage of a single temporal mode. In these experiments the storage time was fixed at 1.6 µs. Fig. 2a shows interfering photon echoes for constructive and destructive interference. By scanning the phase difference of the interferometer continuously, interference fringes were obtained; an example is shown in Fig. 2b. A visibility of 90.5% (±1.8%) was obtained by averaging over many measurements, which is within the limit set by the intrinsic phase noise due to the cooling system. Therefore we can conclude that the storage of the optical pulses in the solid-state ensembles is fully phase preserving within the error bars. Moreover, the two paths of the interferometer cannot be distinguished in any of the spatial, polarization or spectral/temporal modes. Note that high visibilities were achieved even though the WGs had different physical properties, particularly in their absorption coefficients and optical coherence times. We now turn to the investigation of the storage of multiple modes. The data then contain several pulses separated by 150 ns which coherently excite the atoms. If the intensity of the pulses is well below saturation, each pulse independently generates an echo triggered by the strong read pulse. In Fig. 3 we show examples of interfering photon echo signals for two and three modes, both for destructive and constructive interference. The visibility is measured for each mode separately from interference fringes (as in Fig. 2b) and is an average over several independent measurements. The average visibility is only slightly reduced, from 90.5% for the case of the storage of one mode to 86.9% for the lowest value out of the three stored modes. Thus, we show that high-visibility storage of multiple modes in independent solid-state optical memories is possible. In the case of only one stored mode all parameters (intensity, pulse length, coupling etc.) were optimized in order to obtain a strong photon echo signal from both WGs. For the case of two and three stored modes optimization is experimentally more demanding, as one has to work in a regime where each of the data pulse areas Θ has to be much smaller than π/2. Otherwise echo distortion and additional undesired multi-pulse echoes, which might interfere with the echoes of the data pulses, will occur. Moreover, the issues related to different optical depths in the two crystals mentioned above [24] are even more critical in the case of multiple modes, since optimization of one mode does not guarantee the indistinguishability of the other modes. These issues, together with the detector sensitivity, were the main limitations for the number of modes that could be stored. Overcoming these technical issues, an enlargement of the storage capacity should be possible. By implementing a CRIB-type scheme, storage of even more modes should be possible [5]. Finally, we study the effect of atomic decoherence on the emitted photon echoes. The optical coherence in solid-state systems is perturbed by interaction with the environment, leading to atomic decoherence.
In Erbium-doped LiNbO3 this dephasing arises from interactions between the large Er3+ magnetic moment and fluctuating local magnetic fields due to changing magnetic spins on neighboring Er ions, so-called spin flips [17]. The atomic decoherence will cause an exponential decay of the two-pulse photon echo signal, as exemplified for WG II in Fig. 4. From this decay of the photon echo signal, a T2 of 6 µs was measured. In an optical memory, ideally the phase should be preserved independently of the storage time. At first sight this might seem to be in contradiction with the unavoidable atomic decoherence. To test the influence of atomic decoherence on the visibility, we measured interference fringes as a function of the storage time (0.8 µs to 5.6 µs, as shown in Fig. 4). We found that although the photon echo amplitude decreases strongly due to atomic decoherence in this time period, the visibility is unaffected and remains at 90.5%. This can be interpreted as follows: the photon echo signal is a coherent collective signal, which is due to interference of the photon emission amplitudes from the whole atomic ensemble [25,26]. Its intensity is proportional to (N − N′)², where N is the total number of atoms and N′ is the number of decohered atoms. Atoms having lost their coherence due to interaction with the environment do not contribute to the collective photon echo emission. Besides the collective coherent emission in the forward mode, there is also an incoherent emission from all atoms, which would be there even without a data pulse, and whose total intensity in the forward spatial mode scales only with N. Therefore, the ratio of the collective photon echo emission to the incoherent background emission is (N − N′)²/N. Since N is a very large number (N ∼ 10^8) we detect only the coherent collective part of the signal, even if N′ is comparable to N. Thus it is to be expected that the decay in photon echo amplitude has no effect on the visibility.
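A back-of-the-envelope check of the (N − N′)²/N argument above (N ∼ 10^8 is the order of magnitude quoted in the text; the decohered fractions are illustrative):

```python
# Coherent echo intensity scales as (N - Nd)^2; incoherent background as N.
N = 1e8                          # total number of atoms (order of magnitude from the text)

for lost_fraction in (0.0, 0.5, 0.9, 0.99):
    Nd = lost_fraction * N       # decohered atoms no longer contribute coherently
    ratio = (N - Nd) ** 2 / N    # echo-to-background intensity ratio
    print(lost_fraction, ratio)
# Even with 99% of the atoms decohered, the coherent echo still dominates the
# incoherent emission by four orders of magnitude, so the fringe visibility is
# insensitive to the amplitude decay of the echo.
```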
This holds as long as the background noise from sources other than the photon echo process itself is much smaller than the photon echo amplitude, as is the case in this experiment. Note in this context that CRIB is also a collective process. Thus the atomic decoherence should not influence the phase preservation for a CRIB-type memory either. Note that in the context of quantum communication, only the photons re-emitted from the memories and detected contribute to the signal. Hence, in our case, atomic decoherence, which acts only as a loss, will affect the efficiency of a given protocol, but not the effective fidelity. In that sense, the visibility of the interference fringe can be considered as a measure of the effective fidelity of the storage and retrieval in the memory. In conclusion, we show that phase coherence is preserved to a high degree in the storage and retrieval of light pulses in spatially separated solid-state atomic ensembles using photon echo techniques. This phase coherence is revealed in interference patterns that show a visibility of 90.5% (±1.8%), close to the technical limit of 92% set by phase noise caused by the cooling system. Extending our studies to three stored temporal modes only leads to a slight decrease in visibility, mainly due to the non-linear character of the photon echo storage process. Atomic decoherence due to interactions with the environment does not affect the visibility, i.e. it causes only a loss of signal amplitude, but not a loss of phase coherence of the re-emitted optical pulses. This is explained by the collective character of the photon echo process. These results are interesting in the framework of proposed novel quantum-repeater architectures, where storage and retrieval with high phase-coherence preservation and in multiple modes is highly advantageous. We would like to thank G. Fernandez, M. Legré and C. Barreiro for assistance and W. Tittel for discussions.
We are grateful for support by the Swiss NCCR Quantum Photonics and by the EU Integrated Project Qubit Applications.

FIG. 2: (a) Intensity of photon echoes for constructive (black curve) and destructive (red curve) interference. (b) Interference fringe: the area under the interfering photon echoes generated in two spatially separated WGs is shown as a function of phase difference. The storage time was set to 1.6 µs. Each point is averaged over 10 echoes and the measurement of a whole fringe takes about 10 minutes. For this particular fringe, a visibility of V = 91.5% is reached (averaging over many measurements gives a visibility of 90.5%), limited by phase noise caused by vibrations in the cooling system.

FIG. 3: Multi-mode storage: constructive and destructive photon echo interference is shown for a storage time of 2 µs for two and 2.3 µs for three stored modes (values for the longest stored mode). Phase coherence is in the two cases preserved to a very high degree, as indicated by high fringe visibilities.

FIG. 4: Interference fringe visibility (filled circles) is shown as a function of the memory storage time. Atomic decoherence strongly acts on the amplitude of the echo signal, as shown here for WG II (open circles), but leaves the visibility unaffected. The dotted line shows the average visibility of 90.5%. For each storage time we readjusted all experimental parameters, as the optical coherence times are different in the two WGs.

[1] H. J. Briegel et al., Phys. Rev. Lett. 81, 5932 (1998).
[2] N. Gisin and R. Thew, Nature Phot. 1, 165 (2007).
[3] L.-M. Duan et al., Nature 414, 413 (2001).
[4] O. A. Collins et al., Phys. Rev. Lett. 98, 060502 (2007).
[5] C. Simon et al., Phys. Rev. Lett. 98, 190503 (2007).
[6] T. Chanelière et al., Nature 438, 833 (2005).
[7] M. D. Eisaman et al., Nature 438, 837 (2005).
[8] C. W. Chou et al., Science 316, 1316 (2007).
[9] A. D. Boozer et al., Phys. Rev. Lett. 98, 193601 (2007).
[10] P. Maunz et al., Nature Phys., doi:10.1038/nphys644.
[11] N. A. Kurnit et al., Phys. Rev. Lett. 13, 567 (1964).
[12] T. W. Mossberg, Opt. Lett. 7, 77 (1982).
[13] H. Lin et al., Opt. Lett. 20, 1658 (1995).
[14] S. A. Moiseev and S. Kröll, Phys. Rev. Lett. 87, 173601 (2001).
[15] M. Nilsson and S. Kröll, Opt. Comm. 247, 393 (2005).
[16] A. L. Alexander et al., Phys. Rev. Lett. 96, 043602 (2006); G. Hétet et al., quant-ph/0612169.
[17] Y. Sun et al., J. Lumin. 98, 281 (2002).
[18] J. J. Longdell et al., Phys. Rev. Lett. 95, 063601 (2005).
[19] M. U. Staudt et al., Phys. Rev. Lett. 98, 113601 (2007).
[20] M. Arend et al., Opt. Lett. 18, 1789 (1993).
[21] A. L. Alexander et al., J. of Lumin. 127, 94 (2007).
[22] I. Baumann et al., Applied Phys. A 64, 33 (1997).
[23] T. Böttger et al., Phys. Rev. B 73, 075101 (2006).
[24] T. Wang et al., Phys. Rev. A 60, R757 (1999).
[25] I. D. Abella et al., Phys. Rev. 141, 391 (1966).
[26] R. H. Dicke, Phys. Rev. 93, 99 (1954).
[]
[ "Unification of Gravity and Electromagnetism and Cosmology", "Unification of Gravity and Electromagnetism and Cosmology" ]
[ "Partha Ghose \nThe National Academy of Sciences\n5 Lajpatrai Road, 211002 Allahabad, India\n" ]
[ "The National Academy of Sciences\n5 Lajpatrai Road, 211002 Allahabad, India" ]
[]
It is first argued that radiation by a uniformly accelerated charge in flat space-time indicates the need for a unified geometric theory of gravity and electromagnetism. Such a theory, based on a metric-affine U 4 manifold, is constructed with the torsion pseudo-vector Γ µ linking gravity and electromagnetism. This conceptually simple extension results in (i) Einstein's equations being modified by a vacuum energy Γ µ Γ ν and a scalar field Γ = Γ µ Γ µ whose zero-mode is a cosmological constant Λ representing 'dark energy', (ii) most of the salient features of 'dark matter'-like phenomena, (iii) a modified electrodynamics satisfying Heaviside duality, (iv) a finite and small Casimir Effect, and at the same time, (v) the empirical Schuster-Blackett-Wilson relation for the amazingly universal gyromagnetic ratio of slowly rotating, neutral astrophysical bodies.
null
[ "https://arxiv.org/pdf/1605.08263v3.pdf" ]
118,729,338
1605.08263
bc4136a0d9de853d40f2ece57d7f0ff0bc290e25
Unification of Gravity and Electromagnetism and Cosmology

Partha Ghose
The National Academy of Sciences, 5 Lajpatrai Road, 211002 Allahabad, India

30 May 2016

It is first argued that radiation by a uniformly accelerated charge in flat space-time indicates the need for a unified geometric theory of gravity and electromagnetism. Such a theory, based on a metric-affine U4 manifold, is constructed with the torsion pseudo-vector Γ_µ linking gravity and electromagnetism. This conceptually simple extension results in (i) Einstein's equations being modified by a vacuum energy Γ_µΓ_ν and a scalar field Γ = Γ_µΓ_µ whose zero-mode is a cosmological constant Λ representing 'dark energy', (ii) most of the salient features of 'dark matter'-like phenomena, (iii) a modified electrodynamics satisfying Heaviside duality, (iv) a finite and small Casimir Effect, and at the same time, (v) the empirical Schuster-Blackett-Wilson relation for the amazingly universal gyromagnetic ratio of slowly rotating, neutral astrophysical bodies.

The Equivalence Principle

The Equivalence Principle, together with the requirement of covariance of the laws of physics under the most general coordinate transformations, constitutes the physical basis of Einstein's General Theory of Relativity. It consists of two statements:

Universality of Free Fall or Weak Equivalence Principle (WEP): All test bodies fall in a gravitational field with the same acceleration regardless of their mass or internal composition.

This is Galileo's law of falling bodies, which is incorporated in Newton's theory of gravity through the equality of gravitational and inertial mass, which has been confirmed to high precision by the Eötvös experiment.
Einstein enunciated a stronger form of the principle, which states that the motion of a test particle in a locally homogeneous gravitational field is physically indistinguishable from that of the particle at rest in a uniformly accelerating coordinate system. It has also been stated in the form [1]:

Einstein's Equivalence Principle (EEP): For every infinitesimally small world region in which space-time variations of gravity can be neglected, there always exists a coordinate system in which gravitation has no influence either on the motion of test particles or any other physical process.

This means that a homogeneous gravitational field in an infinitesimal world region R (strictly speaking, only at the centre of mass of an Einstein chamber or lift) can be 'transformed away' relative to an observer in free fall in that field. However, for practical tests of the principle it is often difficult to specify the region R over which the gravitational field is strictly uniform and can be transformed away. This is why the following statement is preferred by some [2]:

For every infinitesimally small world region in which space-time variations of gravity can be neglected, there always exists a coordinate frame in which, from the point of view of the co-moving observer, gravitation appears to have no influence on the motion of test particles, or on any other natural phenomenon, but only as to local measurements that ignore the motion of test objects with respect to the source(s) of the field.

In order to have a clear understanding of Einstein's own statement of the principle, let us look at how he actually argued in his 1916 paper [3]. Let us consider a Galilean frame K relative to which test particles of different masses are at rest or in uniform motion in a straight line in a world region R which is far removed from other masses, i.e. free of any gravitational field or any other force, and is flat and Minkowskian.
Let K′ be a second co-ordinate system which has an arbitrary uniform acceleration relative to K. Relative to K′ all the particles experience the same acceleration in the opposite direction, independent of their material compositions and physical conditions. To an observer O′ at rest relative to K′, K′ has no acceleration. Can O′ conclude that she is in an actually accelerated reference system? The answer is 'no', because the common accelerated motion of the freely moving masses relative to K′ can be equally well conceptualized by saying that K′ is not accelerated and that in the world region R there is a homogeneous gravitational field which generates the accelerated motion relative to K′. This conception is feasible because we know that the gravitational field has the remarkable property of imparting the same acceleration to all bodies. Hence, from the physical point of view the two frames K and K′ can both be regarded as Galilean with the same legitimacy, i.e., they are to be treated as equivalent systems of reference for a description of physical phenomena. Thus, acceleration is shorn of its absolute character and, like uniform velocity in a straight line, rendered relative. In the full theory, space-time is no longer Minkowskian and assumes a dynamical nature whose geometry is determined by the actual distribution of masses, and forces disappear altogether from the theory. Minkowski space-times linger in the theory only as tangent planes to a pseudo-Riemannian manifold. This is a very important point that will be exploited in what follows. According to Einstein, the physical equivalence of K and K′ also follows from an epistemological argument due to Mach (a component of what is known as Mach's Principle). In Newtonian physics a uniform velocity in a straight line is relative and cannot be physically distinguished from rest, but acceleration is absolute. This is because space, against which acceleration can be measured, is absolute.
But absolute space is not observable. Newton was aware of this problem, as his comments on the famous rotating bucket experiment clearly show. Mach put forward the epistemological principle that something can be said to be the cause of some effect only when that thing is an observable fact of experience. In other words, the law of causality can be given a meaningful statement about the world of experience only when observable facts alone appear as causes and effects. Since absolute space is not an observable fact of experience, there is no observable ground on which K and K′ can be distinguished. Hence both can be treated as inertial or Galilean frames with equal legitimacy, and what appears to be a uniform gravitational field relative to K′ disappears relative to K. With this clearly in mind, let us now pass on to the case of charged particles.

EEP and Electrically Charged Particles

According to standard classical electrodynamics texts, an accelerated charge invariably radiates electromagnetic waves. However, there is a "perpetual problem" as to whether a uniformly accelerated charge radiates in flat space-time [4]. In the non-relativistic limit the equation of motion of a charged particle is

m\vec{a} = \vec{F}_{ext} + \vec{F}_{react}   (1)

where the first term is an external force on the particle due to a gravitational or electromagnetic field and the second term is the radiation reaction force, given by

\vec{F}_{react} = \frac{q^2}{6\pi\epsilon_0 c^3}\,\dot{\vec{a}} + O(v/c).   (2)

Since \dot{\vec{a}} = 0 for uniform acceleration, \vec{F}_{react} = 0, and it has been argued by many that this means there is no radiation [1,4,5,6]. Now, this view is consistent with EEP for the following reason. Suppose there are two particles of the same mass but only one of them is charged. Then, if they are accelerated by the same gravitational force, the charged particle will radiate but not the neutral particle. Consequently, the charged particle will lose energy and decelerate compared to the neutral particle.
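The prefactor in Eq. (2) sets the characteristic time τ = q²/(6πε₀mc³) of radiation reaction. Evaluating it for an electron (an illustrative sketch; the constants are CODATA values to a few digits) shows how tiny the self-force term is, and that it vanishes outright when ȧ = 0:

```python
import math

q = 1.602176634e-19      # electron charge [C]
m = 9.1093837015e-31     # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
c = 2.99792458e8         # speed of light [m/s]

# Radiation-reaction prefactor of Eq. (2): q^2 / (6 pi eps0 c^3)  [kg s]
prefactor = q**2 / (6 * math.pi * eps0 * c**3)
tau = prefactor / m      # characteristic time, ~6.3e-24 s for an electron
print(tau)

# For uniform acceleration, da/dt = 0, so the reaction force vanishes:
da_dt = 0.0
F_react = prefactor * da_dt
print(F_react)           # -> 0.0
```

The smallness of τ is why radiation reaction is negligible except at extreme accelerations; the vanishing of F_react for ȧ = 0 is the formal basis of the "no radiation under uniform acceleration" argument discussed in the text.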
That will violate WEP (and hence EEP) which requires both of them to have the same acceleration. One way to reconcile standard electrodynamics with EEP is to argue that the equations of standard electrodynamics actually do not imply that uniformly accelerated charges radiate, as briefly shown above. However, this claim has been in a mire of controversy [7]. The radiation reaction or self-force, also known as the Lorentz-Abraham-Dirac (LAD) force, is known to have pathological solutions like pre-acceleration and run-away solutions. A recent review of the field will be found in Hammond [8]. In 1955 Bondi and Gold claimed that within the context of General Relativity a static charge in a static gravitational field cannot radiate energy, thus appearing to violate a particular version of the equivalence principle [9]. They resolved this paradox by showing that hyperbolic motion requires a homogeneous gravitational field of infinite extension and that such a field does not exist in nature. The ensuing debate by DeWitt and Brehme [10], Fulton and Rohrlich [11], Boulware [12], Parrott [13] and others has been reviewed by Grøn [14] and Lyle [15], and the matter still remains unresolved. One of the main concerns is whether the controversial radiation is consistent with EEP. Let us turn the question around and ask: under what conditions can radiation by a uniformly accelerated charge in flat space-time be reconciled with EEP without requiring the controversial radiation reaction force? The first point to note is that since General Relativity requires acceleration to be relative, and only an accelerated charge can radiate, electromagnetic radiation must also be relativized. In other words, radiation and gravity must be generated and also transformed away by the same coordinate transformation. This requires some sort of unification of electrodynamics and gravity. 
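The run-away pathology mentioned above can be seen directly from the non-relativistic LAD equation written as m a = F_ext + mτ ȧ: with F_ext = 0 it admits, besides a = 0, the solution a(t) = a₀ e^{t/τ}. A minimal sketch (τ is rescaled to 1 here purely for readability; physically τ ∼ 10⁻²³ s for an electron):

```python
import math

# Non-relativistic LAD equation with no external force reduces to a = tau * da/dt,
# whose non-trivial solution a(t) = a0 * exp(t / tau) grows without bound.
tau = 1.0          # characteristic time, rescaled for illustration
a0 = 1e-12         # an arbitrarily small initial acceleration

for t in (0.0, 10.0, 30.0, 60.0):
    a = a0 * math.exp(t / tau)
    print(t, a)
# However small a0 is, a(t) eventually diverges -- the run-away solution.
```

Excluding such solutions by boundary conditions is what introduces the equally pathological pre-acceleration, which is why the self-force program remains controversial.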
To elaborate the point, consider a test particle A with charge in an infinitesimally small world region R which is sufficiently far removed from all other bodies and hence flat and Minkowskian. Let K be a Galilean frame relative to which A is at rest. Relative to another frame K′ which has an arbitrary uniform acceleration relative to K, A obviously has a uniform acceleration and radiates. As we have seen, an observer O′ at rest relative to K′ can legitimately consider K′ to be at rest and Galilean, and conceptualize the observed radiation by a uniformly accelerated A by saying that in addition to the usual homogeneous gravitational field, the world region R possesses another homogeneous gravitational field of electromagnetic origin which keeps the acceleration of A uniform as it radiates. The radiation and the associated gravitational acceleration must be such as to disappear relative to K. Now consider the Poynting theorem

\frac{\partial\rho}{\partial t} + \vec{\nabla}\cdot\frac{1}{\mu_0}(\vec{E}\times\vec{B}) = 0   (3)

where \rho = \frac{1}{2}[\epsilon_0\,\vec{E}\cdot\vec{E} + \frac{1}{\mu_0}\vec{B}\cdot\vec{B}] is the energy density of the electromagnetic field in R and \vec{S} = \frac{1}{\mu_0}(\vec{E}\times\vec{B}) is the Poynting vector. This means

{\rm div}\,\vec{S} = -\frac{\partial\rho}{\partial t} \equiv P,   (4)

and div S > 0 only if P > 0. Hence, a positive P must exist for radiation to occur. Let us now consider the special case of a single charged body A which is both stationary and accelerating. For example, a charged body in a homogeneous gravitational field can be held fixed at a point relative to a Galilean frame K′, and yet it is also subject to a proper gravitational acceleration. If we set \vec{v}_A = 0 relative to K′, we get from Poynting's theorem

{\rm div}\,\vec{S} = -\frac{\partial\rho}{\partial t} = 0   (5)

in a static condition. Hence, such a charged body does not radiate relative to O′. However, the body will be accelerated relative to K, and hence will radiate relative to it. Thus, like gravitation, radiation must be relativized once the principle of relativity is extended to all coordinate frames, no matter how they move relative to one another.
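As a numerical cross-check of the Poynting balance used in Eqs. (3)-(5) (an illustrative sketch, not part of the paper), one can verify div S = −∂ρ/∂t for a vacuum plane wave E = E₀cos(kz − ωt) x̂, B = (E₀/c)cos(kz − ωt) ŷ, where the divergence reduces to a single z-derivative:

```python
import numpy as np

eps0 = 8.8541878128e-12
c = 2.99792458e8
mu0 = 1.0 / (eps0 * c**2)
E0, k = 1.0, 1.0
w = c * k                      # vacuum dispersion relation, omega = c k

def fields(z, t):
    phase = np.cos(k * z - w * t)
    return E0 * phase, (E0 / c) * phase          # Ex, By of the plane wave

def rho(z, t):
    Ex, By = fields(z, t)
    return 0.5 * eps0 * Ex**2 + 0.5 * By**2 / mu0  # energy density of Eq. (3)

def Sz(z, t):
    Ex, By = fields(z, t)
    return Ex * By / mu0                          # z-component of (E x B)/mu0

z, t, h = 0.3, 0.0, 1e-6
div_S = (Sz(z + h, t) - Sz(z - h, t)) / (2 * h)          # only d/dz survives
drho_dt = (rho(z, t + h / c) - rho(z, t - h / c)) / (2 * h / c)
print(np.isclose(div_S, -drho_dt, rtol=1e-6))             # -> True
```

The finite-difference check confirms that energy flowing out of a region (div S > 0) is exactly paid for by a decreasing field energy density, which is the content of P in Eq. (4).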
Some of the debate in the literature stems from a failure to see this point. It must be emphasized that one is not arguing for an equivalence principle for electromagnetism as a whole because, unlike mass, electric charge q can be positive and negative, and the q/m ratio is not universal. The issue is whether radiation by a uniformly accelerated charge requires a unification of gravity and electromagnetism. Before concluding this section, it would be worthwhile to recall how Einstein summarized the situation regarding his Equivalence Principle and his theory of gravity [21]: In fact, through this conception we arrive at the unity of the nature of inertia and gravitation. For according to our way of looking at it, the same masses may appear to be either under the action of inertia alone (with respect to K) or under the combined action of inertia and gravitation (with respect to K ′ ). The above analysis shows that it would be possible to make a similar statement about EEP and a unified theory of gravity and electromagnetism, if it were to exist, namely that through the above considerations one would arrive at the unity of the nature of inertia, gravitation and electromagnetism. For according to this way of looking at it, the same bodies may appear to be either under the action of inertia alone or under the combined action of inertia and unified gravity and electromagnetism. We will now proceed to construct such a unified theory. Unification of Gravity and Electromagnetism A proper understanding of how gravity and electromagnetism can be simultaneously generated or transformed away by a general coordinate transformation requires a proper unified theory of the two fields at the classical level. Goenner has written a history of such unified theories [16]. They were abandoned in the past because of several reasons. 
First, they attempted to solve 'all problems regarding the elementary particles of matter with the help of classical fields which are everywhere regular (free of singularities)' [17], which proved unsuccessful. Second, quantum mechanics and nuclear physics were discovered, and the hope of achieving any fundamental understanding in purely classical terms was given up. Third, the electromagnetic and gravitational coupling strengths are enormously different, and any symmetry between these fields must be badly broken, but the idea of broken symmetries that can be restored in some regime was then unknown. We will take the point of view that unlike the nuclear interactions, gravity and electromagnetism are the only two long range fields that are already very well described in classical terms, and hence a search for their unity in purely classical terms without being ambitious to solve 'all problems regarding the elementary particles of matter with the help of classical fields which are everywhere regular (free of singularities)' would be worthwhile, particularly because no attempt to quantize gravity has been successful so far. The immediate motivation for the search for unity, however, comes from the requirement of EEP in the presence of charged particles as discussed in the previous section. After Weyl's and Kaluza's attempts at unification, it was Eddington [18], supported by Einstein [19,20], who first proposed to replace the metric as a fundamental concept by a non-symmetric affine connection Γ and a non-symmetric metric g which can then be split into a symmetric and an anti-symmetric part. Just as passing beyond Euclidean geometry gravitation makes its appearance, so going beyond Riemannian geometry electromagnetism appears naturally as the anti-symmetric part of the metric without requiring any higher dimensional space. 
Let M(Γ, g) be a smooth U 4 manifold with signature (−, +, +, +), endowed with a non-symmetric linear connection Γ and a non-symmetric metric g. If one splits the non-symmetric connection into a symmetric and an anti-symmetric part, Γ λ µν = Γ λ (µν) + Γ λ [µν] , (6) Γ λ (µν) = 1 2 (Γ λ µν + Γ λ νµ ), (7) Γ λ [µν] = 1 2 (Γ λ µν − Γ λ νµ ) ≡ Q λ µν , (8) then Γ λ [µν] is called the Cartan torsion tensor and Γ µ = Γ λ [µλ] the torsion pseudovector. One can also split the non-symmetric metric g µν into the symmetric and anti-symmetric tensor densities s µν = 1 2 √ −g (g µν + g νµ ) ≡ 1 2 (ḡ µν + ḡ νµ ) = 1 2 √ −g g (µν) , (9) a µν = 1 2 √ −g (g µν − g νµ ) ≡ 1 2 (ḡ µν − ḡ νµ ) = 1 2 √ −g g [µν] . (10) To restrict the number of possible covariant terms in a non-symmetric theory [21], Einstein and Kaufman imposed transposition invariance and λ-transformation invariance on the theory. Let us first note the definitions of these symmetries. λ-transformation or projective symmetry Define the transformations Γ λ * µν = Γ λ µν + δ λ µ λ , ν , g µν * = g µν , (11) where λ is an arbitrary function of the coordinates. Then the contracted curvature tensor E µν = Γ λ µν, λ − Γ λ µλ, ν + Γ ξ µν Γ λ ξλ − Γ ξ µλ Γ λ ξν , (12) which is the generalization of the Ricci tensor R µν to the non-symmetric theory, is λ-transformation invariant. What this means is that a theory characterized by E µν cannot determine the Γ-field completely but only up to an arbitrary function λ. Hence, in such a theory, Γ and Γ * represent the same field. Further, this λ-transformation produces a non-symmetric Γ * from a Γ that is symmetric or anti-symmetric in the lower indices. Hence, the symmetry condition for Γ loses objective significance. This sets the ground for a genuine unification of gravity and electromagnetism, the former determined by the symmetric part and the latter by the antisymmetric part of the action. Transposition symmetry Let Γ̃ λ µν = Γ λ νµ and g̃ µν = g νµ.
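The claimed λ-transformation invariance of E µν can be checked by brute force. The sketch below (a verification of my own, not from the paper) takes a coordinate-independent test connection, applies Γ* λ µν = Γ λ µν + δ λ µ λ ,ν componentwise, and confirms that E µν is unchanged:

```python
import sympy as sp

xs = sp.symbols('x0:4')
lam = sp.Function('lam')(*xs)         # the arbitrary function λ(x)
d = lambda a, b: 1 if a == b else 0

# a coordinate-independent test connection Γ^λ_{μν}
Gam = [[[sp.Symbol(f'G_{l}_{m}_{n}') for n in range(4)]
        for m in range(4)] for l in range(4)]
# λ-transformed connection Γ*^λ_{μν} = Γ^λ_{μν} + δ^λ_μ ∂_ν λ, eqn (11)
Gs = [[[Gam[l][m][n] + d(l, m)*sp.diff(lam, xs[n]) for n in range(4)]
       for m in range(4)] for l in range(4)]

def E(C, m, n):
    """Contracted curvature E_{μν} of eqn (12) for connection C."""
    val = sum(sp.diff(C[l][m][n], xs[l]) - sp.diff(C[l][m][l], xs[n])
              for l in range(4))
    val += sum(C[x][m][n]*C[l][x][l] - C[x][m][l]*C[l][x][n]
               for x in range(4) for l in range(4))
    return val

# E_{μν} is unchanged component by component
for m in range(4):
    for n in range(4):
        assert sp.expand(E(Gs, m, n) - E(Gam, m, n)) == 0
```

The derivative terms cancel by the symmetry of second partials of λ, and the quadratic terms cancel algebraically, which is exactly why Γ and Γ* represent the same field.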
Then terms that are invariant under the simultaneous replacements of Γ λ µν and g µν byΓ λ νµ andg νµ are called transposition invariant. For example, the tensor E µν (12) is not transposition invariant because it is transposed tõ E νµ = Γ λ νµ, λ − Γ λ λµ, ν + Γ ξ νµ Γ λ λξ − Γ ξ λµ Γ λ νξ .(13) However, if new quantities U λ µν = Γ λ µν − Γ α µα δ λ ν , U λ µλ = −3Γ λ µλ Γ λ µν = U λ µν − 1 3 U α µα δ λ ν ,(14) are introduced, then the contracted curvature tensor expressed in terms of the U λ µν , E µν (U) = U λ µν, λ − U λ µβ U β λν + 1 3 U λ µλ U β βν ,(15) is transposition invariant. However, as Pauli pointed out, EEP is entirely missing from a unified theory based on E µν (U) [17]. He also observed that such theories 'are in disagreement with the principle that only irreducible quantities should be used in field theories' and no cogent mathematical reasons were given as to why the decomposition of the quantities (E(U), g, Γ) used in the theory do not occur. A variant of Bose's 1953 theory In 1953 S. N. Bose [22] proposed a variation of Einstein's idea in which, unlike Einstein, he did not set the torsion pseudovector Γ µ to zero and also used irreducible quantities. We will develop a variant of Bose's theory with additional λ-transformation symmetry or 'projective symmetry' as it is called now. A matter Lagrangian L m (ψ, g, Γ) obtained by minimally coupling matter fields ψ to the connection is not generally projective invariant. Hence, once the matter Lagrangian is introduced, projective symmetry is explicitly broken. We will first formulate a projective and transposition invariant theory to unify gravity and electromagnetism and then break projective invariance explicitly. 'Spacetime' will emerge with its geometrical properties determined by the stress-energy of matter (as in the Einstein theory) as well as by the torsion tensor Γ µ Γ ν . 
Instead of introducing the quantities U λ µν to have transposition invariance, Bose achieved transposition invariance by writing the invariant action in the form 2κI = 1 2 ḡ µν E µν +ḡ νµẼ νµ + a s µν Γ µ Γ ν + b a µν (Γ µ,ν − Γ ν,µ )(16) where κ = 8πG/c 4 , and a, b are arbitrary dimensionless constants, but this action is not projective invariant unless a = 0 because Γ λ * µλ = Γ λ µλ + λ , µ , Γ λ * [µλ] = Γ * µ = Γ µ + λ , µ ,(17) and hence Γ µ,ν − Γ ν,µ is projective invariant but not Γ µ Γ ν . As shown in the Appendix (Part 1), the action (16) can be expressed in terms of the Ricci tensor R µν = Γ λ (µν), λ − Γ λ (µλ), ν + Γ ξ (µν) Γ λ (ξλ) − Γ ξ (µλ) Γ λ (ξν)(18) and its antisymmetric counterpart Q λ µν; λ = Q λ µν, λ − Q λ µξ Γ ξ (λν) − Q λ ξν Γ ξ (µλ) + Q ξ µν Γ λ (ξλ) ,(19)Q λ µν = Γ λ [µν] + 1 3 δ λ µ Γ ν − 1 3 δ λ ν Γ µ , Q λ µλ = 0 (20) as 2κI = s µν R µν − Q λ µξ Q ξ λν + a + 1 3 Γ µ Γ ν + a µν Q λ µν; λ + b − 1 6 [Γ µ,ν − Γ ν,µ ] ≡ s µν R µν − Q λ µξ Q ξ λν + xΓ µ Γ ν + a µν Q λ µν; λ − y(Γ µ,ν − Γ ν,µ )(21) with x = a + 1 3 , y = −(b − 1 6 ). Then using the action principle δI = δ L d 4 x = 0, and varying s µν and a µν independently while keeping the connections fixed give the field equations R µν − Q λ µξ Q ξ λν + xΓ µ Γ ν = R µν − Γ λ [µξ] Γ ξ [λν] + aΓ µ Γ ν = 0,(22)Q λ µν; λ − y(Γ µ,ν − Γ ν,µ ) = 0.(23) Notice that torsion contributes to the first equation (22) (gravity) but equation (23) has no contribution from the symmetrical part. If one sets a = 0, one obtains equations that are projective and transposition invariant. We will assume this to be the case to start with. Thus, there is only one free parameter in the theory, namely b which the symmetries permit. We will see in what follows that this is the optimal formulation that is consistent with EEP. Once the matter Lagrangian is introduced, projective symmetry will be broken, and a term proportional to Γ µ Γ ν will occur in eqn. 
(22) with a coefficient determined by the strength a of the symmetry violation. Note that the symmetries of the theory do not require Γ µ to vanish. The consequences will be explored in the following. The gravitational sector Since Γ µ = Γ λ [µλ] , one can write Γ λ [µξ] = 1 3 (Γ µ δ λ ξ − Γ ξ δ λ µ ) (24) and hence Γ λ [µξ] Γ ξ [λν] = − 1 3 Γ µ Γ ν . (25) In the presence of matter eqn. (22) can therefore be written as R µν + xΓ µ Γ ν = κT µν . (26) This equation implies that R = −xΓ + κT where R = g (µν) R µν , T = g (µν) T µν , Γ = g (µν) Γ µ Γ ν , and hence it can be rewritten in the form G µν + g (µν) λ = κT µν − xΓ µ Γ ν , (27) G µν = R µν − 1 2 g (µν) R, λ = 1 2 R = 1 2 (κT − xΓ) . (28) With the choice (24) the traceless torsion Q λ µν = 0 and eqn. (23) reduces to Γ µ,ν − Γ ν,µ = 0. (29) Hence, Γ µ is an irrotational pseudovector. As we will see in the section on electromagnetism, Γ 0 is essentially a magnetic monopole density, and hence there is a Dirac string and space is not simply connected. Let S = R 3 \ {(0, 0, z ≤ 0)|z ∈ R} be the usual 3-dimensional space with the negative z-axis removed. Then the curl-free vector Γ = −∇Φ, ∇ 2 Φ = 0 has vortex solutions Γ = e φ /r where e φ is a unit vector, and the integral over a unit circular path C enclosing the origin is ∮ C Γ · e φ dφ = ±2π. (30) Hence, Γ is quantized in units of 2π though it is still conservative in every subregion of S that does not include the origin and the Dirac string. Similarly, there will be quantized vortices in the (x, t), (y, t) planes. The implications of all this in classical optics and weak gravitational fields will be elaborated in separate papers to follow [23].
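The algebraic content of eqns (24)-(25), namely that the ansatz (24) has trace Γ λ [µλ] = Γ µ in four dimensions and that its contraction reproduces −(1/3)Γ µ Γ ν , can be verified directly. A small symbolic check (mine, for illustration):

```python
import sympy as sp

dim = 4
G = sp.symbols(f'Gamma0:{dim}')   # components of the torsion pseudovector
d = lambda a, b: 1 if a == b else 0
# eqn (24): Γ^λ_[μξ] = (1/3)(Γ_μ δ^λ_ξ − Γ_ξ δ^λ_μ)
T = lambda l, m, x: sp.Rational(1, 3)*(G[m]*d(l, x) - G[x]*d(l, m))

# trace consistency: Γ^λ_[μλ] = Γ_μ
for m in range(dim):
    assert sp.expand(sum(T(l, m, l) for l in range(dim)) - G[m]) == 0

# eqn (25): Γ^λ_[μξ] Γ^ξ_[λν] = −(1/3) Γ_μ Γ_ν
for m in range(dim):
    for n in range(dim):
        s = sum(T(l, m, x)*T(x, l, n) for l in range(dim) for x in range(dim))
        assert sp.expand(s + sp.Rational(1, 3)*G[m]*G[n]) == 0
```

The factor −1/3 comes from the combination (1 − 4 − 1 + 1)/9 of the four delta contractions, so it is specific to four dimensions.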
The equations of connection In order to derive the equations connecting the metric g and the connections Γ, one can use a generalized variation (as shown in the Appendix (Part 2)) to derive the equations g µν , λ + g µα Γ ′ ν λα + g αν Γ ′ µ α λ = g µν P λ(31) where Γ ′ ν λα = Γ ν (λα) + 1 4 √ −g g [λβ] k β δ ν α + λ ↔ α + 1 4 √ −g g [λβ] k β δ ν α − λ ↔ α + Q ν λα + 1 √ −g (g λβ k β δ ν α − g αβ k β δ ν λ ),(32)Γ ′ µ α λ = Γ µ (αλ) + 1 4 √ −g g [λβ] k β δ µ α + λ ↔ α + 1 4 √ −g g [λβ] k β δ µ α − λ ↔ α + Q µ α λ + 1 √ −g (g βα k β δ µ λ − g λβ k β δ µ α )(33) are the new connections and P λ = 1 2 √ −g g , λ g + 3g [λβ] k β .(34) These new connections Γ ′ are clearly not metric compatible. One can define the tensors K ν λα = 1 4 √ −g g [λβ] k β δ ν α + λ ↔ α + K ν λα , K ν λα = 1 4 √ −g g [λβ] k β δ ν α − λ ↔ α + Q ν λα + 1 √ −g (g λβ k β δ ν α − g αβ k β δ ν λ ) K µ αλ = 1 4 √ −g g [λβ] k β δ µ α + λ ↔ α + K µ α λ K µ α λ = 1 4 √ −g g [λβ] k β δ µ α − λ ↔ α + Q µ α λ + 1 √ −g (g βα k β δ µ λ − g λβ k β δ µ α ),(35) where Ks are called contorsion tensors. Since the path-free notion of parallelism must hold locally, i.e. in the infinitesimal neighbourhood of every point on the manifold with coordinates, it is sufficient to impose the metric compatibility condition locally. Then, P λ = 0, g µν , λ + g µα Γ ′ ν λα + g αν Γ ′ µ α λ = 0,(36) and it follows from eqn. (34) that 3g [λβ] k β = − g , λ g .(37) The electromagnetic sector Using the variational principle, it can also be shown (see Appendix (Part 2)) that the relevant part of the Lagrangian density for electrodynamics is L em = 1 2κ ya µν , λ (Γ µ δ λ ν − Γ ν δ λ µ ) + xs µν Γ µ Γ ν ,(38) and varying it w.r.t. Γ µ , one gets a µν ; ν = θs µν Γ ν = 1 2 θ √ −gΓ µ , θ = − x y ,(39)Γ α (λα) = 1 2 g , λ g + g [λβ] k β .(40) Eqn. (39) can be written in the formF µν ; ν = l µ (41) withF µν = ζa µν ,(42)l µ = 1 2 θ √ −gζΓ µ ,(43) where the constant ζ has the dimension of kg/C. 
The most natural choice would be ζ = 1/ √ 4πǫ 0 G where G is the gravitational constant and ǫ 0 is the absolute permittivity of space. Its current value is 1.16 × 10 10 kg.C −1 . There is, as we will see, astrophysical evidence that the value of the scale parameter ζ is indeed 1/ √ 4πǫ 0 G. It follows from eqn. (41) that l µ , µ = 0. Since l µ is a pseudovector, the field F̃ µν is a pseudo tensor. Defining the fields F̃ 0i = −B i , F̃ ij = 1 c ǫ ijk E k , (44) one gets from (41) and the definition l µ = (−µ 0 ρ m , −j m /cǫ 0 ) the equations ∇ × E + ∂B/∂t = − 1 ǫ 0 j m , ∇.B = µ 0 ρ m . (45) Hence, by comparison with electrodynamics, we can interpret l µ as a magnetic current density and F̃ µν as the dual of the electromagnetic field. The magnetic charge is conserved because the divergence of the magnetic current l µ vanishes identically. Eqn. (29) imposes a certain constraint on this magnetic current which will be explored elsewhere [23]. Equations (41), and hence also equations (45), can be written in the form F µν, λ + F νλ, µ + F λµ, ν = ǫ µνλρ l ρ , (46) F̃ µν = 1 2 ǫ µνλρ F λρ , (47) where ǫ µνλρ is the Levi-Civita tensor density with components ±1. This equation reduces to the well-known Bianchi identity in electrodynamics only if Γ µ = 0. It can be easily checked (using the relations (10)) that the electric current density j µ in this theory is given by j µ = 1 3! ǫ µνλρ (F̃ νλ, ρ + F̃ λρ, ν + F̃ ρν, λ ) (48) = 1 3! ζǫ µνλρ (a νλ, ρ + a λρ, ν + a ρν, λ ) (49) = F µν , ν . (50) Writing j µ = (−ρ q /cǫ 0 , −µ 0 j q ) and using the definitions F 0i = −E i /c , F ij = −ǫ ijk B k , (51) one then obtains ∇ × B − 1 c 2 ∂E/∂t = µ 0 j q , ∇.E = 1 ǫ 0 ρ q . (52) These equations can also be written in the form F̃ µν, λ + F̃ νλ, µ + F̃ λµ, ν = ǫ µνλρ j ρ . (53) The two sets of Maxwell equations are mapped into each other by the duality transformation E → cB, cB → −E, (ρ q , j q ) → (ρ m , j m ), (ρ m , j m ) → (−ρ q , −j q ). (54) This symmetry is therefore a consequence of the unified theory.
One can define potentials in the usual way through the relations F µν = ∂ µ A ν − ∂ ν A µ ,(55) but they turn out to be singular in this theory. According to this definition, a static magnetic field due to a charge g is given by B = g r r 3 = ∇ × A.(56) But eqn. (45) contradicts this. It is well known that the solution is the famous Dirac potential [27] which can be written in spherical polar coordinates as A φ = g r tan θ 2φ , A r = A θ = 0 (57) whose solution gives B r = g r r 3 , B φ = B θ = 0.(58) This potential is singular along the negative z axis characterized by θ = π, called the Dirac string which is a semi-infinite line of magnetic dipoles ending in a monopole at the origin. In the field theory under consideration, the potential will be non-holomorphic in general, and instead of a single monopole, there will be a magnetic density at the origin. Everywhere other than where the potential is non-holomorphic, it will be the standard electromagnetic potential. Hence, the potential due to a magnetic moment µ far from the origin is given by the standard expression A i = µ 0 c 4π ( r × µ) i r 3 = − E i dx 0 .(59) Since the potential is gauge dependent, its singularity can be chosen to take any convenient form. We will discuss the classical 'quantization' (in the sense of discretization) of charge and angular momentum in classical optics in an accompanying paper [23]. Einstein's Equivalence Principle Let us now see if this unified theory incorporates EEP. Although the connections in the theory are non-symmetric, the geodesic of a test particle in a coordinate frame is determined in this theory by a λ = d 2 x λ ds 2 = −Γ ′λ µν dx µ ds dx ν ds = − Γ λ (µν) + 1 8 a µβ k β δ λ ν + µ ↔ ν u µ u ν ≡ − Γ λ (µν) +Γ λ (µν) u µ u ν (60) because the antisymmetric part of the connection does not contribute to the geodesic. 
Thus, the acceleration is universal and consists of two components: the first component is the familiar Einstein gravitational acceleration and the second component is an additional gravitational acceleration that is induced by the antisymmetric part a µν of the metric tensor density and the vector k β = θs βν Γ ν . Since (Γ λ (µν) + Γ̂ λ (µν) ) is symmetric in the lower indices, the Weyl theorem [28] guarantees that it can be made to vanish in the neighbourhood of a point on the manifold by a coordinate transformation, and hence all observable effects of gravity, including those originating in electromagnetism, can be transformed away in that neighbourhood by a coordinate transformation. This can be explicitly demonstrated as follows. First note that combining eqns. (37) and (40), one has locally Γ α (λα) = g [λβ] k β . (61) Now, let p be a point on the manifold M(Γ, g) and let (U, χ = (x µ )) and (U ′ , χ ′ = (x ′µ )) be two intersecting local charts with p ∈ U, x µ (p) = 0, x ′µ (p) = 0 and x µ = x ′µ − 1 2 Γ µ (νρ) x ′ν x ′ρ (62) because torsion does not contribute to this change of coordinates. Writing g µν (p) = η µν + · · · and remembering that Γ λ (µν) (p) can be made to vanish by Weyl's theorem, one gets on using equations (36) and (61), g ′µν (x ′λ ) = η µν + g ′µν ,λ (p)x ′λ + O(x ′2 ) = η µν + (−g µα Γ ′ ν λα − g αν Γ ′ µ α λ ) p x ′λ + O(x ′2 ) (63) = η µν − g µν ( 1 4 √ −g g [λβ] k β δ ν α + λ ↔ α ) p x ′λ + (−η (µα) K ν [λα] − η (αν) K µ [α λ] ) p x ′λ + O(x ′2 ) ≡ η µν (1 + 1 4 √ −g (Γ δ (λδ) δ ν α + Γ δ (αδ) δ ν λ ) x ′λ ) p − [(K µν λ + K νµ λ )] p x ′λ + O(x ′2 ) (64) = η µν + O(x ′2 ). (65) Thus, the additional gravity induced by electromagnetism can also be transformed away locally. Hence both gravity and electromagnetism are geometric structures of the theory, gravity corresponding to the symmetric Riemannian part of the manifold and electromagnetism to its antisymmetric part together with Γ µ ≠ 0, which implies a magnetic current density.
This ensures that the acceleration of a charged test particle by a locally homogeneous gravitational field produced by 'electro-gravity' is physically indistinguishable from that of a free test particle at rest in a comoving coordinate system, which is the physical content of EEP. Some Predictions (i) Spherically symmetric and static solution Outside a spherically symmetric body of mass M, the gravitational field equation is given by eqn. (26) with T µν = 0. A spherically symmetric and static solution of the equation requires the line element to be given by ds 2 = −c 2 e 2m dt 2 + e 2n dr 2 + r 2 dφ 2 + r 2 sin 2 φ dθ 2 (66) where m and n are functions of r, not t. Thus, all off-diagonal elements of g µν vanish and the diagonal elements are all time independent. In order to calculate the elements of the Ricci tensor R µν , one must first calculate the Christoffel symbols from g µν by using the connection between them, which in this theory is given by eqn. (36). Since the off-diagonal elements of g µν vanish, all g [µν] = 0 and eqn. (36) reduces to g (µν) , λ + g (µα) (Γ ′ ν (λα) + K ν λα ) + g (αν) (Γ ′ µ (α λ) + K µ α λ ) = 0. (67) Interchanging µ and ν, we get g (µν) , λ + g (να) (Γ ′ µ (λα) + K µ λα ) + g (αµ) (Γ ′ ν (α λ) + K ν α λ ) = 0. (68) Adding these two equations and using the antisymmetry of K in the lower indices, one has g (µν) , λ + g (µα) Γ ′ ν (αλ) + g (να) Γ ′ µ (αλ) = 0, (69) from which it follows (on cyclically varying the free indices and summing) that Γ ′ λ (µν) = 1 2 g (λβ) (g (µβ), ν + g (νβ), µ − g (µν), β ), (70) which is the standard expression in Riemannian geometry. Putting Γ i = 0 for a static solution and following the standard procedure [29], one obtains the equations R 00 = e 2m−2n (−m ′′ + m ′ n ′ − m ′2 − 2m ′ /r) = −xΓ 2 0 ≠ 0, (71) R 11 = m ′′ − m ′ n ′ + m ′2 − 2n ′ /r = 0, (72) R 22 = e −2n (1 + m ′ r − n ′ r) − 1 = 0, (73) R 33 = (e −2n (1 − n ′ r + m ′ r) − 1) sin 2 φ = 0, (74) where the prime denotes a derivative with respect to r.
It follows from (71) and (72) that e 2m−2n (dm/dr + dn/dr) = x 2 rΓ 2 0 . (75) Integrating with respect to r and remembering that as r → ∞, m → 0 and n → 0 to ensure an asymptotically flat metric, one gets e −2n − e 2m = x ∫ ∞ 0 rΓ 2 0 dr. (76) Since m and n are very small in the asymptotic region, one can write e 2m ≃ 1 + 2m and e −2n ≃ 1 − 2n, and hence we have m + n = − x 2 ∫ ∞ 0 rΓ 2 0 dr ≡ −β ∞ ≠ 0 (77) where β ∞ is a constant. It follows from this and eqn. (73) that n ′ = −m ′ and hence e 2(m+β∞) (1 + 2m ′ r) = 1. Since d dr (re 2(m+β∞) ) = e 2(m+β∞) (1 + 2m ′ r) = 1, one gets on integrating with respect to r e 2m = (1 + α/r) e −2β∞ (80) where α is an integration constant. Therefore, e 2n = e −2(β∞+m) = (1 + α/r) −1 . (81) Choosing α = −2GM/c 2 , one obtains ds 2 = −c 2 (1 − 2GM/c 2 r) e −2β∞ dt 2 + (1 − 2GM/c 2 r) −1 dr 2 + r 2 dφ 2 + r 2 sin 2 φ dθ 2 , (82) which reduces to the Schwarzschild metric in the limit Γ 0 = 0. Notice that even in the absence of the conventional stress-energy tensor T µν (in this case T 00 = ρc 2 = 0 and hence M = 0) which is the source of Einstein gravity, the metric is modified: the nonlocally measured speed of light c ′ = ce −β∞ is different from its local value c, or equivalently, the time interval is changed from dt to e −β∞ dt. Eqn. (77) shows that β ∞ > 0 if the arbitrary parameter x(= a + 1 3 ) > 0. Thus, c ′ < c requires that x > 0. Notice that 1 − 2GM/c 2 r = 1 + 2φ/c 2 , (83) φ = −GM/r, (84) φ being the gravitational potential of the mass M at the centre of the body. Hence, defining φ̃ = −β ∞ c 2 , we have ds 2 = (1 + 2φ/c 2 ) e 2φ̃/c 2 dt 2 + · · · (85) = (1 + 2φ/c 2 + 2φ̃/c 2 + 1 2! 4φ̃ 2 /c 4 + · · · ) dt 2 + · · · (86) For φ̃/c 2 ≪ 1, the higher order terms in φ̃/c 2 can be ignored. Thus, in the limit of weak gravity, φ̃ is an additional gravitational potential which is not due to any additional matter; it is produced by the torsion pseudovector Γ µ in the U 4 manifold and is present even when the stress-energy tensor of matter is absent.
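That the metric functions (80)-(81) actually solve the field equations in the exterior region (where Γ 0 is negligible), with the constant shift β ∞ dropping out of everything except the rescaling of dt, can be checked symbolically. The sketch below (mine, with α kept as a free symbol rather than −2GM/c²) substitutes the solution into (71)-(73):

```python
import sympy as sp

r, alpha, beta = sp.symbols('r alpha beta_inf', positive=True)
m = sp.log(1 + alpha/r)/2 - beta     # e^{2m} = (1 + alpha/r) e^{-2 beta}
n = -sp.log(1 + alpha/r)/2           # e^{2n} = (1 + alpha/r)^{-1}

mp, npr = sp.diff(m, r), sp.diff(n, r)
mpp = sp.diff(m, r, 2)

R00 = sp.exp(2*m - 2*n)*(-mpp + mp*npr - mp**2 - 2*mp/r)
R11 = mpp - mp*npr + mp**2 - 2*npr/r
R22 = sp.exp(-2*n)*(1 + mp*r - npr*r) - 1

assert sp.simplify(R00) == 0   # exterior region, Gamma_0 ~ 0
assert sp.simplify(R11) == 0
assert sp.simplify(R22) == 0
```

Because beta enters m only as an additive constant, all derivatives of m are beta-independent, which makes explicit why the solution differs from Schwarzschild only through the factor e^{−2β∞} multiplying dt².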
It plays a dual role: it produces the magnetic current density in the electromagnetic sector and an additional gravitational potential in the gravity sector. It is the agent that links gravity and electromagnetism. Since φ̃ is a constant, in analogy with Newtonian gravity it is possible to write it as φ̃ = −GM eff (r)/r , M eff (r) ∼ r, (87) where M eff (r) is an effective mass. For a spherical galaxy having an effective mass of density ρ eff (r), the effective mass inside a radius r is M eff (r) = 4 3 πr 3 ρ eff (r), (88) and hence ρ eff (r) ∼ r −2 so that M eff (r) ∼ r. It follows at once from this that the velocity of a test particle (a star) at a distance r from the centre of such a galaxy is v = √ (GM eff (r)/r) = constant. (89) Thus, the theory predicts flat rotation curves of stars at large distances from the centre of such a galaxy. This is, of course, true only in the weak gravity limit. Eqn. (86) predicts higher order corrections. Since such a galaxy is not made of ordinary matter, it must be non-luminous. Let Γ 2 0 (r) be a bell-shaped function like Γ 2 0 (0)e −µr 2 , µ > 0. Then, β ∞ = xΓ 2 0 (0) 2 ∫ ∞ 0 re −µr 2 dr = xΓ 2 0 (0) 4µ = −φ̃/c 2 (90) and β(r) = xΓ 2 0 (0) 2 ∫ r 0 re −µr 2 dr = β ∞ (1 − e −µr 2 ) = −φ̃(r)/c 2 . (91) Thus, there is a gravitational potential φ̃(r) around a spherical static body whose asymptotic value is φ̃. One can interpret this as a dark halo of gravitational tidal field acting like a weak gravitational lens. Since Γ = Γ ν Γ ν = Γ 2 0 when Γ i = 0 and T = 0 for the vacuum solution, the value of the scalar field λ, defined by eqn. (28), at the origin determines the constant β ∞ : β ∞ = 1 2µ λ(0). (92) (ii) The Robertson-Walker metric and cosmology The metric of a homogeneous and isotropic universe is given by [30] ds 2 = −c 2 dt 2 + a 2 (t) (dr 2 /(1 − kr 2 ) + r 2 dΩ 2 ) (93) where a(t) is the scale factor and k = ±1 or 0.
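The two integrals behind eqns (90)-(91) for the Gaussian profile have simple closed forms; a quick symbolic verification (an illustrative sketch, not from the paper):

```python
import sympy as sp

r, R, mu = sp.symbols('r R mu', positive=True)
x, G0 = sp.symbols('x Gamma0', positive=True)   # x and Γ0(0) of the text

integrand = sp.Rational(1, 2)*x*G0**2 * r*sp.exp(-mu*r**2)

# eqn (90): beta_infinity = x Γ0(0)^2 / (4 mu)
beta_inf = sp.integrate(integrand, (r, 0, sp.oo))
assert sp.simplify(beta_inf - x*G0**2/(4*mu)) == 0

# eqn (91): beta(R) = beta_infinity (1 - e^{-mu R^2})
beta_R = sp.integrate(integrand, (r, 0, R))
assert sp.simplify(beta_R - beta_inf*(1 - sp.exp(-mu*R**2))) == 0
```

The finite-radius result saturates exponentially fast at β ∞ , which is what gives the halo a well-defined asymptotic potential φ̃.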
The modified Einstein equations (26) show that even when T µν = 0, there is a stress-energy tensor T (0) µν = −xΓ µ Γ ν , (94) T (0) ii = −xΓ 2 i ≡ − xγ/κ (95) with γ = κΓ 2 1 = κΓ 2 2 = κΓ 2 3 to ensure isotropy. The modified Friedmann equations are ȧ 2 /a 2 = κ 3 ρc 2 − kc 2 /a 2 + xγ/3κ , (96) ä/a = − κ 6 (ρc 2 + 3p) + xγ/3κ . (97) The first of these equations can be written in the form ȧ 2 /a 2 = κ 3 ρc 2 − kc 2 /a 2 + Λc 2 /3 + · · · , Λ = xγ 0 /κc 2 (98) where Λ is the cosmological constant, γ 0 is the zero-mode of the field γ, and the dots represent the higher modes of γ. The equation of state for the zero mode of the field γ is ρc 2 = −p = xγ 0 /κ, i.e. w = −1, and represents 'dark energy' [31]. When ρ and k are both zero, i.e. the universe is empty and spatially flat, one obtains the de Sitter solution only if the higher order modes of γ are negligible. Thus, the scalar field Γ not only determines a cosmological constant accounting for some 'dark energy', it also determines an additional gravitational potential of non-material origin responsible for 'dark matter'-like phenomena [32]. In addition, the field Γ also predicts perturbations to a homogeneous and isotropic cosmic microwave background radiation caused by its higher order modes. (iii) Casimir Effect The Casimir effect is usually explained as the change in the spectrum of zero-point fluctuations of quantum fields brought about by material boundaries [33]. This change requires the occurrence of vacuum energy in the first place, and quantum field theory is the only theory known so far to give rise to it. However, the calculations give a result some 10 122 times larger than the observed value, though contributions from other fields may lower this value [34]. An alternative to quantum field theory that naturally entails vacuum energy is the unified classical theory under consideration, as shown above. The question that arises therefore is whether this theory can explain the Casimir effect.
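For ρ = k = 0 with only the zero mode of γ retained, both modified Friedmann equations reduce to ȧ²/a² = ä/a = Λc²/3, whose solution is the de Sitter scale factor a(t) = e^{Ht} with H² = Λc²/3. A minimal consistency check (my own sketch):

```python
import sympy as sp

t, H = sp.symbols('t H', positive=True)
a = sp.exp(H*t)                 # de Sitter scale factor, H^2 = Lambda c^2 / 3
hubble = sp.diff(a, t)/a        # \dot a / a
accel = sp.diff(a, t, 2)/a      # \ddot a / a

# with rho = k = 0, eqns (96)-(97) both reduce to the same constant x*gamma_0/(3*kappa)
assert sp.simplify(hubble**2 - H**2) == 0
assert sp.simplify(accel - H**2) == 0
```

The fact that the expansion rate and the acceleration term coincide is exactly the w = −1 equation of state quoted in the text.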
The answer is positive and a simple argument will now be given. Consider the time-time component of the stress-energy tensor (eqn. (94)) T (0) 00 = −xΓ 2 0 ≡ − xǫ/κ . (99) Consider a gedanken Casimir cavity that allows only the zero or lowest mode of the vacuum field Γ 0 inside. Writing ǫ = ǫ 0 + ǫ ′ where ǫ 0 is the energy density inside the cavity, let us consider the difference x κ ǫ ′ = x κ (ǫ − ǫ 0 ) ≠ 0. (100) This is therefore an expression for the Casimir effect. Cosmological observations indicate that ǫ ′ is extremely small and positive. (iv) Schuster-Blackett-Wilson Relation It is well known that for weak gravitational fields one can write g (µν) = η µν + h µν where h is a perturbation on the Minkowski metric η, and the gravitational field equations reduce to Maxwell-like forms 2F µν g = ∂ µ A ν g − ∂ ν A µ g , (101) ∂ µ F µν g = −4πGj ν , (102) where A g µ = (−A g 0 , A g ), A g 0 = φ g . In analogy with electrodynamics, one can also define the fields E g and B g by E g = −∇Φ g − 1 c ∂ ∂t ( 1 2 A g ), (103) B g = ∇ × A g , (104) with A g i = − 1 2 c 2 h (0i) = − 1 2 c 2 g (0i) = −2 ∫ E gi dx 0 . (105) These are called respectively the gravitoelectric and gravitomagnetic fields [35]. The main difference with Newtonian gravity is the existence of the gravitomagnetic field B g . Some fundamental differences with electrodynamics are reflected in the minus sign in (102) (gravity is always attractive) and factors of 2 because gravitational waves are radiated by quadrupoles. In the unified theory under consideration the universal dimensional parameter connecting the electromagnetic field to the metric components (and therefore to the gravitational field) being ζ|â| (eqn.
(42)), the ratio of the gravitational and electromagnetic accelerations E gi and E i is fixed to be E gi /E i = 1/ζ . (106) It follows from eqns (59) and (105) therefore that A g i /A i = 2/ζ . (107) Now, the gravitomagnetic vector potential A g i at a large distance from a small rotating body of angular momentum J is given by [36,37] A g i = − G c ( r × J ) i /r 3 , (108) and the electromagnetic potential at a large distance from a magnetic moment µ is given by (59). If θ J and θ µ are the angles between the radius vector r and J and r and µ respectively, and β = (sin θ J /sin θ µ ), eqn. (107) predicts that the gyromagnetic ratio of the body is given by γ = | µ|/| J| ≃ 1 2ζ β = √ G 2 √ k β = 4.3 × 10 −11 β C/kg, (109) where k = 1/4πǫ 0 . This is the empirical Schuster-Blackett-Wilson relation if the factor β is of order unity. This empirical law is valid for an amazingly wide variety of astronomical bodies [38,39]. Einstein had proposed a similar relationship in 1924 to account for terrestrial and solar magnetism [40]. An immediate implication of this is that slowly rotating spherical and electrically neutral bodies generate both gravitational and magnetic fields. This provides a possible unified theoretical basis for the origin of cosmic magnetic fields that pervade the universe and of the intense magnetic fields near rotating black holes, connected with quasars and gamma-ray bursts, for whose origin the Schuster-Blackett-Wilson relation has been used as a mechanism for non-minimal gravitational-electromagnetic coupling (NMGEC) [41,42,43]. Furthermore, the unified theory unequivocally predicts the presence of primordial magnetic fields and curlless magnetic currents (29) which should have important consequences for CMBR anisotropies [44] and other cosmic phenomena. Concluding Remarks I have argued that radiation by uniformly accelerated charged particles is a strong indication to look for a geometric unification of gravity and electromagnetism right at the classical level.
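The numerical values quoted for ζ and for the gyromagnetic ratio in eqn. (109) follow directly from G and ε 0 . A quick numeric check with β = 1 (standard constant values assumed):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
eps0 = 8.854e-12         # vacuum permittivity, F/m
k = 1/(4*math.pi*eps0)   # Coulomb constant, N m^2 C^-2

zeta = 1/math.sqrt(4*math.pi*eps0*G)    # scale parameter, kg/C
gyro = math.sqrt(G)/(2*math.sqrt(k))    # eqn (109) with beta = 1, C/kg

assert abs(zeta - 1.16e10)/1.16e10 < 0.01     # matches "1.16 x 10^10 kg/C"
assert abs(gyro - 4.3e-11)/4.3e-11 < 0.01     # matches "4.3 x 10^-11 C/kg"
```

Note that gyro = 1/(2 zeta), so the quoted gyromagnetic ratio and the quoted value of ζ are one and the same prediction.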
Such a theory, based on a metric-affine U 4 manifold, has been constructed. It is a variation of the theory proposed by S. N. Bose in 1953, and some of its physical implications have been worked out. The theory differs from the one proposed by Einstein [21] mainly in admitting a non-vanishing torsion pseudovector Γ µ which links gravity and electromagnetism. As we have seen, this leads to a modification of Einstein gravity with a cosmological constant which is the zero-mode of the scalar field Γ = Γ µ Γ µ , implying an accelerated expansion of the universe, and hence to a simple understanding of 'dark energy' and CMBR perturbations, albeit qualitative at this stage. A spherically symmetric and static solution of the modified Einstein equation leads also to a qualitative understanding of 'dark matter'-like phenomena such as non-material and hence non-luminous halos surrounding galaxies, flat rotation curves and weak gravitational lensing. In addition, it also predicts a Casimir Effect without quantum zero-point fluctuations. The theory predicts Maxwell's equations with a magnetic current proportional to Γ µ in addition to an electric current, thus satisfying Heaviside duality, even in the absence of any matter. The charges are therefore of a geometric or topological nature, and Maxwell's equations acquire a new significance. Thus, the proposed unification of gravity and electromagnetism offers a conceptually simple understanding of a wide range of physical phenomena including both 'dark matter' and 'dark energy', two of the most outstanding problems in physics today. At the same time it also predicts the long-conjectured Schuster-Blackett-Wilson relation for the gyromagnetic ratio of a striking number of rotating astrophysical bodies, opening up the possibility of further investigations into the origin of cosmic magnetic fields and their effects on CMBR anisotropies.
The implications for classical optics and weak gravitational fields have also been worked out and will be elaborated in separate papers to follow.

Acknowledgement

The author is grateful to Anirban Mukherjee for helpful discussions and to the National Academy of Sciences, India for a grant.

Appendix

Part 1

The generalized Ricci curvature tensor on $M(\Gamma, g)$ is
$$E_{\mu\nu} = \Gamma^{\lambda}_{\mu\nu,\lambda} - \Gamma^{\lambda}_{\mu\lambda,\nu} + \Gamma^{\xi}_{\mu\nu}\Gamma^{\lambda}_{\xi\lambda} - \Gamma^{\xi}_{\mu\lambda}\Gamma^{\lambda}_{\xi\nu}. \qquad (110)$$
By transposition it is converted into
$$\tilde{E}_{\nu\mu} = \Gamma^{\lambda}_{\nu\mu,\lambda} - \Gamma^{\lambda}_{\lambda\mu,\nu} + \Gamma^{\xi}_{\nu\mu}\Gamma^{\lambda}_{\lambda\xi} - \Gamma^{\xi}_{\lambda\mu}\Gamma^{\lambda}_{\nu\xi}. \qquad (111)$$
If one imposes the metricity condition $P_\lambda = 0$, it follows from eqn. (129) that $\Gamma^{\alpha}_{(\lambda\alpha)} = -g^{[\lambda\beta]} k_\beta$.

Finally, we give the matrix forms of the electromagnetic field tensors used:
$$F_{\mu\nu} = \begin{pmatrix} 0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0 \end{pmatrix}, \qquad
\tilde{F}^{\mu\nu} = \frac{1}{2}\epsilon^{\mu\nu\lambda\rho}F_{\lambda\rho} = \begin{pmatrix} 0 & -B_x & -B_y & -B_z \\ B_x & 0 & E_z/c & -E_y/c \\ B_y & -E_z/c & 0 & E_x/c \\ B_z & E_y/c & -E_x/c & 0 \end{pmatrix}.$$
It is clear from this that the dual fields $\tilde{F}$ do not satisfy the standard Bianchi identity because $j^{\mu} \neq 0$. The Maxwell equations thus acquire a new geometric significance. (The matrix representations of the fields $F_{\mu\nu}$ and $\tilde{F}^{\mu\nu}$ used in this paper are given at the end of the Appendix.) Thus, the full set of Maxwell's equations in the presence of electric and magnetic currents are equivalently described by one of the following combinations of equations: eqns (45) and (52); or eqns (41) and (50); or eqns (46) and (53); or eqns (41) and (53); or eqns (46) and (50).

Part 2

Thus, H is free of the partial derivatives of $\Gamma^{\lambda}_{(\mu\nu)}$, $Q^{\lambda}_{\mu\nu}$ and $\Gamma_\mu$, and the four-divergence term in the action integral is equal to a surface integral at infinity on which all arbitrary variations are taken to vanish. Now, it follows from the definition of $Q^{\lambda}_{\mu\nu}$ that $Q^{\lambda}_{\mu\lambda} = 0$, and hence not all of the 24 components of $Q^{\lambda}_{\mu\nu}$ are independent.
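The dual tensor in the matrix display above is obtained from $F_{\mu\nu}$ by the substitution $E/c \to B$, $B \to -E/c$; a small numpy sketch (the field component values are arbitrary, chosen only for illustration) confirms this substitution property and the antisymmetry of both matrices:

```python
import numpy as np

def field_tensor(Ex, Ey, Ez, Bx, By, Bz, c=1.0):
    """F_{mu nu} in the matrix form quoted in the Appendix."""
    return np.array([
        [0.0,   -Ex/c, -Ey/c, -Ez/c],
        [Ex/c,   0.0,  -Bz,    By  ],
        [Ey/c,   Bz,    0.0,  -Bx  ],
        [Ez/c,  -By,    Bx,    0.0 ]])

# Arbitrary illustrative field components, in units with c = 1
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 0.5, -1.5, 2.5

F = field_tensor(Ex, Ey, Ez, Bx, By, Bz)
# Duality substitution E/c -> B, B -> -E/c yields the dual matrix
Fdual = field_tensor(Bx, By, Bz, -Ex, -Ey, -Ez)

assert np.allclose(F, -F.T) and np.allclose(Fdual, -Fdual.T)  # antisymmetry
print(Fdual[0])  # first row is (0, -Bx, -By, -Bz), as in the quoted matrix
```

The entries of `Fdual` reproduce the second matrix above entry by entry, e.g. the $(1,2)$ entry becomes $E_z/c$.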
Remembering that these four relations must always hold good in the variations of the elements $\Gamma^{\lambda}_{(\mu\nu)}$, $Q^{\lambda}_{\mu\nu}$, $\Gamma_\mu$, one can use the method of undetermined Lagrange multipliers $k^\mu$ to derive the equations of connection by varying the corresponding function, namely by requiring that its variation vanish. It is easy to see that variations of H w.r.t. $\Gamma^{\lambda}_{(\mu\nu)}$, $Q^{\lambda}_{\mu\nu}$ and $\Gamma_\mu$ give respectively three equations, the last of which is
$$y\,a^{\mu\nu}{}_{,\nu} + x\,s^{\mu\nu}\Gamma_\nu = 0.$$
It follows from these equations that, adding (120) and (121), we get (126), where $\bar{g}^{\mu\nu} = \sqrt{-g}\,g^{\mu\nu}$ (ref. eqn. 10). Multiplying (126) by $\bar{g}_{\mu\nu}$ and using the results
$$g_{\mu\nu}\bar{g}^{\mu\lambda} = \delta^{\lambda}_{\nu}, \quad \bar{g}^{\mu\nu}\bar{g}_{\lambda\nu} = \delta^{\mu}_{\lambda}, \quad Q^{\lambda}_{\alpha\lambda} = 0, \qquad (127)$$
$$g^{\mu\alpha}g_{\alpha\beta}k^\beta = k^\mu, \quad g^{\alpha\nu}g_{\beta\alpha}k^\beta = k^\nu,$$
we first observe that (128) holds. Hence, (126) takes the form
$$\Gamma'^{\,\mu}_{\ \alpha\lambda} = \Gamma^{\mu}_{(\alpha\lambda)} + \frac{1}{4\sqrt{-g}}\,g^{[\lambda\beta]}k_\beta\,\delta^{\mu}_{\alpha} + (\lambda \leftrightarrow \alpha) + \frac{1}{4\sqrt{-g}}\,g^{[\lambda\beta]}k_\beta\,\delta^{\mu}_{\alpha} - (\lambda \leftrightarrow \alpha) + Q^{\mu}_{\ \alpha\lambda} + \frac{1}{\sqrt{-g}}\,\left(g_{\beta\alpha}k^\beta \delta^{\mu}_{\lambda} - g_{\lambda\beta}k^\beta \delta^{\mu}_{\alpha}\right).$$

References

W. Pauli, Theory of Relativity, translated from German by G. Field, Pergamon Press Ltd. (1958), section 51.
Paul Zelinsky and Adam Crowl (private communication). See also [14].
A. Einstein, Ann. der Phys. 354 (7), 769-822 (1916).
M. Born, Ann. Phys. Lpz. 30, 39 (1909).
G. Nordström, Proc. Roy. Acad. Amsterdam 22, 145-149 (1920).
R. P. Feynman, F. B. Morinigo, W. Wagner, B. Hatfield, Feynman Lectures on Gravitation, Frontiers in Physics Series, Westview Press (2002).
R. Becker, Electromagnetic Fields and Interactions, Blaisdell Book in the Pure and Applied Sciences, Dover Publications (1982).
R. T. Hammond, Elec. J. Theoret. Phys. 7, No. 23, 221-258 (2010).
H. Bondi and T. Gold, Proc. R. Soc. London Sect. A229, 416 (1955).
B. S. De Witt and R. W. Brehme, Ann. Phys. (N.Y.) 9, 220-259 (1960).
T. Fulton and F. Rohrlich, Ann. Phys. (N.Y.) 9, 499-517 (1960).
D. G. Boulware, Ann. Phys. (N.Y.) 124, 169-188 (1980).
S. Parrott, Found. Phys. 32, 407-440 (2002); arXiv: gr-qc/9303025 (2001).
O. Grøn, Electrodynamics of Radiating Charges, Adv. in Math. Phys. 2012, Article ID 528631, 29 pages (2012), doi:10.1155/2012/528631, and references therein.
S. N. Lyle, Uniformly Accelerating Charged Particles: A Threat to the Equivalence Principle, Fundamental Theories of Physics vol. 158, Springer (2008), and references therein.
H. F. M. Goenner, 'On the History of Unified Field Theories', Living Reviews in Relativity, Max Planck Institute for Gravitational Physics (2004).
A. S. Eddington, Proc. R. Soc. London, Ser. A 99, 104-122 (1921).
A. Einstein, Sitzungsber. Preuss. Akad. Wiss. 5, 32-38 (1923); ibid. 22, 414-419 (1925).
See Ref. [1], Note 23 for other references.
A. Einstein, The Meaning of Relativity, Methuen & Co., London, Sixth Edition (1956).
S. N. Bose, Le Jour. de Phys. et le Radium (Paris) 14, 641-644 (1953).
P. Ghose and A. Mukherjee, in preparation.
O. Heaviside, Phil. Trans. Roy. Soc. (London) A 183, 423 (1893).
J. Larmor, Collected Papers, London (1928); G. I. Rainich, Trans. Am. Math. Soc. 27, 106 (1925).
L. Silberstein, Ann. Phys. 327, 579 (1907); 329, 783 (1907).
I. Bialynicki-Birula and Z. Bialynicka-Birula, J. Phys. A: Math. Theor. 46, 053001 (2013).
P. A. M. Dirac, Proc. Roy. Soc. A133, 60 (1931).
H. Weyl, Space, Time, Matter, 4th German edn., Methuen, London (1922).
D. Simpson, 'A Mathematical Derivation of the General Relativistic Schwarzschild Metric', East Tennessee State University (USA) thesis (2007), http://www.etsu.edu/cas/math/documents/theses/simpson-thesis.pdf.
C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation, Freeman, San Francisco (1973).
M. Li, Xiao-Dong Li, S. Wang, Y. Wang, Front. of Phys. 8, 828 (2013).
K. Garrett and G. Düda, 'Dark Matter: A Primer', Adv. in Astronomy 2011, Article ID 968283 (2011).
H. B. G. Casimir, Proc. Kon. Nederland. Akad. Wetensch. B51, 793 (1948).
J. Martin, Comptes Rendus Physique 13, 566 (2012); arXiv:1205.3365 (2012), and references therein.
B. Mashhoon, arXiv:gr-qc/0011014 (3 Nov 2000) and references therein; Proceedings of the XXIII Spanish Relativity Meeting on Reference Frames and Gravitomagnetism, edited by J. F. Pascual-Sánchez, L. Floría, A. San Miguel and F. Vicente, pp. 121-132, World Scientific, Singapore (2001).
C. Lämmerzahl and G. Neugebauer, 'The Lense-Thirring Effect: From the Basic Notions to the Observed Effects', Lecture Notes in Physics 562, C. Lämmerzahl, C. W. F. Everitt, and F. W. Hehl (Eds.), pp. 31-51, Springer-Verlag, Berlin Heidelberg (2001).
J. R. Medina, Gravitomagnetism (GEM): A Group Theoretic Approach, Drexel University Thesis (2006); https://idea.library.drexel.edu/islandora/object/idea:1123.
A. Schuster, Proc. Lond. Phys. Soc. 24, 121 (1912); P. M. S. Blackett, Nature 159, 658 (1947).
H. A. Wilson, Proc. Roy. Soc. A 104, 451-455 (1923).
Saul-Paul Sirag, Nature 278, 535 (1979); Gravitational Magnetism: an Update, Vigier III Symposium.
A. Einstein, Schw. Naturf. Ges. Verh. 105, Pt. 2, 85 (1924); S. Saunders and H. R. Brown, Philosophy of Vacuum, Oxford: Clarendon (1991).
R. Opher and U. F. Wichowski, Phys. Rev. Lett. 78, 787-790 (1997).
R. S. de Souza and R. Opher, J. Cosmol. Astropart. Phys. 02, 022 (2010).
R. S. de Souza and R. Opher, Phys. Lett. B 705, 292 (2011).
A. Lewis, Phys. Rev. D70, 043011 (2004); arXiv: astro-ph/0406096 (2004).
[]
[ "Integrability of three dimensional models: cubic equations", "Integrability of three dimensional models: cubic equations" ]
[ "Sh Khachatryan \nYerevan Physics Institute Alikhanian\nBr. 20036YerevanArmenia\n", "A Ferraz \nInternational Institute for Physics Natal\nBrazil\n", "A Klümper \nWuppertal University\nGaußstraße 20Germany\n", "A Sedrakyan \nYerevan Physics Institute Alikhanian\nBr. 20036YerevanArmenia\n\nInternational Institute for Physics Natal\nBrazil\n" ]
[ "Yerevan Physics Institute Alikhanian\nBr. 20036YerevanArmenia", "International Institute for Physics Natal\nBrazil", "Wuppertal University\nGaußstraße 20Germany", "Yerevan Physics Institute Alikhanian\nBr. 20036YerevanArmenia", "International Institute for Physics Natal\nBrazil" ]
[]
We extend basic properties of two dimensional integrable models within the Algebraic Bethe Ansatz approach to 2+1 dimensions and formulate the sufficient conditions for the commutativity of transfer matrices of different spectral parameters, in analogy with Yang-Baxter or tetrahedron equations. The basic ingredient of our models is the R-matrix, which describes the scattering of a pair of particles over another pair of particles, the quark-anti-quark (meson) scattering on another quark-anti-quark state. We show that the Kitaev model belongs to this class of models and its R-matrix fulfills well-defined equations for integrability.

PACS numbers:

The importance of 2D integrable models [1-5] in modern physics is hard to overestimate. Being initially an attractive tool in mathematical physics, they became an important technique in low dimensional condensed matter physics, capable of revealing non-perturbative aspects in many body systems with great potential for applications. The basic constituent of 2D integrable systems is the commutativity of the evolution operators, the transfer matrices of the models of different spectral parameters. This property is equivalent to the existence of as many integrals of motion as the number of degrees of freedom of the model. It appears that commutativity of transfer matrices can be ensured by the Yang-Baxter (YB) equations [3-5] for the R-matrix, and the integrability of the model is associated with the existence of a solution of the YB equations.

Since the 80s of the last century there was a natural desire to extend the idea of integrability to three dimensions [6], which resulted in the formulation of the so-called tetrahedron equation by Zamolodchikov [7]. The tetrahedron equations (ZTE) were studied and several solutions have been found until now [7,8,10,13,14,17,19]. However, earlier solutions either contained negative Boltzmann weights or were slight deformations of models describing free particles. Only in a recent work [15] were non-negative solutions of ZTE obtained in a vertex formulation, and these matrices can serve as Boltzmann weights for a 3D solvable model with an infinite number of discrete spins attached to the edges of the cubic lattice. In this sense it is remarkable to note that among the general solutions obtained in this paper it is also possible to detect R-matrices with real and non-negative entries which can be considered as Boltzmann weights in the context of 3D statistical solvable models with 1/2-spins attached to the vertices of the 3D cubic lattice.

Although initially the tetrahedron equations were formulated for the scattering matrix S of three infinitely long straight strings in the context of 3D integrability, they can also be regarded as weight functions for statistical models. In a Bethe Ansatz formulation of 3D models, their 2D transfer matrices of the quantum states on a plane [8,14,17] can be constructed via a three particle R-matrix [9,14,21], which, as an operator, acts on a tensorial cube of a linear space V, i.e. $R : V \otimes V \otimes V \to V \otimes V \otimes V$ [11]. Another approach to 3D integrability, based on Frenkel-Moore simplex equations [23], also uses three-state R-matrices. They are a higher dimensional extension of quantum Yang-Baxter equations without spectral parameters. However, these equations are less examined [24].

Motivated by the desire to extend the integrability conditions in 3D to other formulations, we consider a new kind of equations with R-matrices acting on a quartic tensorial power of linear spaces V,
$$R_{1234} : V_1 \otimes V_2 \otimes V_3 \otimes V_4 \to V_1 \otimes V_2 \otimes V_3 \otimes V_4, \qquad (0.1)$$
which can be represented graphically as in Fig. 1a. The R-matrix can be represented also in the form displayed in Fig. 1b, where the final spaces are permuted ($V_1$ and $V_2$ with $V_3$ and $V_4$, respectively): $R_{1234} = \check{R}_{1234} P_{13} P_{24}$. Explicitly it can be written as follows:
$$R^{\beta_1\beta_2\beta_3\beta_4}_{\alpha_1\alpha_2\alpha_3\alpha_4} = \check{R}^{\beta_3\beta_4\beta_1\beta_2}_{\alpha_1\alpha_2\alpha_3\alpha_4}. \qquad (0.2)$$
Identifying the spaces $V_1 \otimes V_2$ and $V_3 \otimes V_4$ with the quantum spaces of quark-anti-quark pairs connected by a string, one can regard this R-matrix as a transfer matrix for a pair of scattering mesons. Within the terminology used in the algebraic Bethe Ansatz for 1+1 integrable models, this R-matrix can be viewed also as a matrix which has two quantum states and two auxiliary states. The space of quantum states $\Phi_t = \otimes_{(n,m)\in L} V_{n,m}$ of the system on a plane is defined by a direct product of the linear spaces $V_{n,m}$ of quantum states at each site $(n,m)$ of the lattice L (see Fig. 2a). We fix periodic boundary conditions in both directions: $V_{n,m+L} = V_{n,m}$ and $V_{n+L,m} = V_{n,m}$. The time evolution of this state is determined by the action of the operator/transfer matrix $T$: $\Phi_{t+1} = \Phi_t T$, which is a product of local evolution operators.
null
[ "https://arxiv.org/pdf/1502.04055v1.pdf" ]
117,324,467
1502.04055
39a9e18b8b93246f2b34d749ab2ce6f2dbe494e2
Integrability of three dimensional models: cubic equations

13 Feb 2015

Sh. Khachatryan (Yerevan Physics Institute, Alikhanian Br. 2, 0036 Yerevan, Armenia), A. Ferraz (International Institute for Physics, Natal, Brazil), A. Klümper (Wuppertal University, Gaußstraße 20, Germany), A. Sedrakyan (Yerevan Physics Institute, Yerevan, Armenia, and International Institute for Physics, Natal, Brazil)

(Dated: February 16, 2015)

We extend basic properties of two dimensional integrable models within the Algebraic Bethe Ansatz approach to 2+1 dimensions and formulate the sufficient conditions for the commutativity of transfer matrices of different spectral parameters, in analogy with Yang-Baxter or tetrahedron equations. The basic ingredient of our models is the R-matrix, which describes the scattering of a pair of particles over another pair of particles, the quark-anti-quark (meson) scattering on another quark-anti-quark state. We show that the Kitaev model belongs to this class of models and its R-matrix fulfills well-defined equations for integrability.
PACS numbers:

The importance of 2D integrable models [1-5] in modern physics is hard to overestimate. Being initially an attractive tool in mathematical physics, they became an important technique in low dimensional condensed matter physics, capable of revealing non-perturbative aspects in many body systems with great potential for applications. The basic constituent of 2D integrable systems is the commutativity of the evolution operators, the transfer matrices of the models of different spectral parameters. This property is equivalent to the existence of as many integrals of motion as the number of degrees of freedom of the model. It appears that commutativity of transfer matrices can be ensured by the Yang-Baxter (YB) equations [3-5] for the R-matrix, and the integrability of the model is associated with the existence of a solution of the YB equations.

Since the 80s of the last century there was a natural desire to extend the idea of integrability to three dimensions [6], which resulted in the formulation of the so-called tetrahedron equation by Zamolodchikov [7]. The tetrahedron equations (ZTE) were studied and several solutions have been found until now [7,8,10,13,14,17,19]. However, earlier solutions either contained negative Boltzmann weights or were slight deformations of models describing free particles. Only in a recent work [15] were non-negative solutions of ZTE obtained in a vertex formulation, and these matrices can serve as Boltzmann weights for a 3D solvable model with an infinite number of discrete spins attached to the edges of the cubic lattice. In this sense it is remarkable to note that among the general solutions obtained in this paper it is also possible to detect R-matrices with real and non-negative entries which can be considered as Boltzmann weights in the context of 3D statistical solvable models with 1/2-spins attached to the vertices of the 3D cubic lattice.
Although initially the tetrahedron equations were formulated for the scattering matrix S of three infinitely long straight strings in the context of 3D integrability, they can also be regarded as weight functions for statistical models. In a Bethe Ansatz formulation of 3D models, their 2D transfer matrices of the quantum states on a plane [8,14,17] can be constructed via a three particle R-matrix [9,14,21], which, as an operator, acts on a tensorial cube of a linear space V, i.e. $R : V \otimes V \otimes V \to V \otimes V \otimes V$ [11]. Another approach to 3D integrability, based on Frenkel-Moore simplex equations [23], also uses three-state R-matrices. They are a higher dimensional extension of quantum Yang-Baxter equations without spectral parameters. However, these equations are less examined [24].

Motivated by the desire to extend the integrability conditions in 3D to other formulations, we consider a new kind of equations with R-matrices acting on a quartic tensorial power of linear spaces V,
$$R_{1234} : V_1 \otimes V_2 \otimes V_3 \otimes V_4 \to V_1 \otimes V_2 \otimes V_3 \otimes V_4, \qquad (0.1)$$
which can be represented graphically as in Fig. 1a. The R-matrix can be represented also in the form displayed in Fig. 1b, where the final spaces are permuted ($V_1$ and $V_2$ with $V_3$ and $V_4$, respectively): $R_{1234} = \check{R}_{1234} P_{13} P_{24}$. Explicitly it can be written as follows:
$$R^{\beta_1\beta_2\beta_3\beta_4}_{\alpha_1\alpha_2\alpha_3\alpha_4} = \check{R}^{\beta_3\beta_4\beta_1\beta_2}_{\alpha_1\alpha_2\alpha_3\alpha_4}. \qquad (0.2)$$
Identifying the spaces $V_1 \otimes V_2$ and $V_3 \otimes V_4$ with the quantum spaces of quark-anti-quark pairs connected by a string, one can regard this R-matrix as a transfer matrix for a pair of scattering mesons. Within the terminology used in the algebraic Bethe Ansatz for 1+1 integrable models, this R-matrix can be viewed also as a matrix which has two quantum states and two auxiliary states. The space of quantum states $\Phi_t = \otimes_{(n,m)\in L} V_{n,m}$ of the system on a plane is defined by a direct product of the linear spaces $V_{n,m}$ of quantum states at each site $(n,m)$ of the lattice L (see Fig. 2a).
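The componentwise statement (0.2) can be checked directly with numpy. Whether the pair swap is composed before or after $\check{R}$ is a matter of operator-ordering convention, so the sketch below verifies exactly the index relation of eq. (0.2); the local dimension and the random tensor are illustrative choices, not taken from the paper:

```python
import numpy as np

d = 2                              # local space dimension (illustrative)
rng = np.random.default_rng(0)
Rh = rng.normal(size=(d,) * 8)     # \check{R}[b1,b2,b3,b4, a1,a2,a3,a4]

# Eq. (0.2): R^{b1 b2 b3 b4}_{a1 a2 a3 a4} = \check{R}^{b3 b4 b1 b2}_{a1 a2 a3 a4}
R = np.transpose(Rh, (2, 3, 0, 1, 4, 5, 6, 7))

# Permutation operator P13 P24 swapping the pairs (V1, V2) <-> (V3, V4)
P = np.zeros((d,) * 8)
for a in np.ndindex(d, d, d, d):
    P[a[2], a[3], a[0], a[1], a[0], a[1], a[2], a[3]] = 1.0

# Componentwise, eq. (0.2) is the composition of \check{R} with the pair swap
lhs = R.reshape(d**4, d**4)
rhs = P.reshape(d**4, d**4) @ Rh.reshape(d**4, d**4)
assert np.allclose(lhs, rhs)
```

The assertion passes for any choice of $\check{R}$, since both sides are the same tensor with the outgoing index pairs exchanged.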
We fix periodic boundary conditions in both directions: $V_{n,m+L} = V_{n,m}$ and $V_{n+L,m} = V_{n,m}$. The time evolution of this state is determined by the action of the operator/transfer matrix $T$: $\Phi_{t+1} = \Phi_t T$, which is a product of local evolution operators, R-matrices, as follows.

FIG. 1: Four particle R-matrix.

First we fix a chess-like structure of squares on a lattice L and associate to each of the black squares an R-matrix $\check{R}_{(n+1,m)(n+1,m+1)(n,m)(n,m+1)}$, which acts on a product of four spaces at the sites. In this way the whole transfer matrix becomes
$$T = \mathrm{Tr}\,\Pi^{L/2}_{n=1}\,\Pi^{L/2}_{m=1}\check{R}_{(2n,2m)(2n,2m+1)(2n-1,2m)(2n-1,2m+1)} \cdot \Pi^{L/2}_{m=1}\check{R}_{(2n+1,2m-1)(2n+1,2m)(2n,2m-1)(2n,2m)}, \qquad (0.3)$$
where the Trace is taken over the states on the boundaries. The indices of the R-matrices in the first and second lines of this product just ensure the chess-like ordering of their action. In Fig. 2b we present this product graphically. First we identify the second pair of states $(2n-1, 2m), (2n-1, 2m+1)$ (in the first row) and $(2n+1, 2m-1), (2n+1, 2m)$ (in the second row) of the R-matrices with the corresponding links on the lattice. Then we rotate the box of the R-matrix by $\pi/4$ in order to ensure the correct order of their action in the product. In the same way we define the second list of the transfer matrix, which will act in the order $T_B T_A$. Fig. 2c presents a vertical 2D cut of the two lists of the product $T_B T_A$, drawn from the side. The $\pi/4$ rotated lines mark the spaces $V_{n,m}$ attached to the sites $(n,m)$ of the lattice. Though the transfer matrix (0.3) is written in the $\check{R}$ formalism, it can easily be converted to a product of R-matrices.
The arrangement of R-matrices in the first row (first plane of the transfer matrix $T_B$) acts on the sites of the dark squares of the lattice, while the R-matrices in the second row (second plane of the transfer matrix $T_A$) act on the sites of the white squares.

Being an evolution operator, the transfer matrix should be linked to time. According to the general prescription [4,5], the transfer matrix $T(u)$ is a function of the so-called spectral parameter $u$, and the linear term $H_1$ in its expansion $T(u) = \sum_r u^r H_r$ defines the Hamiltonian of the model, while the partition function is $Z = \mathrm{Tr}\, T^N$. Integrable models should have as many integrals of motion as degrees of freedom. This property may be reached by considering two planes of transfer matrices with different spectral parameters, $T(u)$ and $T(v)$, and demanding their commutativity, $[T(u), T(v)] = 0$, or equivalently demanding the commutativity of the coefficients of the expansion, $[H_r, H_s] = 0$. This means that all $H_r$, $r > 1$, are integrals of motion.

In 2D integrable models the sufficient conditions for commutativity of transfer matrices are determined by the corresponding YB equations [3-5]. In order to obtain the analog of the YB equations which will ensure the commutativity of transfer matrices (0.3), we use the so-called railway construction. Let us cut horizontally the two planes of the R-matrix product of two transfer matrices (in Fig. 2b we present a product of R-matrices for one transfer matrix plane) into two parts and substitute in between the identity which maps two chains of sites, $(2n, m)$, $m = 1 \cdots L$, and $(2n+1, m)$, $m = 1 \cdots L+1$, into itself. The Trace has to be taken by identifying spaces 1 and $L+1$. In this expression we have introduced another set of $\check{R}$-matrices, called intertwiners, which will be specified below. For further convenience we distinguish the $\check{R}_{(2n+1,m)(2n+1,m+1)(2n,m)(2n,m+1)}$ matrices for even and odd values of $m$, marking them as $\check{R}_3$ and $\check{R}_4$ respectively. On the left side of Fig. 3 we present one half of the plane of R-matrices together with an inserted chain of $\check{R}_3\check{R}_4$ intertwiners.
The chain of intertwiners can also be written by R-matrices. Now let us suggest that the product of these intertwiners with the first double chain of $\check{R}$-matrices from the product of two planes of transfer matrices is equal to the product of the same operators written in the opposite order. Namely, we demand that
$$\Pi^{L}_{m=1}\check{R}_{(2n+1,m)(2n+1,m+1)(2n,m)(2n,m+1)} \cdot \Pi^{L/2}_{m=1}\check{R}_{(2n,2m)(2n,2m+1)(2n-1,2m)(2n-1,2m+1)}(u) \cdot \Pi^{L/2}_{m=1}\check{R}_{(2n+1,2m)(2n+1,2m+1)(2n,2m)(2n,2m+1)}(v)$$
$$= \Pi^{L/2}_{m=1}\check{R}_{(2n,2m)(2n,2m+1)(2n-1,2m)(2n-1,2m+1)}(v) \cdot \Pi^{L/2}_{m=1}\check{R}_{(2n+1,2m)(2n+1,2m+1)(2n,2m)(2n,2m+1)}(u) \cdot \Pi^{1}_{m=L}\check{R}_{(2n+1,m)(2n+1,m+1)(2n,m)(2n,m+1)}. \qquad (0.5)$$
Graphically this equation is depicted in Fig. 3. We move the column of intertwiners from the left to the right hand side of the column of two slices of the R-matrix product, simultaneously changing their order in the column, changing the order of the spectral parameters $u$ and $v$ of the slices, and demanding their equality. We can use the same type of equality and move the chain of intertwiners further, to the right hand side of the next column of the two slices of the $\check{R}$-matrix product. Then, repeating this operation multiple times, one will approach the chain of inserted $\check{R}^{-1}$ intertwiners inside the Trace from the other side and cancel it. As a result we obtain the product of two transfer matrices in a reversed order of the spectral parameters $u$ and $v$. Hence, the set of equations (0.5) ensures the commutativity of transfer matrices.

The set of equations (0.5) can be simplified. Namely, it is easy to see that the equality can be reduced to the product of only two $\check{R}$-matrices, $\check{R}(u)$ and $\check{R}(v)$, and two intertwiners, $\check{R}_3$ and $\check{R}_4$. In other words, it is enough to write the equality of the product of $\check{R}$-matrices from the inside of the dotted line in Fig. 3. Graphically this equation is depicted in Fig. 4.
We see that in this equation the product of $\check{R}$-matrices acting on the space $\otimes^{9}_{i=1} V_i$ (for simplicity we numerate the spaces from 1 to 9) can be written as
$$\check{R}_{4;\,5263}(u,v)\,\check{R}_{3;\,4152}(u,v)\,\check{R}_{2;\,5689}(u)\,\check{R}_{1;\,2356}(v)\,\mathrm{id}_7 = \check{R}_{1;\,4578}(v)\,\check{R}_{2;\,1245}(u)\,\check{R}_{3;\,8596}(u,v)\,\check{R}_{4;\,7485}(u,v)\,\mathrm{id}_3. \qquad (0.6)$$
Here we have introduced a short-hand notation for the $\check{R}$-matrices, simply marking the numbers of the linear spaces of states in which they are acting; $\mathrm{id}_3$ and $\mathrm{id}_7$ are identity operators acting on spaces 3 and 7 respectively. Eq. (0.6) can also easily be written by use of R. This is the set of equations sufficient for commutativity of transfer matrices. The same set of equations is sufficient for commuting $\check{R}$-matrices in the second column in Fig. 1. Equations (0.6) form an analog of the YB equations ensuring the integrability of 3D quantum models. Since they have the form of relations between the cubes of the R-matrix picture (see Fig. 1), we call them cubic equations.

We will show now that the Kitaev model [12] can be described as a model of the prescribed type and that its R-matrix fulfills the set of cubic equations (0.6). The full transfer matrix of the Kitaev model is a product $T_A T_B$ of two transfer matrices of type (0.3) defined by the $\check{R}$-matrices $\check{R}_A = 1 \otimes 1 \otimes 1 \otimes 1 + u\, \sigma^x \otimes \sigma^x \otimes \sigma^x \otimes \sigma^x$ and $\check{R}_B = 1 \otimes 1 \otimes 1 \otimes 1 + u\, \sigma^z \otimes \sigma^z \otimes \sigma^z \otimes \sigma^z$ respectively. The linear term of the expansion of $T_A T_B$ in the spectral parameter $u$ will produce the Kitaev model Hamiltonian
$$H_{\mathrm{Kitaev}} = \sum_{\mathrm{white\ plaquettes}} \sigma^x \otimes \sigma^x \otimes \sigma^x \otimes \sigma^x + \sum_{\mathrm{dark\ plaquettes}} \sigma^z \otimes \sigma^z \otimes \sigma^z \otimes \sigma^z. \qquad (0.7)$$
The integrability of the Kitaev model is trivially clear from the very beginning, since all terms in the Hamiltonian defined on white and dark plaquettes commute with each other. The latter indicates that the number of integrals of motion of the model coincides with its degrees of freedom. However, in this paper we aim to show that one can develop a 3D Algebraic Bethe Ansatz approach in such a way that the Kitaev model is automatically integrable.
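The commutation claim behind this trivial integrability rests on a standard fact: an X-plaquette and a Z-plaquette of the chess-board lattice overlap on an even number of sites (0 or 2), so the single-site anticommutations of $\sigma^x$ and $\sigma^z$ cancel pairwise. A small numpy sketch for two plaquettes sharing an edge (the 6-site labelling is an illustrative choice, not the paper's):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def op_on(sites, op, n):
    """Tensor product placing `op` on each site in `sites` of an n-site system."""
    out = np.array([[1.0]])
    for s in range(n):
        out = np.kron(out, op if s in sites else I2)
    return out

n = 6
A = op_on({0, 1, 2, 3}, X, n)   # sigma_x^{(x4)} on a white plaquette
B = op_on({2, 3, 4, 5}, Z, n)   # sigma_z^{(x4)} on a neighbouring dark plaquette

# Two shared sites -> the two anticommutations cancel:
assert np.allclose(A @ B, B @ A)

# Hence the factors R_A(u) = 1 + u A and R_B(v) = 1 + v B commute for all u, v
u, v = 0.3, 0.7
RA, RB = np.eye(2**n) + u * A, np.eye(2**n) + v * B
assert np.allclose(RA @ RB, RB @ RA)
```

The same cancellation holds for any pair of plaquette terms in (0.7), which is why all terms of the Hamiltonian commute.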
Namely, we will now show that the Ř_A- and Ř_B-matrices of the Kitaev model fulfill Eq. (0.6). The explicit form of Eq. (0.6), written with indices according to the definition in Fig. 1a, reads

(Ř^4)^{β1β2β6β3}_{α5α2α6α3} (Ř^3)^{γ4γ1β5β4}_{α4α1β1β2} (Ř^2)^{β7β8γ8γ9}_{β5β6α8α9}(v) (Ř^1)^{γ2γ3γ5γ6}_{β4β3β7β8}(u) δ^{γ7}_{α7} = (Ř^1)^{β7β8β5β6}_{α4α5α7α8}(u) (Ř^2)^{γ1γ2β4β3}_{α1α2β7β8}(v) (Ř^3)^{β1β2γ9γ6}_{β6β3α9α6} (Ř^4)^{γ7γ4γ8γ5}_{β5β4β1β2} δ^{γ3}_{α3},   (0.8)

where Ř^1(u) = Ř_A(u) and Ř^2(v) = Ř_B(v). It appears that the intertwiners

Ř^4 = Ř_A^{-1}(u),  Ř^3 = Ř_B(v),   (0.9)

where Ř_A^{-1}(u) = 1⊗1⊗1⊗1 − u σ^x⊗σ^x⊗σ^x⊗σ^x, fulfill the cubic equations (0.8) for any parameters u and v. This can be checked directly, both by a computer algebra program and analytically. The commutativity of the transfer matrices T_A(u) with T_A(v) and T_B(u) with T_B(v) is trivial in the Kitaev model.

Summary. We have formulated a class of three-dimensional models defined by the R-matrix of the scattering of a two-particle state on another two-particle state, i.e., meson-meson type scattering. We derived a set of equations for these R-matrices which is a sufficient condition for the commutativity of transfer matrices with different spectral parameters. These equations differ from the tetrahedron equations, which also ensure the integrability of 3D models but are based on the R-matrix of 3-particle scatterings. Our set of equations reduces to tetrahedron-type equations upon considering the two auxiliary spaces in the R-matrix as one (fusion) and replacing it by one thick line. We showed that the Kitaev model [12] belongs to this class of integrable models.

H_1 in the expansion T(u) = Σ_r u^r H_r defines the Hamiltonian of the model, while the partition function is Z = Tr T^N. Integrable models should have as many integrals of motion as degrees of freedom. This property may be reached by considering two planes of transfer matrices with different spectral parameters, T(u) and T(v), and demanding their commutativity, [T(u), T(v)] = 0, or equivalently the commutativity of the coefficients of the expansion, [H_r, H_s] = 0.

FIG. 2: Transfer matrix T as a product of R-matrices on the plane: a) the lattice with chess-like structure; b) the product of R-matrices arranged in a chess-like way; c) a 2D cut of the product T_B T_A.

FIG. 3: Reduced set of commutativity conditions for transfer matrices. The dotted subset represents the cubic equations (0.6).

FIG. 4: The set of equations ensuring commutativity of transfer matrices with different spectral parameters. We numerate the linear spaces of states V in which the R-matrices act by V_1 · · · V_9. The R-matrices on the left-hand side of the equation do not act on space V_7, while the R-matrices on the right-hand side do not act on V_3. For consistency, we insert into the equation identity operators acting on these spaces.

Acknowledgment. A.S. thanks IIP at Natal, where part of this work was done, and the Armenian Research Council (grant 13-1C132) for partial financial support.

[1] W. Heisenberg, Z. Phys. 49, 619 (1928).
[2] H. Bethe, Z. Phys. 71, 205 (1931).
[3] C.N. Yang and C.P. Yang, Phys. Rev. 150, 321 (1966).
[4] R.J. Baxter, Ann. of Phys. 70, 193 (1972).
[5] L.D. Faddeev and L.A. Takhtajan, Usp. Mat. Nauk 34, 13 (1979) (in Russian).
[6] A. Polyakov, 1979, unpublished.
[7] A. Zamolodchikov, Zh. Eksp. Teor. Fiz. 79, 641 (1980) [English translation: Soviet Phys. JETP 52, 325 (1980)]; A. Zamolodchikov, Commun. Math. Phys. 79, 489 (1981).
[8] R.J. Baxter, Commun. Math. Phys. 88, 185 (1983).
[9] J. Hietarinta, J. Phys. A: Math. Gen. 27, 5727 and 5748 (1994).
[10] V.V. Bazhanov and R.J. Baxter, J. Stat. Phys. 69, 453 (1992); V.V. Bazhanov and R.J. Baxter, Physica A 194, 390-396 (1993).
[11] Though in models with interaction round a cube [8,10] the basic R-matrix is defined on the product of four states V⊗V⊗V⊗V situated on the vertices of a cube, these models can be reformulated via the three-channel R-matrix, where the channels are associated with the faces of the cube. Therefore in these models the equations of commutativity of transfer matrices are also ZTE.
[12] A.Yu. Kitaev, Annals of Physics 303, 2 (2003).
[13] R.M. Kashaev, V.V. Mangazeev and Yu.G. Stroganov, Int. J. Mod. Phys. A8, 587-601 (1993); R.M. Kashaev, V.V. Mangazeev and Yu.G. Stroganov, Int. J. Mod. Phys. A8, 1399-1409 (1993).
[14] V.V. Bazhanov, V.V. Mangazeev and S.M. Sergeev, J. Stat. Mech. P07004 (2008); V.V. Bazhanov and S.M. Sergeev, J. Phys. A: Math. Theor. 39, 3295-3310 (2006).
[15] V.V. Mangazeev, V.V. Bazhanov and S.M. Sergeev, J. Phys. A: Math. Theor. 46, 465206 (2013).
[16] G. von Gehlen, S. Pakuliak and S. Sergeev, J. Phys. A: Math. Gen. 36, 975 (2003); G. von Gehlen, S. Pakuliak and S. Sergeev, Int. J. Mod. Phys. A 19, Suppl., 179-204 (2004); G. von Gehlen, S. Pakuliak and S. Sergeev, J. Phys. A 38, 7269 (2005).
[17] J. Ambjorn, Sh. Khachatryan and A. Sedrakyan, Nucl. Phys. B 734 [FS], 287 (2006).
[18] A.P. Isaev and P.P. Kulish, Mod. Phys. Lett. A 12, 427 (1997).
[19] A. Kuniba and M. Okado, J. Phys. A: Math. Theor. 45, 465206 (2012).
[20] I.G. Korepanov, Modern Phys. Lett. B 3, No. 3, 201-206 (1989).
[21] I.G. Korepanov, Comm. Math. Phys. 154, 85 (1993).
[22] Sh. Khachatryan and A. Sedrakyan, J. Stat. Phys. 150, 130 (2013).
[23] I. Frenkel and G. Moore, Commun. Math. Phys. 138, 259 (1991).
[24] M.L. Ge, C.H. Oh and K. Singh, Phys. Lett. A 185, 177 (1994); L.C. Kwek, C.H. Oh, K. Singh and K.Y. Wee, J. Phys. A: Math. Gen. 28, 6877 (1995).
Multi-agent Reinforcement Learning in Sequential Social Dilemmas

Joel Z. Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, Thore Graepel
DeepMind, London, UK
Matrix games like Prisoner's Dilemma have guided research on social dilemmas for decades. However, they necessarily treat the choice to cooperate or defect as an atomic action. In real-world social dilemmas these choices are temporally extended. Cooperativeness is a property that applies to policies, not elementary actions. We introduce sequential social dilemmas that share the mixed incentive structure of matrix game social dilemmas but also require agents to learn policies that implement their strategic intentions. We analyze the dynamics of policies learned by multiple self-interested independent learning agents, each using its own deep Q-network, on two Markov games we introduce here: 1. a fruit Gathering game and 2. a Wolfpack hunting game. We characterize how learned behavior in each domain changes as a function of environmental factors including resource abundance. Our experiments show how conflict can emerge from competition over shared resources and shed light on how the sequential nature of real-world social dilemmas affects cooperation.
arXiv:1702.03037
CCS Concepts: • Computing methodologies → Multi-agent reinforcement learning; Agent / discrete models; Stochastic games

Keywords: Social dilemmas, cooperation, Markov games, agent-based social simulation, non-cooperative games

1. INTRODUCTION

Social dilemmas expose tensions between collective and individual rationality [1].
Cooperation makes possible better outcomes for all than any could obtain on their own. However, the lure of free riding and other such parasitic strategies implies a tragedy of the commons that threatens the stability of any cooperative venture [2].

The theory of repeated general-sum matrix games provides a framework for understanding social dilemmas. Fig. 1 shows payoff matrices for three canonical examples: Prisoner's Dilemma, Chicken, and Stag Hunt. The two actions are interpreted as cooperate and defect respectively. The four possible outcomes of each stage game are R (reward of mutual cooperation), P (punishment arising from mutual defection), S (sucker outcome obtained by the player who cooperates with a defecting partner), and T (temptation outcome achieved by defecting against a cooperator). A matrix game is a social dilemma when its four payoffs satisfy the following social dilemma inequalities (this formulation from [3]):

1. R > P: Mutual cooperation is preferred to mutual defection.
2. R > S: Mutual cooperation is preferred to being exploited by a defector.
3. 2R > T + S: This ensures that mutual cooperation is preferred to an equal probability of unilateral cooperation and defection.
4. Either greed: T > R (exploiting a cooperator is preferred over mutual cooperation), or fear: P > S (mutual defection is preferred over being exploited).

Matrix Game Social Dilemmas (MGSD) have been fruitfully employed as models for a wide variety of phenomena in theoretical social science and biology. For example, there is a large and interesting literature concerned with mechanisms through which the socially preferred outcome of mutual cooperation can be stabilized, e.g., direct reciprocity [4,5,6,7], indirect reciprocity [8], norm enforcement [9,10], simple reinforcement learning variants [3], multiagent reinforcement learning [11,12,13,14,15], spatial structure [16], emotions [17], and social network effects [18,19].
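As a concrete reading of these inequalities, the following sketch (our own illustration, not from the paper) classifies a payoff tuple (R, P, S, T) and reports which of the greed and fear motivations apply. The example payoffs in the note below are standard textbook values, not values taken from Fig. 1.

```python
def classify_social_dilemma(R, P, S, T):
    """Check the social dilemma inequalities (1)-(4) for payoffs
    (R, P, S, T). Returns None if the game is not a social dilemma,
    otherwise the subset of {"greed", "fear"} that motivates defection."""
    greed = T > R   # exploiting a cooperator beats mutual cooperation
    fear = P > S    # mutual defection beats being exploited
    is_dilemma = R > P and R > S and 2 * R > T + S and (greed or fear)
    if not is_dilemma:
        return None
    return {name for name, holds in [("greed", greed), ("fear", fear)] if holds}
```

With standard textbook payoffs, Prisoner's Dilemma (R=3, P=1, S=0, T=4) triggers both motivations, Chicken (R=3, P=0, S=1, T=4) only greed, and Stag Hunt (R=4, P=1, S=0, T=3) only fear.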
However, the MGSD formalism ignores several aspects of real-world social dilemmas which may be of critical importance:

1. Real-world social dilemmas are temporally extended.
2. Cooperation and defection are labels that apply to policies implementing strategic decisions.
3. Cooperativeness may be a graded quantity.
4. Decisions to cooperate or defect occur only quasi-simultaneously, since some information about what player 2 is starting to do can inform player 1's decision and vice versa.
5. Decisions must be made despite only having partial information about the state of the world and the activities of the other players.

Figure 1: The three canonical matrix game social dilemmas. By convention, a cell of X, Y represents a utility of X to the row player and Y to the column player. In Chicken, agents may defect out of greed. In Stag Hunt, agents may defect out of fear of a non-cooperative partner. In Prisoner's Dilemma, agents are motivated to defect out of both greed and fear simultaneously.

We propose a Sequential Social Dilemma (SSD) model to better capture the above points while, critically, maintaining the mixed motivation structure of MGSDs. That is, inequalities analogous to (1)-(4) determine when a temporally extended Markov game is an SSD. To demonstrate the importance of capturing sequential structure in social dilemma modeling, we present empirical game-theoretic analyses [20,21] of SSDs to identify the empirical payoff matrices summarizing the outcomes that would arise if cooperate and defect policies were selected as one-shot decisions. The empirical payoff matrices are themselves valid matrix games. Our main result is that both of the SSDs we considered, Gathering and Wolfpack, have empirical payoff matrices that are Prisoner's Dilemma (PD). This means that if one were to adhere strictly to the MGSD-modeling paradigm, PD models should be proposed for both situations.
Thus any conclusions reached from simulating them would necessarily be quite similar in both cases (and to other studies of iterated PD). However, when viewed as SSDs, the formal equivalence of Gathering and Wolfpack disappears. They are clearly different games. In fact, there are simple experimental manipulations that, when applied to Gathering and Wolfpack, yield opposite predictions concerning the emergence and stability of cooperation. More specifically, we describe a factor that promotes the emergence of cooperation in Gathering while discouraging its emergence in Wolfpack, and vice versa. The straightforward implication is that, for modeling real-world social dilemmas with SSDs, the choice of whether to use a Gathering-like or Wolfpack-like model is critical. And the differences between the two cannot be captured by MGSD modeling.

Along the way to these results, the present paper also makes a small methodological contribution. Owing to the greater complexity arising from their sequential structure, it is more computationally demanding to find equilibria of SSD models than of MGSD models. Thus the standard evolution and learning approaches to simulating MGSDs cannot be applied to SSDs. Instead, more sophisticated multiagent reinforcement learning methods must be used (e.g., [22,23,24]). In this paper we describe how deep Q-networks (e.g., [25]) may be applied to this problem of finding equilibria of SSDs.

2. DEFINITIONS AND NOTATION

We model sequential social dilemmas as general-sum Markov (simultaneous move) games in which each agent has only a partial observation of its local environment. Agents must learn an appropriate policy while coexisting with one another. A policy is considered to implement cooperation or defection by properties of the realizations it generates. A Markov game is an SSD if and only if it contains outcomes arising from cooperation and defection policies that satisfy the same inequalities used to define MGSDs (eqs. 1-4).
This definition is stated more formally in sections 2.1 and 2.2 below.

2.1 Markov Games

For temporal discount factor γ ∈ [0, 1] we can define the long-term payoff V_i^π(s_0) to player i when the joint policy π = (π_1, π_2) is followed starting from state s_0 ∈ S:

V_i^π(s_0) = E_{a_t ∼ π(O(s_t)), s_{t+1} ∼ T(s_t, a_t)} [ Σ_{t=0}^{∞} γ^t r_i(s_t, a_t) ].   (5)

Matrix games are the special case of two-player perfectly observable (O_i(s) = s) Markov games obtained when |S| = 1. MGSDs also specify A_1 = A_2 = {C, D}, where C and D are called (atomic) cooperate and defect respectively. The outcomes R(s), P(s), S(s), T(s) that determine when a matrix game is a social dilemma are defined as follows:

R(s) := V_1^{π^C, π^C}(s) = V_2^{π^C, π^C}(s),   (6)
P(s) := V_1^{π^D, π^D}(s) = V_2^{π^D, π^D}(s),   (7)
S(s) := V_1^{π^C, π^D}(s) = V_2^{π^D, π^C}(s),   (8)
T(s) := V_1^{π^D, π^C}(s) = V_2^{π^C, π^D}(s),   (9)

where π^C and π^D are cooperative and defecting policies as described next. Note that a matrix game is a social dilemma when R, P, S, T satisfy the inequalities (1)-(4).

2.2 Definition of Sequential Social Dilemma

This definition is based on a formalization of empirical game-theoretic analysis [20,21]. We define the outcomes (R, P, S, T) := (R(s_0), P(s_0), S(s_0), T(s_0)) induced by initial state s_0 and two policies π^C, π^D through their long-term expected payoff (5) and the definitions (6)-(9). We refer to the game matrix with R, P, S, T organized as in Fig. 1 (left) as an empirical payoff matrix, following the terminology of [21].

Definition: A sequential social dilemma is a tuple (M, Π^C, Π^D) where Π^C and Π^D are disjoint sets of policies that are said to implement cooperation and defection respectively. M is a Markov game with state space S. Let the empirical payoff matrix (R(s), P(s), S(s), T(s)) be induced by policies (π^C ∈ Π^C, π^D ∈ Π^D) via eqs. (5)-(9).
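In practice, the long-term payoff (5) and the outcome definitions (6)-(9) reduce to averaging discounted returns over sampled episodes. A minimal Monte Carlo sketch of that estimator (our own illustration; the episode format is a hypothetical simplification):

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted sum of one player's rewards along a single sampled
    episode -- the quantity inside the expectation in Eq. (5)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def estimate_payoff(episodes, gamma=0.99):
    """Monte Carlo estimate of V_i^pi(s0): the average discounted
    return over episodes sampled under a fixed joint policy. Each
    entry of `episodes` is player i's per-step reward sequence."""
    return sum(discounted_return(ep, gamma) for ep in episodes) / len(episodes)
```

For example, `discounted_return([1, 0, 1], gamma=0.5)` evaluates to 1 + 0 + 0.25 = 1.25.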
A Markov game is an SSD when there exist states s ∈ S for which the induced empirical payoff matrix satisfies the social dilemma inequalities (1)-(4).

Remark: There is no guarantee that Π^C ∪ Π^D = Π, the set of all legal policies. This reflects the fact that, in practice for sequential behavior, cooperativeness is usually a graded property. Thus we are forced to define Π^C and Π^D by thresholding a continuous social behavior metric. For example, to construct an SSD for which a policy's level of aggressiveness α : Π → R is the relevant social behavior metric, we pick threshold values α_c and α_d so that α(π) < α_c ⇐⇒ π ∈ Π^C and α(π) > α_d ⇐⇒ π ∈ Π^D.

Figure 3: Left: Gathering. In this frame the blue player is directing its beam at the apple respawn location. The red player is approaching the apples from the south. Right: Wolfpack. The size of the agent's view relative to the size of the map is illustrated. If an agent is inside the blue diamond-shaped region around the prey when a capture occurs (i.e., when one agent touches the prey), both it and its partner receive a reward of r_team.

3. LEARNING ALGORITHMS

Most previous work on finding policies for Markov games takes the prescriptive view of multiagent learning [26]: that is, it attempts to answer "what should each agent do?" Several algorithms and analyses have been developed for the two-player zero-sum case [22,27,28,29,30]. The general-sum case is significantly more challenging [31], and algorithms either have strong assumptions, or need to track several different potential equilibria per agent [32,33], model other players to simplify the problem [34], or find a cyclic strategy composed of several policies obtained through multiple state-space sweeps [35]. Researchers have also studied the emergence of multi-agent coordination in the decentralized, partially observable MDP framework [36,37,38].
However, that approach relies on knowledge of the underlying Markov model, an unrealistic assumption for modeling real-world social dilemmas. In contrast, we take a descriptive view, and aim to answer "what social effects emerge when each agent uses a particular learning rule?" The purpose here then is to study and characterize the resulting learning dynamics, as in e.g., [13,15], rather than to design new learning algorithms. It is well known that the resulting "local decision process" could be non-Markovian from each agent's perspective [39]. This is a feature, not a bug, in descriptive work, since it is a property of the real environment that the model captures.

We use deep reinforcement learning as the basis for each agent in part because of its recent success with solving complex problems [25,40]. Also, temporal difference predictions have been observed in the brain [41], and this class of reinforcement learning algorithm is seen as a candidate theory of animal habit-learning [42].

3.1 Deep Multiagent Reinforcement Learning

Modern deep reinforcement learning methods take the perspective of an agent that must learn to maximize its cumulative long-term reward through trial-and-error interactions with its environment [43,44]. In the multi-agent setting, the i-th agent stores a function Q_i : O_i × A_i → R represented by a deep Q-network (DQN). See [25] for details in the single-agent case. In our case the true state s is observed differently by each player, as o_i = O(s, i). However, for consistency of notation, we use a shorthand: Q_i(s, a) = Q_i(O(s, i), a).

During learning, to encourage exploration we parameterize the i-th agent's policy by

π_i(s) = argmax_{a ∈ A_i} Q_i(s, a) with probability 1 − ε, and π_i(s) ∼ U(A_i) with probability ε,

where U(A_i) denotes a sample from the uniform distribution over A_i. Each agent updates its policy given a stored batch¹ of experienced transitions {(s, a, r_i, s′)_t : t = 1, . . . , T}, such that

Q_i(s, a) ← Q_i(s, a) + α [ r_i + γ max_{a′ ∈ A_i} Q_i(s′, a′) − Q_i(s, a) ].

This is a "growing batch" approach to reinforcement learning in the sense of [45]. However, it does not grow in an unbounded fashion. Rather, old data is discarded so the batch can be constantly refreshed with new data reflecting more recent transitions. We compared batch sizes of 1e5 (our default) and 1e6 in our experiments (see Sect. 5.3). The network representing the function Q is trained through gradient descent on the mean squared Bellman residual, with the expectation taken over transitions uniformly sampled from the batch (see [25]). Since the batch is constantly refreshed, the Q-network may adapt to the changing data distribution arising from the effects of learning on π_1 and π_2.

In order to make learning in SSDs tractable, we make the extra assumption that each individual agent's learning depends on the other agent's learning only via the (slowly) changing distribution of experience it generates. That is, the two learning agents are "independent" of one another and each regards the other as part of the environment. From the perspective of player one, the learning of player two shows up as a non-stationary environment. The independence assumption can be seen as a particular kind of bounded rationality: agents do no recursive reasoning about one another's learning. In principle, this restriction could be dropped through the use of planning-based reinforcement learning methods like those of [24].

4. SIMULATION METHODS

Both games studied here were implemented in a 2D gridworld game engine. The state s_t and the joint action of all players a determine the state at the next time-step, s_{t+1}. Observations O(s, i) ∈ R^{3×16×21} (RGB) of the true state s_t depended on the player's current position and orientation. The observation window extended 15 grid squares ahead and 10 grid squares from side to side (see Fig. 3B).
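The exploration policy and growing-batch update of Sect. 3.1 can be sketched in tabular form (our own simplification; the paper trains a deep Q-network by gradient descent on the Bellman residual rather than a table, and the class and method names below are hypothetical):

```python
import collections
import random

class IndependentQLearner:
    """One self-interested learner: epsilon-greedy exploration plus
    Q updates sampled from a bounded, constantly refreshed batch of
    transitions (old data is discarded when the batch is full)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.99, capacity=100_000):
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.q = collections.defaultdict(lambda: [0.0] * n_actions)
        self.batch = collections.deque(maxlen=capacity)  # growing batch

    def act(self, obs, epsilon):
        # Greedy action with probability 1 - epsilon, uniform otherwise.
        if random.random() < epsilon:
            return random.randrange(self.n_actions)
        values = self.q[obs]
        return max(range(self.n_actions), key=values.__getitem__)

    def store(self, obs, action, reward, next_obs):
        self.batch.append((obs, action, reward, next_obs))

    def update(self, n_samples=32):
        # Sample transitions uniformly from the batch and apply the
        # Q-learning update toward the bootstrapped target.
        for s, a, r, s_next in random.choices(self.batch, k=n_samples):
            target = r + self.gamma * max(self.q[s_next])
            self.q[s][a] += self.alpha * (target - self.q[s][a])
```

The linear epsilon decay from 1.0 to 0.1 described in the simulation methods can be realized as `epsilon = 1.0 - 0.9 * min(step / total_steps, 1.0)`.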
The action set contained 8 agent-centered actions: step forward, step backward, step left, step right, rotate left, rotate right, use beam, and stand still. Each player appears blue in its own local view, light-blue in its teammate's view, and red in its opponent's view. Each episode lasted for 1,000 steps. Default neural networks had two hidden layers with 32 units, interleaved with rectified linear layers, projecting to an output layer with 8 units, one for each action. During training, players implemented epsilon-greedy policies, with epsilon decaying linearly over time (from 1.0 to 0.1). The default per-time-step discount rate was 0.99.

(Footnote 1: The batch is sometimes called a "replay buffer", e.g. [25].)

Figure 4: Top: Gathering. Shown is the beam-use rate (aggressiveness) as a function of re-spawn time of apples N_apple (abundance) and re-spawn time of agents N_tagged (conflict-cost). These results show that agents learn aggressive policies in environments that combine a scarcity of resources with the possibility of costly action. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. Bottom: Wolfpack. Shown is two minus the average number of wolves per capture as a function of the capture radius and group capture benefit (r_team). Again as expected, greater group benefit and larger capture radius lead to an increase in wolves per capture, indicating a higher degree of cooperation.

5. RESULTS

In this section, we describe three experiments: one for each game (Gathering and Wolfpack), and a third experiment investigating parameters that influence the emergence of cooperation versus defection.

5.1 Experiment 1: Gathering

The goal of the Gathering game is to collect apples, represented by green pixels (see Fig. 3A). When a player collects
A player hit by the beam twice is "tagged" and removed from the game for N tagged frames. No rewards are delivered to either player for tagging. The only potential motivation for tagging is competition over the apples. Refer to the Gathering gameplay video 2 for demonstration. Intuitively, a defecting policy in this game is one that is aggressive-i.e., involving frequent attempts to tag rival players to remove them from the game. Such a policy is motivated by the opportunity to take all the apples for oneself that arises after eliminating the other player. By contrast, a cooperative policy is one that does not seek to tag the other player. This suggests the use of a social behavior met-ric (section 2.2) that measures a policy's tendency to use the beam action as the basis for its classification as defection or cooperation. To this end, we counted the number of beam actions during a time horizon and normalized it by the amount of time in which both agents were playing (not removed from the game). By manipulating the rate at which apples respawn after being collected, N apple , we could control the abundance of apples in the environment. Similarly, by manipulating the number of timesteps for which a tagged agent is removed from the game, N tagged , we could control the cost of potential conflict. We wanted to test whether conflict would emerge from learning in environments where apples were scarce. We considered the effect of abundance (N apple ) and conflict-cost (N tagged ) on the level of aggressiveness (beamuse rate) that emerges from learning. Fig. 4A shows the beam-use rate that evolved after training for 40 million steps as a function of abundance (N apple ) and conflict-cost (N tagged ). Supplementary video 3 shows how such emergent conflict evolves over the course of learning. In this case, differences in beam-use rate (proxy for the tendency to defect) learned in the different environments emerge quite early in training and mostly persist throughout. 
When learning does change the beam-use rate, it is almost always to increase it. We noted that the policies learned in environments with low abundance or high conflict-cost were highly aggressive, while the policies learned with high abundance or low conflict-cost were less aggressive. That is, the Gathering game predicts that conflict may emerge from competition for scarce resources, but is less likely to emerge when resources are plentiful.

To further characterize the mixed motivation structure of the Gathering game, we carried out the empirical game-theoretic analysis suggested by the definition of section 2.2. We chose the set of policies Π^C that were trained in the high abundance / low conflict-cost environments (low-aggression policies) and Π^D as the policies trained in the low abundance / high conflict-cost environments (high-aggression policies), and used these to compute empirical payoff matrices as follows. Two pairs of policies (π^C_1, π^D_1) and (π^C_2, π^D_2) are sampled from Π^C and Π^D and matched against each other in the Gathering game for one episode. The resulting rewards are assigned to individual cells of a matrix game, in which π^C_i corresponds to the cooperative action for player i, and π^D_j to the defective action for player j. This process is repeated until convergence of the cell values, and generates estimates of R, P, S, and T for the game corresponding to each abundance / conflict-cost (N_apple, N_tagged) level tested. See Figure 5 for an illustration of this workflow.

Fig. 6A summarizes the types of empirical games that were found given our parameter spectrum. Most cases where the social dilemma inequalities (1)-(4) held, i.e., where the strategic scenario was a social dilemma, turned out to be a Prisoner's Dilemma. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself. The fear motivation reflects the danger of being taken out oneself by a defecting rival.
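The estimation procedure just described can be sketched as follows (our own illustration; `play_episode` is a hypothetical stand-in for running one Gathering episode and returning both players' returns):

```python
import random

def empirical_payoff_matrix(pi_C, pi_D, play_episode, n_samples=100):
    """Estimate the empirical payoff matrix (R, P, S, T) by sampling
    one policy per player from the cooperative set pi_C or the
    defecting set pi_D, playing out an episode, and averaging the
    first player's return for each of the four pairings."""
    def cell(set1, set2):
        total = 0.0
        for _ in range(n_samples):
            r1, _r2 = play_episode(random.choice(set1), random.choice(set2))
            total += r1
        return total / n_samples

    R = cell(pi_C, pi_C)  # mutual cooperation
    P = cell(pi_D, pi_D)  # mutual defection
    S = cell(pi_C, pi_D)  # cooperator exploited by a defector
    T = cell(pi_D, pi_C)  # defector exploiting a cooperator
    return R, P, S, T
```

The resulting tuple can then be checked against the social dilemma inequalities (1)-(4) to classify the game, as in section 2.2.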
P is preferred to S in the Gathering game because mutual defection typically leads to the two players alternately tagging one another, so each gets some time alone to collect apples, whereas the agent receiving the outcome S does not try to tag its rival and thus never gets this chance.

(Footnote 3: https://goo.gl/w2VqlQ)

5.2 Experiment 2: Wolfpack

The Wolfpack game requires two players (wolves) to chase a third player (the prey). When either wolf touches the prey, all wolves within the capture radius (see Fig. 3B) receive a reward. The reward received by the capturing wolves is proportional to the number of wolves in the capture radius. The idea is that a lone wolf can capture the prey, but is at risk of losing the carcass to scavengers. However, when the two wolves capture the prey together, they can better protect the carcass from scavengers and hence receive a higher reward. A lone-wolf capture provides a reward of r_lone and a capture involving both wolves is worth r_team. Refer to the Wolfpack gameplay video⁴ for a demonstration.

The wolves learn to catch the prey over the course of training. Fig. 4B shows the effect on the average number of wolves per capture obtained from training in environments with varying levels of group capture bonus r_team/r_lone and capture radius. Supplementary video⁵ shows how this dependency evolves over learning time. As in the Gathering game, these results show that environment parameters influence how cooperative the learned policies will be. It is interesting that two different cooperative policies emerged from these experiments. On the one hand, the wolves could cooperate by first finding one another and then moving together to hunt the prey; on the other hand, a wolf could first find the prey and then wait for the other wolf to arrive before capturing it.
Analogous to our analysis of the Gathering game, we choose Π_C and Π_D for Wolfpack to be the sets of policies learned in the high radius / group-bonus and low radius / group-bonus environments, respectively. The procedure for estimating R, P, S, and T was the same as in section 5.1. Fig. 6B summarizes these results. Interestingly, it turns out that all three classic MGSDs (chicken, stag hunt, and prisoner's dilemma) can be found in the empirical payoff matrices of Wolfpack.

Experiment 3: Agent parameters influencing the emergence of defection

So far we have described how properties of the environment influence emergent social outcomes. Next we consider the impact of manipulating properties of the agents. Psychological research attempting to elucidate the motivational factors underlying human cooperation is relevant here. In particular, Social Psychology has advanced various hypotheses concerning psychological variables that may influence cooperation and give rise to the observed individual differences in human cooperative behavior in laboratory-based social dilemmas [2]. These factors include consideration-of-future-consequences [46], trust [47], affect (interestingly, it is negative emotions that turn out to promote cooperation [48]), and a personality variable called social value orientation, characterized by other-regarding preferences. The latter has been studied by [49] in a Markov game social dilemma setup similar to our SSD setting. Obviously the relatively simple DQN learning agents we consider here do not have internal variables that directly correspond to the factors identified by Social Psychology. Nor should they be expected to capture the full range of human individual differences in laboratory social dilemmas.

Figure 5: Workflow to obtain empirical payoff matrices from Markov games. Agents are trained under different environmental conditions, e.g., with high or low abundance (Gathering case) or team capture bonus (Wolfpack case), resulting in agents classified as cooperators (π^C ∈ Π_C) or defectors (π^D ∈ Π_D). Empirical game payoffs are estimated by sampling (π_1, π_2) from Π_C × Π_C, Π_C × Π_D, Π_D × Π_C, and Π_D × Π_D. By repeatedly playing out the resulting games between the sampled π_1 and π_2, and averaging the results, it is possible to estimate the payoffs for each cell of the matrix.

Nevertheless, it is interesting to consider just how far one can go down this road of modeling Social Psychology hypotheses using such simple learning agents. Recall also that DQN is in the class of reinforcement learning algorithms that is generally considered to be the leading candidate theory of animal habit-learning [50, 42]. Thus, the interpretation of our model is that it only addresses whatever part of cooperative behavior arises "by habit" as opposed to conscious deliberation. Experimental manipulations of DQN parameters yield consistent and interpretable effects on emergent social behavior. Each plot in Fig. 7 shows the relevant social behavior metric, conflict for Gathering and lone-wolf behavior for Wolfpack, as a function of an environment parameter: N_apple, N_tagged (Gathering) and r_team/r_lone (Wolfpack). The figure shows that in both games, agents with a greater discount parameter (less time discounting) more readily defect than agents that discount the future more steeply. For Gathering this likely occurs because the defection policy of tagging the other player to temporarily remove them from the game only provides a delayed reward, in the form of the increased opportunity to collect apples without interference. However, when abundance is very high, even the agents with higher discount factors do not learn to defect. In such paradisiacal settings, the apples respawn so quickly that an individual agent cannot collect them quickly enough. As a consequence, there is no motivation to defect regardless of the temporal discount rate.
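The discount-factor effect admits a quick numeric illustration consistent with the explanation above. Both reward streams below are invented for the sketch: tagging pays nothing while the agent aims, then yields uncontested apples; peaceful collecting pays a steady but contested income.

```python
def discounted_return(rewards, gamma):
    # Present value of a reward stream r_0, r_1, ... under discount factor gamma.
    return sum(r * gamma ** t for t, r in enumerate(rewards))

tag = [0, 0, 0, 3, 3, 3]      # delayed payoff: aim and tag first, harvest later
collect = [1, 1, 1, 1, 1, 1]  # immediate but smaller, shared payoff

# A steep discounter (gamma = 0.5) prefers peaceful collecting ...
assert discounted_return(tag, 0.5) < discounted_return(collect, 0.5)
# ... while a patient agent (gamma = 0.99) prefers the delayed tagging payoff.
assert discounted_return(tag, 0.99) > discounted_return(collect, 0.99)
```

The crossover is the whole point: whether defection looks attractive depends on how steeply the agent discounts the delayed part of its payoff.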
Manipulating the size of the stored-and-constantly-refreshed batch of experience used to train each DQN agent has the opposite effect on the emergence of defection. A larger batch size translates into more experience with the other agent's policy. For Gathering, this means that avoiding being tagged becomes easier: evasive action benefits more from extra experience than the ability to target the other agent does. For Wolfpack, a larger batch size allows greater opportunity to learn to coordinate to jointly catch the prey. Possibly the most interesting effect on behavior comes from the number of hidden units in the neural network behind the agents, which may be interpreted as their cognitive capacity. Curves for the tendency to defect are shown in the right column of Fig. 7, comparing two different network sizes. For Gathering, an increase in network size leads to an increase in the agent's tendency to defect, whereas for Wolfpack the opposite is true: greater network size leads to less defection. This can be explained as follows. In Gathering, defection behavior is more complex and requires a larger network size to learn than cooperative behavior. This is the case because defection requires the difficult task of targeting the opposing agent with the beam, whereas peacefully collecting apples is almost independent of the opposing agent's behavior. In Wolfpack, cooperative behavior is more complex and requires a larger network size because the agents need to coordinate their hunting behaviors to collect the team reward, whereas the lone-wolf behavior does not require coordination with the other agent and hence requires less network capacity. Note that the qualitative difference in effects for network size supports our argument that the richer framework of SSDs is needed to capture important aspects of real social dilemmas. This rather striking difference between Gathering and Wolfpack is invisible to purely matrix-game-based MGSD modeling.
It only emerges when the different complexities of cooperative or defecting behaviors, and hence the difficulty of the corresponding learning problems, are modeled in a sequential setup such as an SSD.

DISCUSSION

In the Wolfpack game, learning a defecting lone-wolf policy is easier than learning a cooperative pack-hunting policy. This is because the former does not require actions to be conditioned on the presence of a partner within the capture radius. In the Gathering game the situation is reversed. Cooperative policies are easier to learn since they need only be concerned with apples and need not depend on the rival player's actions. However, optimally efficient cooperative policies may still require such coordination to prevent situations where both players simultaneously move on the same apple. Cooperation and defection demand differing levels of coordination in the two games: Wolfpack's cooperative policy requires greater coordination than its defecting policy, while Gathering's defection policy requires greater coordination (to successfully aim at the rival player). Both the Gathering and Wolfpack games contain embedded MGSDs with prisoner's dilemma-type payoffs. The MGSD model thus regards them as structurally identical. Yet, viewed as SSDs, they make rather different predictions. This suggests a new dimension on which to investigate classic questions concerning the evolution of cooperation. For any to-be-modeled phenomenon, the question now arises: which SSD is a better description of the game being played? If Gathering is a better model, then we would expect cooperation to be the easier-to-learn "default" policy, probably requiring less coordination. For situations where Wolfpack is the better model, defection is the easier-to-learn "default" behavior and cooperation is the harder-to-learn policy requiring greater coordination.
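Under the fear/greed decomposition used for Fig. 6 (fear = P − S, greed = T − R), an empirical payoff matrix can be classified mechanically. The sketch below assumes the standard MGSD inequality set (R > P, R > S, 2R > T + S, plus at least one of greed or fear); the function name and example numbers are ours:

```python
def classify_matrix(R, P, S, T):
    """Map an empirical payoff matrix to its social dilemma type."""
    fear, greed = P - S, T - R
    is_dilemma = (R > P) and (R > S) and (2 * R > T + S) and (greed > 0 or fear > 0)
    if not is_dilemma:
        return "no dilemma"
    if greed > 0 and fear > 0:
        return "prisoner's dilemma"  # top right quadrant of Fig. 6
    return "chicken" if greed > 0 else "stag hunt"
```

For example, classify_matrix(3, 1, 0, 4) exhibits both motivations and comes out as a prisoner's dilemma, matching the quadrant layout described for Fig. 6.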
These modeling choices are somewhat orthogonal to the issue of assigning values to the various possible outcomes (the only degree of freedom in MGSD-modeling), yet they make a large difference to the results. SSD models address similar research questions as MGSD models, e.g., the evolution of cooperation. However, SSD models are more realistic since they capture the sequential structure of real-world social dilemmas. Of course, in modeling, greater verisimilitude is not automatically virtuous. When choosing between two models of a given phenomenon, Occam's razor demands we prefer the simpler one. If SSDs were just more realistic models that led to the same conclusions as MGSDs, then they would not be especially useful. This, however, is not the case. We argue that the implication of the results presented here is that standard evolutionary and learning-based approaches to modeling the trial-and-error process through which societies converge on equilibria of social dilemmas are unable to address the following important learning-related phenomena.

1. Learning which strategic decision to make, abstractly, whether to cooperate or defect, often occurs simultaneously with learning how to efficiently implement said decision.
2. It may be difficult to learn how to implement an effective cooperation policy with a partner bent on defection, or vice versa.
3. Implementing effective cooperation or defection may involve solving coordination subproblems, but there is no guarantee this would occur, or that cooperation and defection would rely on coordination to the same extent. In some strategic situations, cooperation may require coordination, e.g., standing aside to allow a partner's passage through a narrow corridor, while in others defection may require coordination, e.g., blocking a rival from passing.
4. Some strategic situations may allow for multiple different implementations of cooperation, and each may require coordination to a greater or lesser extent. The same goes for multiple implementations of defection.
5. The complexity of learning how to implement effective cooperation and defection policies may not be equal. One or the other might be significantly easier to learn, solely due to implementation complexity, in a manner that cannot be accounted for by adjusting outcome values in an MGSD model.

Our general method of tracking social behavior metrics in addition to reward while manipulating parameters of the learning environment is widely applicable. One could use these techniques to simulate the effects of external interventions on social equilibria in cases where the sequential structure of cooperation and defection is important. Notice that several of the examples in Schelling's seminal book Micromotives and Macrobehavior [51] can be seen as temporally extended social dilemmas for which policies have been learned over the course of repeated interaction, including the famous opening example of lecture-hall seating behavior. It is also possible to define SSDs that model the extraction of renewable vs. non-renewable resources and track the sustainability of the emergent social behaviors while taking into account the varying difficulties of learning sustainable (cooperating) vs. non-sustainable (defecting) policies. Effects stemming from the need to learn implementations for strategic decisions may be especially important for informed policy-making concerning such real-world social dilemmas.

Figure 1: Canonical matrix game social dilemmas. Left: outcome variables R, P, S, and T are mapped to cells of the game matrix.

Figure 2: Venn diagram showing the relationship between Markov games, repeated matrix games, MGSDs, and SSDs. A repeated matrix game is an MGSD when it satisfies the social dilemma inequalities (eqs. 1-4). A Markov game with |S| > 1 is an SSD when it can be mapped by empirical game-theoretic analysis (EGTA) to an MGSD. Many SSDs may map to the same MGSD.

A two-player partially observable Markov game M is defined by a set of states S and an observation function O : S × {1, 2} → R^d specifying each player's d-dimensional view, along with two sets of actions allowable from any state, A_1 and A_2, one for each player; a transition function T : S × A_1 × A_2 → ∆(S), where ∆(S) denotes the set of discrete probability distributions over S; and a reward function for each player, r_i : S × A_1 × A_2 → R for player i. Let O_i = {o_i | s ∈ S, o_i = O(s, i)} be the observation space of player i. To choose actions, each player uses a policy π_i : O_i → ∆(A_i).

Figure 4: Social outcomes are influenced by environment parameters. Top: Gathering.

Figure 6: Summary of matrix games discovered within Gathering (left) and Wolfpack (right) through extracting empirical payoff matrices. The games are classified by social dilemma type, indicated by color and quadrant. With the x-axis representing fear = P − S and the y-axis representing greed = T − R, the lower right quadrant contains Stag Hunt type games (green), the top left quadrant Chicken type games (blue), and the top right quadrant Prisoner's Dilemma type games (red). Non-SSD type games, which either violate social dilemma condition (1) or do not exhibit fear or greed, are shown as well.

Figure 7: Factors influencing the emergence of defecting policies. Top row: Gathering. Shown are plots of average beam-use rate (aggressiveness) as a function of N_apple (scarcity). Bottom row: Wolfpack. Shown are plots of (two minus) average wolves per capture (lone-wolf capture rate) as a function of r_team (group benefit). For both Gathering and Wolfpack we vary the following factors: temporal discount (left), batch size (centre), and network size (right). Note that while the effects of discount factor and batch size on the tendency to defect point in the same direction for Gathering and Wolfpack, network size has the opposite effect (see text for discussion).

Footnote links: https://goo.gl/2xczLc, https://goo.gl/AgXtTn, https://goo.gl/vcB8mU. The contrasting approach that seeks to build more structure into the reinforcement learning agents to enable more interpretable experimental manipulations is also interesting and complementary, e.g., [24].

These authors contributed equally.

Acknowledgments. The authors would like to thank Chrisantha Fernando, Toby Ord, and Peter Sunehag for fruitful discussions in the lead-up to this work, and Charles Beattie, Denis Teplyashin, and Stig Petersen for software engineering support.

References

[1] Anatol Rapoport. Prisoner's dilemma – recollections and observations. In Game Theory as a Theory of a Conflict Resolution, pages 17-34. Springer, 1974.
[2] Paul A. M. Van Lange, Jeff Joireman, Craig D. Parks, and Eric Van Dijk. The psychology of social dilemmas: A review. Organizational Behavior and Human Decision Processes, 120(2):125-141, 2013.
[3] Michael W. Macy and Andreas Flache. Learning dynamics in social dilemmas. Proceedings of the National Academy of Sciences, 99(suppl 3):7229-7236, 2002.
[4] Robert L. Trivers. The evolution of reciprocal altruism. Quarterly Review of Biology, pages 35-57, 1971.
[5] Robert Axelrod. The Evolution of Cooperation. Basic Books, 1984.
[6] Martin A. Nowak and Karl Sigmund. Tit for tat in heterogeneous populations. Nature, 355(6357):250-253, 1992.
[7] Martin Nowak, Karl Sigmund, et al. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the prisoner's dilemma game. Nature, 364(6432):56-58, 1993.
[8] Martin A. Nowak and Karl Sigmund. Evolution of indirect reciprocity by image scoring. Nature, 393(6685):573-577, 1998.
[9] Robert Axelrod. An evolutionary approach to norms. American Political Science Review, 80(04):1095-1111, 1986.
[10] Samhar Mahmoud, Simon Miles, and Michael Luck. Cooperation emergence under resource-constrained peer punishment. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pages 900-908. International Foundation for Autonomous Agents and Multiagent Systems, 2016.
[11] T. W. Sandholm and R. H. Crites. Multiagent reinforcement learning in the iterated prisoner's dilemma. Biosystems, 37(1-2):147-166, 1996.
[12] Enrique Munoz de Cote, Alessandro Lazaric, and Marcello Restelli. Learning to cooperate in multi-agent social dilemmas. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2006.
[13] M. Wunder, M. Littman, and M. Babes. Classes of multiagent Q-learning dynamics with greedy exploration. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[14] Erik Zawadzki, Asher Lipson, and Kevin Leyton-Brown. Empirically evaluating multiagent learning algorithms. CoRR, abs/1401.8074, 2014.
[15] Daan Bloembergen, Karl Tuyls, Daniel Hennes, and Michael Kaisers. Evolutionary dynamics of multi-agent learning: A survey. Journal of Artificial Intelligence Research, 53:659-697, 2015.
[16] Martin A. Nowak and Robert M. May. Evolutionary games and spatial chaos. Nature, 359(6398):826-829, 1992.
[17] Chao Yu, Minjie Zhang, Fenghui Ren, and Guozhen Tan. Emotional multiagent reinforcement learning in spatial social dilemmas. IEEE Transactions on Neural Networks and Learning Systems, 26(12):3083-3096, 2015.
[18] Hisashi Ohtsuki, Christoph Hauert, Erez Lieberman, and Martin A. Nowak. A simple rule for the evolution of cooperation on graphs and social networks. Nature, 441(7092):502-505, 2006.
[19] Francisco C. Santos and Jorge M. Pacheco. A new route to the evolution of cooperation. Journal of Evolutionary Biology, 19(3):726-733, 2006.
[20] William E. Walsh, Rajarshi Das, Gerald Tesauro, and Jeffrey O. Kephart. Analyzing complex strategic interactions in multi-agent systems. In AAAI-02 Workshop on Game-Theoretic and Decision-Theoretic Agents, pages 109-118, 2002.
[21] Michael Wellman. Methods for empirical game-theoretic analysis (extended abstract). In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 1552-1555, 2006.
[22] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the 11th International Conference on Machine Learning (ICML), pages 157-163, 1994.
[23] Ann Nowé, Peter Vrancx, and Yann-Michaël De Hauwere. Game theory and multiagent reinforcement learning. In Marco Wiering and Martijn van Otterlo, editors, Reinforcement Learning: State-of-the-Art, chapter 14. Springer, 2012.
[24] Max Kleiman-Weiner, M. K. Ho, J. L. Austerweil, Michael L. Littman, and Josh B. Tenenbaum. Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction. In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016.
[25] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[26] Y. Shoham, R. Powers, and T. Grenager. If multi-agent learning is the answer, what is the question? Artificial Intelligence, 171(7):365-377, 2007.
[27] M. G. Lagoudakis and R. Parr. Value function approximation in zero-sum Markov games. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (UAI), pages 283-292, 2002.
[28] J. Pérolat, B. Scherrer, B. Piot, and O. Pietquin. Approximate dynamic programming for two-player zero-sum Markov games. In Proceedings of the International Conference on Machine Learning (ICML), 2015.
[29] J. Pérolat, B. Piot, M. Geist, B. Scherrer, and O. Pietquin. Softened approximate policy iteration for Markov games. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
[30] Branislav Bošanský, Viliam Lisý, Marc Lanctot, Jiří Cermák, and Mark H. M. Winands. Algorithms for computing strategies in two-player simultaneous move games. Artificial Intelligence, 237:1-40, 2016.
[31] M. Zinkevich, A. Greenwald, and M. Littman. Cyclic equilibria in Markov games. In Neural Information Processing Systems, 2006.
[32] J. Hu and M. P. Wellman. Multiagent reinforcement learning: Theoretical framework and an algorithm. In Proceedings of the 15th International Conference on Machine Learning (ICML), pages 242-250, 1998.
[33] A. Greenwald and K. Hall. Correlated-Q learning. In Proceedings of the 20th International Conference on Machine Learning (ICML), pages 242-249, 2003.
[34] Michael Littman. Friend-or-foe Q-learning in general-sum games. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 322-328, 2001.
[35] J. Pérolat, B. Piot, B. Scherrer, and O. Pietquin. On the use of non-stationary strategies for solving two-player zero-sum Markov games. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, 2016.
[36] Piotr J. Gmytrasiewicz and Prashant Doshi. A framework for sequential planning in multi-agent settings. Journal of Artificial Intelligence Research, 24:49-79, 2005.
[37] Pradeep Varakantham, Jun-young Kwak, Matthew E. Taylor, Janusz Marecki, Paul Scerri, and Milind Tambe. Exploiting coordination locales in distributed POMDPs via social model shaping. In Proceedings of the 19th International Conference on Automated Planning and Scheduling (ICAPS), 2009.
[38] Raphen Becker, Shlomo Zilberstein, Victor Lesser, and Claudia V. Goldman. Solving transition independent decentralized Markov decision processes. Journal of Artificial Intelligence Research, 22:423-455, 2004.
[39] Guillaume J. Laurent, Laëtitia Matignon, and N. Le Fort-Piat. The world of independent learners is not Markovian. Int. J. Know.-Based Intell. Eng. Syst., 15(1):55-64, 2011.
[40] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484-489, 2016.
[41] W. Schultz, P. Dayan, and P. R. Montague. A neural substrate of prediction and reward. Science, 275(5306):1593-1599, 1997.
[42] Y. Niv. Reinforcement learning in the brain. The Journal of Mathematical Psychology, 53(3):139-154, 2009.
[43] Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
[44] Michael L. Littman. Reinforcement learning improves behaviour from evaluative feedback. Nature, 521(7553):445-451, 2015.
[45] Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement Learning, pages 45-73. Springer, 2012.
[46] Katherine V. Kortenkamp and Colleen F. Moore. Time, uncertainty, and individual differences in decisions to cooperate in resource dilemmas. Personality and Social Psychology Bulletin, 32(5):603-615, 2006.
[47] Craig D. Parks and Lorne G. Hulbert. High and low trusters' responses to fear in a payoff matrix. Journal of Conflict Resolution, 39(4):718-730, 1995.
[48] Hui Bing Tan and Joseph P. Forgas. When happiness makes us selfish, but sadness makes us fair: Affective influences on interpersonal strategies in the dictator game. Journal of Experimental Social Psychology, 46(3):571-576, 2010.
[49] Joseph L. Austerweil, Stephen Brawner, Amy Greenwald, Elizabeth Hilliard, Mark Ho, Michael L. Littman, James MacGlashan, and Carl Trimbach. How other-regarding preferences can promote cooperation in non-zero-sum grid games. In Proceedings of the AAAI Symposium on Challenges and Opportunities in Multiagent Learning for the Real World, 2016.
[50] Nathaniel D. Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12):1704-1711, 2005.
[51] Thomas C. Schelling. Micromotives and Macrobehavior. W. W. Norton & Company, 1978 (revised 2006).
[]
Disentangling wrong-way risk: pricing CVA via change of measures and drift adjustment
Damiano Brigo, Frédéric Vrins
A key driver of Credit Value Adjustment (CVA) is the possible dependency between exposure and counterparty credit risk, known as Wrong-Way Risk (WWR). At this time, addressing WWR in a way that is both sound and tractable remains challenging: arbitrage-free setups have been proposed by academic research through dynamic models, but they are computationally intensive and hard to use in practice. Tractable alternatives based on resampling techniques have been proposed by the industry, but they lack mathematical foundations. This probably explains why WWR is not explicitly handled in the Basel III regulatory framework in spite of its acknowledged importance. The purpose of this paper is to propose a new method that offers an appealing compromise: we start from a stochastic intensity approach and end up with a pricing problem where WWR does not enter the picture explicitly. This result is achieved thanks to a set of changes of measure: the WWR effect is now embedded in the drift of the exposure, and this adjustment can be approximated by a deterministic function without affecting the level of accuracy typically required for CVA figures. The performance of our approach is illustrated through an extensive comparison of Expected Positive Exposure (EPE) profiles and CVA figures produced either by (i) the standard method relying on a full bivariate Monte Carlo framework or (ii) our drift-adjustment approximation. Given the uncertainty inherent to CVA, the proposed method is believed to provide a promising way to handle WWR in a sound and tractable way.
DOI: 10.2139/ssrn.3366804
arXiv: 1611.02877 (https://arxiv.org/pdf/1611.02877v1.pdf)
First version: February 29, 2016. This version: November 10, 2016.

Keywords: counterparty risk; CVA; wrong-way risk; stochastic intensity; jump-diffusions; change of measure; drift adjustment; wrong way measure.
1 Introduction

The 2008 financial crisis stressed the importance of accounting for counterparty risk in the valuation of OTC transactions, even when the latter are secured via (clearly imperfect) collateral agreements. Counterparty default risk calls for a price adjustment when valuing OTC derivatives, called Credit Value Adjustment (CVA). This adjustment depends on the traded portfolio $\Pi$ and the counterparty C. It represents the market value of the expected losses on the portfolio in case C defaults prior to the portfolio maturity $T$. Alternatively, it can be seen as today's price of replacing the counterparty in the financial transactions constituting the portfolio; see for example [12], [14], [22].

The mathematical expression of this adjustment can be derived in a rather easy way within a risk-neutral pricing framework. Yet, the computation of the resulting conditional expectation poses some problems when addressing Wrong-Way Risk (WWR), that is, when accounting for the possible statistical dependence between exposure and counterparty credit risk. Several techniques have been proposed to tackle this point. At this time, there are two main approaches: the dynamic approach (either structural or reduced-form) and the static (resampling) approach. The first provides an arbitrage-free setup and is popular among academic researchers. Unfortunately, it has the major disadvantage of being computationally intensive and cumbersome, which makes its practical use difficult. On the other hand, the second approach does not have a rigorous justification, but it has the nice feature of providing the industry with a tractable alternative to evaluate WWR in a rather simple way.
In spite of its significance, WWR is currently not explicitly accounted for in the Basel III regulatory framework; the lack of a sound yet tractable way to handle it is probably one of the reasons. In this paper, we revisit the CVA problem under WWR and propose an appealing way to handle it in a sound yet tractable way. We show how CVA with WWR can be written as CVA without WWR, provided that the exposure dynamics is modified accordingly. This will be achieved via a set of measures called "wrong way measures".

The paper is organized as follows. Section 2 recalls the fundamental CVA pricing formulae with and without WWR. Next, in Section 3, we briefly review the most popular techniques to address WWR in CVA computations. We then focus on the case where default risk is managed in a stochastic intensity framework, and consider a Cox process setting more specifically. Section 4 introduces a set of new numéraires that generate equivalent martingale measures called wrong way measures (WWM). Equipped with these new measures, the CVA problem with WWR takes a form similar to that of the CVA problem without WWR, provided that one changes the measure under which the expectation of the positive exposure is computed. Section 5 is dedicated to the computation of the exposure dynamics under the WWM. Particular attention is paid to the stochastic drift adjustment under affine intensity models. In order to reduce the complexity of the pricing problem, the stochastic drift adjustment is approximated by a deterministic function; the WWR effect is thus fully encapsulated in the exposure's drift via a deterministic adjustment. Finally, Section 6 proposes an extensive analysis of the performance of the proposed approach in comparison with the standard stochastic intensity method featuring Euler discretizations of the bivariate stochastic differential equation (SDE) governing the joint dynamics of default intensity (credit spread risk) and portfolio value (market risk).
2 Counterparty risk adjustment

Define the short (risk-free) rate process $r = (r_t)_{t\ge 0}$ and the corresponding bank account numéraire $B_t := e^{\int_0^t r_s ds}$, so that the deflator $B := (B_t)_{t\ge 0}$ has dynamics $dB_t = r_t B_t\, dt$. Under the no-arbitrage assumption, there exists a risk-neutral probability measure $Q$ associated to this numéraire, in the sense that it makes all $B$-discounted non-dividend-paying tradeable assets $Q$-martingales. In this setup, CVA can be computed as the $Q$-expectation of the non-recovered losses resulting from the counterparty's default, discounted according to $B$. More explicitly, if $R$ stands for the recovery rate of C and $V_t$ is the close-out price of $\Pi$ at time $t$, the general formula for the CVA on a portfolio $\Pi$ traded with a counterparty C whose default time is modeled via the random variable $\tau > 0$ is given by (see for example [11]):

$$\mathrm{CVA} = E^B\left[(1-R)\, 1\!\!1_{\{\tau \le T\}}\, \frac{V_\tau^+}{B_\tau}\right] = (1-R)\, E^B\left[ E^B\left[ H_T\, \frac{V_\tau^+}{B_\tau} \,\Big|\, \sigma(H_u,\, 0\le u\le T) \right]\right]$$

where $E^B$ denotes the expectation operator under $Q$, $H := (H_t)_{t\ge 0}$ is the default indicator process defined as $H_t := 1\!\!1_{\{\tau \le t\}}$, and the second equality results from the assumption that $R$ is constant and from the tower property. The outer expectation can be written as an integral with respect to the risk-neutral survival probability $G(t) := Q[\tau > t] = E^B\left[1\!\!1_{\{\tau > t\}}\right]$. The survival probability is a deterministic, positive and decreasing function satisfying $G(0) = 1$, typically expressed as $G(t) = e^{-\int_0^t h(s)\,ds}$ where $h$ is a non-negative function called the hazard rate. In practice, this curve is bootstrapped from market quotes of securities driven by the creditworthiness of C, i.e. defaultable bonds or credit default swaps (CDS).
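As a concrete illustration of the relation $G(t) = e^{-\int_0^t h(s)ds}$, the sketch below builds a survival curve from a piecewise-constant hazard rate, the typical shape produced by a CDS bootstrap (knots and hazard values are made-up examples, not market data):

```python
import numpy as np

# Piecewise-constant hazard rate h(s): h_vals[i] applies on [knots[i], knots[i+1]).
# Knots and hazard values are illustrative, not market data.
knots = np.array([0.0, 1.0, 3.0, 5.0])
h_vals = np.array([0.010, 0.020, 0.025])

def survival(t):
    """G(t) = exp(-int_0^t h(s) ds) for the piecewise-constant hazard above."""
    # time spent in each hazard bucket before t
    spent = np.clip(t - knots[:-1], 0.0, np.diff(knots))
    return np.exp(-np.dot(h_vals, spent))

G2 = survival(2.0)  # exp(-(0.010*1 + 0.020*1))
```

Flat or piecewise-constant hazards are a convenient assumption here; any non-negative $h$ integrated numerically would do.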
If $\tau$ admits a density, the expression for CVA then becomes

$$\mathrm{CVA} = -(1-R)\int_0^T E^B\left[\frac{V_t^+}{B_t}\,\Big|\,\tau = t\right] dG(t)\,. \qquad (1)$$

In the case where the portfolio $\Pi$ is independent of $\tau$, one can drop the conditioning in the above expectation to obtain the so-called standard (or independent) CVA formula:

$$\mathrm{CVA}^\perp = -(1-R)\int_0^T E^B\left[\frac{V_t^+}{B_t}\right] dG(t)\,, \qquad (2)$$

where the superscript $\perp$ denotes that the related quantity is computed under the independence assumption. The deterministic function integrated with respect to the survival probability is called the (discounted) expected positive exposure, also known under the acronym EPE:

$$\mathrm{EPE}^\perp(t) := E^B\left[\frac{V_t^+}{B_t}\right].$$

Under this independence assumption, CVA takes the form of a weighted (continuous) sum of European call option prices with strike 0, where the underlying of the option is the residual value of the portfolio $\Pi$.

3 Wrong way risk

In the more general case where the market value of $\Pi$ depends on the default time $\tau$, we cannot drop the conditioning in expectation (1), and one has to account for the dependency between credit and exposure. Depending on the sign of this relationship and, more generally, on the joint distribution of the portfolio value and the default time, this can increase or decrease the CVA; when CVA is increased, this effect is known as wrong way risk (WWR). When CVA decreases, this is called right way risk. In this paper we will use the term "wrong way risk" to loosely denote both situations. In order to capture this effect, we need to model exposure and credit jointly. The first named author and co-authors pioneered the literature on WWR in a series of papers using a variety of modeling approaches across asset classes. In interest rate markets, the analysis of WWR on uncollateralized interest rate portfolios is studied in [16] via intensity models for credit risk, while WWR on collateralized interest rate portfolios is studied in [8].
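Once $\mathrm{EPE}^\perp$ and $G$ are known, the independent formula (2) reduces to a one-dimensional quadrature. A minimal sketch under toy assumptions of our own (flat hazard rate, zero rates so $B_t \equiv 1$, and an exposure profile $\mathrm{EPE}^\perp(t) = \nu\sqrt{t}\,\phi(0)$, the forward-type Gaussian case revisited in Section 6):

```python
import numpy as np

def cva_indep(T=5.0, R=0.4, h=0.02, nu=1.0, n=2000):
    """CVA_perp = -(1-R) * int_0^T EPE_perp(t) dG(t), by a left Riemann sum.

    Toy assumptions: zero rates (B_t = 1), flat hazard h so G(t) = exp(-h*t),
    and EPE_perp(t) = nu*sqrt(t)*phi(0) for V_t ~ N(0, nu*sqrt(t))."""
    t = np.linspace(0.0, T, n + 1)
    G = np.exp(-h * t)
    epe = nu * np.sqrt(t) / np.sqrt(2.0 * np.pi)   # phi(0) = 1/sqrt(2*pi)
    # -(1-R) * sum_i EPE(t_i) * (G(t_{i+1}) - G(t_i))
    return -(1.0 - R) * np.sum(epe[:-1] * np.diff(G))

cva = cva_indep()
```

The left Riemann sum is deliberately simple; since $dG < 0$, the integrand is effectively weighted by the default probability mass in each interval.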
For credit markets, and uncollateralized CDS in particular, WWR is considered in [9], where intensity models and copula functions are used; WWR on collateralized CDS with collateral and gap risk is studied with the same technical tools in [7]. WWR on commodities, and oil swaps in particular, has been studied in [6] via intensity models, and WWR on equity is studied in [15] resorting to analytically tractable first passage (AT1P) firm value models. Most of these studies are summarized in the monograph [14].

3.1 Two approaches for one problem

Two main approaches have been proposed in the literature to tackle WWR. They all aim at coupling portfolio value and default likelihood in a tractable way. The first approach (called dynamic) consists in modeling creditworthiness using stochastic processes. Within this first class of models one distinguishes two setups. The first dynamic setup (structural model) relies on Merton's approach to model the firm value. Default is reached as soon as the firm value goes below a barrier representing the level of the firm's assets. This method is very popular in credit risk in general, except for pricing purposes, as it is known to underestimate short-term default risk (see e.g. [2, 14, 15] and references therein for CVA pricing methods using structural credit models). The second dynamic setup (reduced-form model) consists in modeling the default likelihood via a stochastic intensity process. In this setup, default is unpredictable; only the default likelihood is modeled. In the sequel we restrict ourselves to stochastic intensity models, which constitute the most popular dynamic setup for CVA purposes (see for instance [14], [16], [23], [26]). This first class of models is mathematically sound in the sense that it can be arbitrage-free if handled properly. However, as pointed out in [26], dealing with this additional stochastic process may be computationally intensive.
Hence, practitioners developed a second class of models, called static, to get rid of these difficulties. They consist in coupling exposure and credit using a copula, a specific function that creates a valid multivariate distribution from univariate ones (this is not to be confused with the copula connecting two default times that was used for example in [7], resulting in a more rigorous formulation in that context). This method is also known as a resampling technique and is very popular among practitioners, as it drastically simplifies the way CVA can be evaluated under WWR. In particular, it is numerically interesting in the sense that in a first phase one can consider exposure and credit separately, and then, in a second phase, introduce the dependence effect a posteriori by joining the corresponding distributions via a copula (see e.g. [26] and [28] for further reading on this technique). Clearly, this way of coupling exposure and credit risk is somewhat artificial. In particular, it is known to suffer from potential arbitrage opportunities, contrary to the WWR approaches listed earlier.

In summary, there are two classes of models: dynamic arbitrage-free models that are computationally demanding and hard to use in practice, and static resampling models that have no sound mathematical justification but provide the industry with a tractable alternative. Later in the paper we explain how one can develop a framework encompassing the best of both the static and dynamic approaches without their inconveniences. In particular, we circumvent the technical difficulty inherent to stochastic intensity models with the help of changes of measure. Before doing so, we provide the reader with additional details regarding the stochastic intensity model setup.

3.2 CVA under a stochastic intensity model

The reduced-form approach relies on a change of filtrations. The filtration $\mathbb{G} := (\mathcal{G}_t)_{t\ge 0}$ represents the total information available to the investors on the market.
In our context, this can be viewed as all relevant asset prices and/or risk factors. All stochastic processes considered here are thus defined on a complete filtered probability space $(\Omega, \mathcal{G}, \mathbb{G} = (\mathcal{G}_t)_{0\le t\le T}, Q)$, where $Q$ is the risk-neutral measure and $\mathcal{G} := \mathcal{G}_T$ with $T$ the investment horizon (which can be considered here as the portfolio maturity). We can define $\mathbb{F} := (\mathcal{F}_t)_{0\le t\le T}$ as the largest subfiltration of $\mathbb{G}$ preventing the default time $\tau$ from being an $\mathbb{F}$-stopping time. In other words, $\mathbb{F}$ contains the same information as $\mathbb{G}$ except that the default indicator process $H$ is not observable (i.e. $H$ is adapted to $\mathbb{G}$ but not to $\mathbb{F}$). In other terms, we are assuming the total market filtration $\mathbb{G}$ to be separable into $\mathbb{F}$ and the pure default monitoring filtration $\mathbb{H}$, where $\mathbb{H} = (\mathcal{H}_t)_{0\le t\le T}$, $\mathcal{H}_t = \sigma(H_u,\, 0\le u\le t)$ and $\mathcal{G}_t = \mathcal{H}_t \vee \mathcal{F}_t$.

A key quantity for tackling default is the Azéma $(Q, \mathbb{F})$-supermartingale (see [19]), defined as the projection of the survival indicator onto the subfiltration $\mathbb{F}$:

$$S_t := E^B\left[ 1\!\!1_{\{\tau > t\}} \,\big|\, \mathcal{F}_t \right] = Q[\tau > t \,|\, \mathcal{F}_t]\,.$$

The financial interpretation of $S_t$ is that of a survival probability at $t$ given only the observation of the default-free filtration $\mathbb{F}$ up to $t$, without default monitoring. Formally, the stochastic process $S$ is linked to the survival probability $G$ by the law of iterated expectations:

$$E^B[S_t] = E^B\left[ E^B\left[ 1\!\!1_{\{\tau > t\}} \,\big|\, \mathcal{F}_t\right]\right] = E^B\left[1\!\!1_{\{\tau > t\}}\right] = Q[\tau > t] = G(t)\,. \qquad (3)$$

In many practical applications, the curve $G$ is given exogenously from market quotes (bonds or credit default swaps). When this is the case, the above relationship puts constraints on the dynamics of $S$, and the equality $E^B[S_t] = G(t)$ is then referred to as the calibration equation. A very important result from stochastic calculus is the so-called Key Lemma (Lemma 3.1.3 in [3]), which allows one to get rid of the explicit default time $\tau$, focusing on the Azéma supermartingale instead.
Applying this lemma to CVA yields the following equation, which holds whenever $V_\tau H_T$ is $Q$-integrable and $V$ is $\mathbb{F}$-predictable:

$$\mathrm{CVA} = (1-R)\, E^B\left[ \frac{V_\tau^+}{B_\tau}\, 1\!\!1_{\{\tau \le T\}} \right] = -(1-R)\, E^B\left[ \int_0^T \frac{V_t^+}{B_t}\, dS_t \right]. \qquad (4)$$

The above result can be understood intuitively by localizing the default time in any possibly small interval $(t, t+dt]$, for $t$ spanning the whole maturity horizon $[0, T]$. Defining $dH_t := H_{t+dt} - H_t = 1\!\!1_{\{\tau \in (t, t+dt]\}}$, one gets

$$E^B\left[\frac{V_\tau^+}{B_\tau}\, H_T\right] = \int_0^T E^B\left[\frac{V_t^+}{B_t}\, dH_t\right] = \int_0^T E^B\left[\frac{V_t^+}{B_t}\, E^B[dH_t \,|\, \mathcal{F}_t]\right] = -\int_0^T E^B\left[\frac{V_t^+}{B_t}\, dS_t\right] \qquad (5)$$

where we have used Fubini's theorem, the tower property and the assumption that $V$ is $\mathbb{F}$-adapted, hence independent of $H$.

3.3 CVA in the Cox process setup

An interesting specific case of Azéma supermartingale arises when $S$ is positive and decreasing from $S_0 = 1$ with zero quadratic variation. This corresponds to the Cox setup: the process $S$ can be parametrized as

$$S_t = e^{-\Lambda_t}\,, \qquad \text{where} \qquad \Lambda_t := \int_0^t \lambda_u\, du$$

and $\lambda := (\lambda_t)_{t\ge 0}$ is a non-negative, $\mathbb{F}$-adapted stochastic intensity process. In this specific case, one can think of $S := (S_t)_{t\ge 0}$ as a survival process, so that $\tau$ can be viewed as the (first) passage time of $S$ below a random threshold drawn from a standard uniform random variable independent of $S$. Then, CVA (including WWR) reduces to

$$\mathrm{CVA} = -(1-R)\int_0^T E^B\left[\frac{V_t^+}{B_t}\, \zeta_t\right] dG(t) \qquad (6)$$

where

$$\zeta_t := \frac{\lambda_t S_t}{h(t)\, G(t)}\,.$$

Remark 1. The process $\zeta$ represents the differential of the survival process $S$ normalized with respect to the differential of its time-0 $Q$-expectation $G$:

$$\zeta_t = \frac{\left.\frac{d Q[\tau > t \,|\, \mathcal{F}_s]}{dt}\right|_{s=t}}{\left.\frac{d Q[\tau > t \,|\, \mathcal{F}_s]}{dt}\right|_{s=0}}\,.$$

When $G$ is given exogenously from market quotes, the denominator can be considered as the prevailing market view of the default likelihood. From that perspective, $\zeta$ is a kind of model-to-market survival-rate change ratio.
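The Cox construction just described ($\tau$ as the first passage of $S$ below an independent standard-uniform threshold) is straightforward to simulate. The sketch below, with an illustrative square-root (CIR-type) intensity and made-up parameters, checks the defining property $E^B[1\!\!1_{\{\tau>t\}}] = E^B[S_t]$ by comparing the empirical survival frequency with the sample mean of $S_t$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t_end = 50_000, 200, 1.0
dt = t_end / n_steps
kappa, theta, sigma, lam0 = 0.5, 0.03, 0.1, 0.02   # illustrative CIR parameters

# Full-truncation Euler for d(lam) = kappa*(theta - lam) dt + sigma*sqrt(lam) dW
lam = np.full(n_paths, lam0)
Lam = np.zeros(n_paths)                 # integrated intensity Lambda_t
for _ in range(n_steps):
    lam_pos = np.maximum(lam, 0.0)
    Lam += lam_pos * dt                 # left-point quadrature of the integral
    lam = lam + kappa * (theta - lam_pos) * dt \
              + sigma * np.sqrt(lam_pos) * rng.standard_normal(n_paths) * np.sqrt(dt)

S = np.exp(-Lam)                        # survival process at t_end
U = rng.uniform(size=n_paths)           # independent uniform threshold per path
survived = U < S                        # tau > t_end  iff  S_{t_end} > U
emp = survived.mean()
```

The full-truncation Euler scheme is one standard positivity-preserving choice for square-root diffusions; any such scheme would do here.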
In the above expression,

$$\mathrm{EPE}(t) := E^B\left[\frac{V_t^+}{B_t}\, \zeta_t\right]$$

is the EPE under WWR: it is the deterministic profile to be integrated with respect to the survival probability curve $G$ to get the CVA (up to the constant $1-R$) including WWR, just like the EPE in the no-WWR case, eq. (2). Moreover, from the calibration equation (3),

$$E^B[\lambda_t S_t] = -E^B\left[\frac{d}{dt} S_t\right] = -\frac{d}{dt} E^B[S_t] = -\frac{d}{dt} G(t) := -G'(t) = h(t)\, G(t)\,,$$

so that $\zeta$ is a unit-$Q$-expectation, non-negative stochastic process. In the case of independence between exposure ($V$) and risk-free rates ($r$, $B$) on the one hand, and credit risk ($\lambda$, $S$) on the other hand, the expected value in eq. (6) can be factorized and the equation collapses to eq. (2):

$$\mathrm{CVA} = -(1-R)\int_0^T E^B\left[\frac{V_t^+}{B_t}\right] E^B[\zeta_t]\, dG(t) = -(1-R)\int_0^T \mathrm{EPE}^\perp(t)\, dG(t) = \mathrm{CVA}^\perp\,.$$

Footnote 4: Recall that eq. (4) holds provided that the portfolio value process $V$ does not depend on the explicit default random variable. However, it may well depend on creditworthiness quantities embedded in $\mathbb{F}$, typically credit spreads $\lambda$. Consider for example the case where the default time $\tau$ is modeled as the first jump of a Cox process with a strictly positive intensity process. In that case, $\tau = \Lambda^{-1}(\xi)$, with $\xi$ a standard exponential random variable independent of $\lambda$. Then, the portfolio value $V_t$ may depend on $\lambda$ up to $t$, but not on information about $\xi$.

Generally speaking, however, the factorization of expectations

$$E^B\left[\frac{V_t^+}{B_t}\, \zeta_t\right] = E^B\left[\frac{V_t^+}{B_t}\right] E^B[\zeta_t] \qquad (7)$$

is not valid. Because of WWR, we have to account for the potential statistical dependence between market and credit risk. This is typically obtained by modeling $V$, $r$ and $\lambda$ using correlated risk factors, e.g. correlated Brownian motions. One can even think of making $\lambda$ a deterministic function of the exposure $V$, as in [23].
This setup has the advantage of featuring parameters that are more intuitive from a trading or risk-management perspective than an instantaneous correlation between latent risk factors, but the calibration of the intensity process is more involved and depends on the specific portfolio composition. As the time-$t$ stochastic intensity $\lambda_t$ depends in a deterministic way on $V_t$, the survival process $S$ depends on the whole path of the exposure process up to $t$. In spite of this specificity, this approach fits in the stochastic intensity setup and hence in the class of methods covered here.

4 The Wrong Way Measure (WWM)

In the general case where eq. (7) does not hold, one needs to evaluate the left-hand expectation, which is much more involved than the right-hand side; this is the main reason why such models are not used in practice. Nevertheless, noting that $\zeta$ is a non-negative unit-expectation process, comparing eq. (2) with eq. (6) suggests that the problem could be addressed using change-of-measure techniques. In this section, we derive a set of equivalent martingale measures allowing us to obtain such a factorization of expectations, even in the presence of WWR. The main difference is the measure under which the expectations appearing on the right-hand side are taken.

4.1 Derivation of the EPE expression in the new measure

We start by specifying a bit further the filtered probability space on which all stochastic processes are defined. The filtration $\mathbb{F}$ is generated by a finite-dimensional Brownian motion $W$ driving exposure, rates and credit spreads. The filtration $\mathbb{G}$ is thus the market filtration obtained by enlarging $\mathbb{F}$ with the natural filtration of the default indicator $H$. Notice that in a Cox setup, all discounted assets that do not explicitly depend on $\tau$ (even if they depend on $\lambda$) are $Q$-martingales under both filtrations.
With this setup at hand, we can define $C^t_s$ as the time-$s$ price of an asset protecting one unit of currency against default of the counterparty over the period $(t, t+dt]$, $t \ge s$. Using the Key Lemma once again,

$$C^t_s := E^B\left[\frac{B_s}{B_t}\, 1\!\!1_{\{\tau \in (t, t+dt]\}} \,\Big|\, \mathcal{G}_s\right] = \frac{1\!\!1_{\{\tau > s\}}}{S_s}\, \underbrace{E^B\left[\frac{B_s}{B_t}\, \lambda_t S_t \,\Big|\, \mathcal{F}_s\right]}_{=:\,C^{\mathbb{F},t}_s}\, dt\,.$$

Because $B_s$ is $\mathcal{F}_s$-measurable, $C^{\mathbb{F},t}_s = B_s M^t_s$ where

$$M^t_s := E^B\left[\frac{1}{B_t}\, \lambda_t S_t \,\Big|\, \mathcal{F}_s\right].$$

With this notation, $C^{\mathbb{F},t}_t = B_t M^t_t = \lambda_t S_t$. It is easy to see that $C^{\mathbb{F},t}_s$ grows at the risk-free rate with respect to $s$ on $[0, t]$ ($t$ is fixed). Indeed, by the martingale representation theorem, the positive martingale $M^t_s$ can be written on $[0, t]$ as $dM^t_s = M^t_s\, \boldsymbol{\gamma}_s\, dW_s$. Therefore $C^{\mathbb{F},t}_s$ can be seen as the price of a tradeable asset on $[0, t]$ computed under partial ($\mathbb{F}$) information. Obviously, the corresponding expected rate of growth is equal to the risk-free rate under $Q$:

$$dC^{\mathbb{F},t}_s = d(B_s M^t_s) = dB_s\, M^t_s + B_s\, dM^t_s = r_s B_s M^t_s\, ds + B_s\, dM^t_s = r_s C^{\mathbb{F},t}_s\, ds + C^{\mathbb{F},t}_s\, \boldsymbol{\gamma}_s\, dW_s$$

or equivalently, by Ito's lemma,

$$d\log C^{\mathbb{F},t}_s = \left(r_s - \frac{\boldsymbol{\gamma}_s \boldsymbol{\gamma}_s^T}{2}\right) ds + \boldsymbol{\gamma}_s\, dW_s\,.$$

We can thus choose $C^{\mathbb{F},t}_s$ as numéraire for all $s$, $0 \le s \le t$, and write

$$E^B\left[\frac{V_t^+}{B_t}\, \lambda_t S_t\right] = E^{C^{\mathbb{F},t}}\left[\frac{C^{\mathbb{F},t}_0}{C^{\mathbb{F},t}_t}\, \lambda_t S_t\, V_t^+\right] = C^{\mathbb{F},t}_0\, E^{C^{\mathbb{F},t}}\left[V_t^+\right] = E^{C^{\mathbb{F},t}}\left[V_t^+\right] E^B\left[\frac{\lambda_t S_t}{B_t}\right],$$

or equivalently, rescaling by $1/(h(t)G(t))$,

$$E^B\left[\frac{V_t^+}{B_t}\, \zeta_t\right] = E^{C^{\mathbb{F},t}}\left[V_t^+\right] E^B\left[\frac{\zeta_t}{B_t}\right]. \qquad (8)$$

The probability measure associated to the expectation operator $E^{C^{\mathbb{F},t}}$ is denoted $Q^{C^{\mathbb{F},t}}$ and is called the Wrong Way Measure (WWM or WW measure). This measure will be further specified from $Q$ and the corresponding Radon-Nikodym derivative process in Section 4.3. A related measure based on a partial-information numéraire price had been introduced in Chapter 23 of [12] to derive the CDS options market model. Equation (8) is very similar to eq.
(7), except that (i) the right-hand-side expectation of the positive exposure is taken under another measure than $Q$, and (ii) the bank account numéraire $B$ does not appear in the first but in the second expectation, which embeds credit risk. In contrast with (7), (8) holds true whatever the actual dependency between all those risks. It yields another expression for the EPE under WWR:

$$\mathrm{EPE}(t) = E^{C^{\mathbb{F},t}}\left[V_t^+\right] E^B\left[\frac{\zeta_t}{B_t}\right]. \qquad (9)$$

4.2 EPE expression in the new measure under risk-free rate-credit independence

It is very common to assume independence between risk-free rates and credit. Indeed, such a potential relationship typically has a very limited numerical impact (see e.g. [5] for more details). With this additional assumption one gets $E^B\left[\frac{\lambda_t S_t}{B_t}\right] = -P^r(0,t)\, G'(t)$, where $P^r(s,t)$ is the time-$s$ price of a risk-free zero-coupon bond expiring at $t$, i.e. $P^r(s,t) := E^B\left[e^{-\int_s^t r_u du} \,\big|\, \mathcal{F}_s\right]$. Hence, under independence between the counterparty's credit risk and the bank account numéraire, CVA finally reads

$$\mathrm{CVA} = -(1-R)\int_0^T E^{C^{\mathbb{F},t}}\left[V_t^+\right] P^r(0,t)\, dG(t)\,. \qquad (10)$$

The above expression looks very similar to the standard CVA expression revisited assuming independence between all risk factors, namely

$$\mathrm{CVA}^\perp = -(1-R)\int_0^T E^B\left[V_t^+\right] P^r(0,t)\, dG(t)\,. \qquad (11)$$

Hence, the general CVA formula (10) (including WWR, but with the mild independence assumption between risk-free rates and counterparty credit) takes a form similar to the simple standard CVA (11) (i.e. without WWR) where, in addition, risk-free rates are deterministic. In this case, the general CVA expression is given by the independent CVA expression, but with $\mathrm{EPE}^\perp(t) = E^B[V_t^+]$ in eq. (11) replaced by $\mathrm{EPE}(t) = E^{C^{\mathbb{F},t}}[V_t^+]$. This observation suggests that changing the measure may indeed help dealing with WWR.

4.3 Radon-Nikodym derivative process

The numéraire $C^{\mathbb{F},t} = (C^{\mathbb{F},t}_s)_{0\le s\le t}$ is the numéraire associated with the WWM $Q^{C^{\mathbb{F},t}}$.
The corresponding Radon-Nikodym derivative process $Z^t$ is a $Q$-martingale on $[0, t]$:

$$Z^t_s := \left.\frac{dQ^{C^{\mathbb{F},t}}}{dQ}\right|_{\mathcal{F}_s} = \frac{C^{\mathbb{F},t}_s\, B_0}{C^{\mathbb{F},t}_0\, B_s} = \frac{M^t_s}{M^t_0} = \frac{E^B\left[\frac{\lambda_t S_t}{B_t} \,\big|\, \mathcal{F}_s\right]}{E^B\left[\frac{\lambda_t S_t}{B_t}\right]}\,. \qquad (12)$$

In the case of independence between rates and credit, $Z^t$ simplifies to

$$Z^t_s = \frac{P^r(s,t)\, E^B[\zeta_t \,|\, \mathcal{F}_s]}{B_s\, P^r(0,t)}\,.$$

In order for the CVA formula (10) to be useful in practice, we need to compute $E^{C^{\mathbb{F},t}}[V_t^+]$, that is, to derive the exposure dynamics under this new measure. This is the purpose of the next section.

5 Exposure's drift adjustment

In the sequel we restrict ourselves to the case where portfolio value and credit risk are driven by specific one-dimensional $Q$-Brownian motions, namely $W^V$ and $W^\lambda$, respectively. These two processes can be correlated, but $W^\lambda$ is independent of the short-rate drivers. These assumptions can be relaxed, but they help clarify the point we want to make, which is to show how one can get rid of the link between exposure and credit by changing the pricing measure. In this setup we assume that these Brownian motions and the short-rate drivers actually generate the filtration $\mathbb{F}$. We postulate continuous dynamics for $V$ under $Q$,

$$dV_s = \alpha_s\, ds + \beta_s\, dW^V_s\,, \qquad (13)$$

where we assume the processes $\alpha$ and $\beta$ to be continuous and $\mathbb{F}$-adapted, and derive the dynamics of $V$ under $Q^{C^{\mathbb{F},t}}$. It is known from Girsanov's theorem that

$$dW^V_s = d\widetilde{W}^V_s + d\left\langle W^V, \log C^{\mathbb{F},t}\right\rangle_s\,,$$

where $\widetilde{W}^V$ is a $(Q^{C^{\mathbb{F},t}}, \mathbb{F})$-Brownian motion. In other words, the dynamics of $V$ under $Q^{C^{\mathbb{F},t}}$ are

$$dV_s = \left(\alpha_s + \theta^t_s\right) ds + \beta_s\, d\widetilde{W}^V_s\,,$$

where the stochastic process $\theta^t_s$ is known as the drift adjustment. Standard results from stochastic calculus (see e.g. the change-of-numéraire toolkit presented in [12, Ch. II]) yield its general form:

$$\theta^t_s\, ds = \beta_s\, d\left\langle W^V, \log C^{\mathbb{F},t}\right\rangle_s\,.$$

Evaluating this cross-variation requires further specifying the risk-neutral dynamics of the numéraire $C^{\mathbb{F},t}$ and, in particular, the $Q$-dynamics of the default intensity $\lambda$.
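Before specializing the intensity dynamics, the Girsanov mechanics above can be illustrated on a stripped-down toy of our own making: assume the $dW^\lambda$-loading of $\log C^{\mathbb{F},t}$ is a deterministic function $g(s)$ (in the paper it is stochastic; making it deterministic is precisely what the approximations of Section 5.2 aim for). The Radon-Nikodym process is then the stochastic exponential $Z_s = \exp(\int_0^s g\, dW^\lambda_u - \tfrac12\int_0^s g^2\, du)$ and, for $V_s = \nu W^V_s$ with $d\langle W^V, W^\lambda\rangle_s = \rho\, ds$, the drift adjustment is $\nu\rho g(s)$. The sketch checks that the reweighted mean $E[Z_t V_t]$ matches the drift-adjusted mean $\nu\rho\int_0^t g(s)\,ds$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, t = 200_000, 100, 1.0
dt = t / n_steps
nu, rho, g = 1.0, 0.6, 0.5        # g(s) = 0.5 constant; all values illustrative

logZ = np.zeros(n_paths)
V = np.zeros(n_paths)
for _ in range(n_steps):
    dWl = rng.standard_normal(n_paths) * np.sqrt(dt)
    dWp = rng.standard_normal(n_paths) * np.sqrt(dt)   # independent component
    dWv = rho * dWl + np.sqrt(1 - rho**2) * dWp        # corr(W^V, W^lambda) = rho
    logZ += g * dWl - 0.5 * g**2 * dt                  # stochastic exponential
    V += nu * dWv

Z = np.exp(logZ)
reweighted = np.mean(Z * V)          # E[Z_t V_t] = mean of V under the new measure
drift_adjusted = nu * rho * g * t    # mean of dV = nu*rho*g ds + nu dW~ under it
```

Reweighting and drift adjustment are two implementations of the same measure change; the paper's point is that the latter removes the need to simulate the credit driver once the adjustment is deterministic.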
Again, we adopt a quite general framework:

$$d\lambda_s = \mu^\lambda_s\, ds + \sigma^\lambda_s\, dW^\lambda_s\,,$$

where $\mu^\lambda_s$ and $\sigma^\lambda_s$ are continuous $\mathbb{F}$-adapted stochastic processes. As $\lambda$ is assumed independent of $r$, the new numéraire takes the form

$$C^{\mathbb{F},t}_s = E^B\left[\frac{B_s}{B_t} \,\Big|\, \mathcal{F}_s\right] E^B\left[\lambda_t S_t \,|\, \mathcal{F}_s\right] = P^r(s,t)\, E^B\left[\lambda_t S_t \,|\, \mathcal{F}_s\right] = -P^r(s,t)\, S_s\, \frac{\partial P^\lambda(s,t)}{\partial t} \qquad (14)$$

where $P^\lambda(s,t) := E^B\left[e^{-\int_s^t \lambda_u du} \,\big|\, \mathcal{F}_s\right]$. In order to proceed, we must further specify the form of $P^\lambda(s,t)$.

5.1 Affine intensity and short-rate processes

In this section we derive the specific form of $\theta^t_s$ in the standard case where both the risk-free rate $r$ and the stochastic intensity $\lambda$ follow independent affine processes, the independence assumption being justified for example in [5, 12]:

$$P^r(s,t) = A^r(s,t)\, e^{-B^r(s,t)\, r_s}\,, \qquad P^\lambda(s,t) = A^\lambda(s,t)\, e^{-B^\lambda(s,t)\, \lambda_s}\,.$$

This in turn implies that, for $x \in \{r, \lambda\}$ (see for example [4]),

$$dx_s = \mu^x_s\, ds + \sigma^x_s\, dW^x_s$$

where $d\langle W^r, W^\lambda\rangle_t \equiv 0$ and the drift and diffusion coefficients of both processes take the specific affine form

$$\mu^x_s = \mu^x_s(x_s) = a(s) + b(s)\, x_s\,, \qquad \sigma^x_s = \sigma^x_s(x_s) = \sqrt{c(s) + d(s)\, x_s}$$

for some deterministic functions $a, b, c, d$. Since $P^\lambda(s,t) > 0$ for all $0\le s\le t$, one obviously gets $A^\lambda(s,t) > 0$, so we can write

$$P^\lambda_t(s,t) := \frac{\partial P^\lambda(s,t)}{\partial t} = \frac{\partial A^\lambda(s,t)}{\partial t}\, e^{-B^\lambda(s,t)\lambda_s} - \frac{\partial B^\lambda(s,t)}{\partial t}\, \lambda_s\, P^\lambda(s,t) = P^\lambda(s,t)\left(\frac{A^\lambda_t(s,t)}{A^\lambda(s,t)} - B^\lambda_t(s,t)\, \lambda_s\right).$$

Observe that the functions $A^x, B^x$ satisfy, for all $u \ge 0$ and $x\in\{r,\lambda\}$:

$$A^x(u,u) = B^x_t(u,u) = 1 \qquad \text{and} \qquad B^x(u,u) = A^x_t(u,u) = 0\,.$$

Plugging this expression for $P^\lambda_t(s,t)$ into (14), one obtains

$$C^{\mathbb{F},t}_s = -S_s\, P^r(s,t)\, P^\lambda(s,t)\left(\frac{A^\lambda_t(s,t)}{A^\lambda(s,t)} - B^\lambda_t(s,t)\, \lambda_s\right)$$

and Ito's lemma yields the dynamics of the log-numéraire, valid for $s\in[0,t]$:

$$d\log C^{\mathbb{F},t}_s = -\lambda_s\, ds + d\log P^r(s,t) + d\log P^\lambda(s,t) + d\log\left(B^\lambda_t(s,t)\, \lambda_s - \frac{A^\lambda_t(s,t)}{A^\lambda(s,t)}\right).$$

From the affine structure of $r$ and $\lambda$, the above equation becomes
$$d\log C^{\mathbb{F},t}_s = (\ldots)\,ds - B^\lambda(s,t)\, d\lambda_s - B^r(s,t)\, dr_s + \frac{1}{B^\lambda_t(s,t)\lambda_s - \frac{A^\lambda_t(s,t)}{A^\lambda(s,t)}}\, d\left(B^\lambda_t(s,t)\lambda_s - \frac{A^\lambda_t(s,t)}{A^\lambda(s,t)}\right)$$

$$= (\ldots)\,ds + \left(\frac{A^\lambda(s,t)\, B^\lambda_t(s,t)}{A^\lambda(s,t)\, B^\lambda_t(s,t)\, \lambda_s - A^\lambda_t(s,t)} - B^\lambda(s,t)\right)\sigma^\lambda_s\, dW^\lambda_s - B^r(s,t)\, \sigma^r_s\, dW^r_s\,. \qquad (15)$$

Finally, one gets the following expression for the drift adjustment:

$$\theta^t_s = \rho^\lambda_s\, \beta_s\, \sigma^\lambda_s \left(\frac{A^\lambda(s,t)\, B^\lambda_t(s,t)}{A^\lambda(s,t)\, B^\lambda_t(s,t)\, \lambda_s - A^\lambda_t(s,t)} - B^\lambda(s,t)\right) - \rho^r_s\, \beta_s\, \sigma^r_s\, B^r(s,t) \qquad (16)$$

where $\rho^\lambda_s$ represents the instantaneous correlation between the Brownian motions driving exposure and credit risk, $\rho^\lambda_s\, ds := d\langle W^V, W^\lambda\rangle_s$, and similarly $\rho^r_s\, ds := d\langle W^V, W^r\rangle_s$.

5.2 Deterministic approximation of the drift adjustment

Our main point in this paper is to investigate the WWR impact. Therefore, we focus on deterministic risk-free rates and a deterministic correlation $\rho^\lambda_s = \rho(s)$ in the sequel. This helps simplify the framework, since then $\log P^r(s,t)$ contributes zero to the quadratic variation of $\log C^{\mathbb{F},t}_s$. In such a framework, the drift adjustment simplifies to

$$\theta^t_s = \rho(s)\, \beta_s\, \sigma^\lambda_s \left(\frac{A^\lambda(s,t)\, B^\lambda_t(s,t)}{A^\lambda(s,t)\, B^\lambda_t(s,t)\, \lambda_s - A^\lambda_t(s,t)} - B^\lambda(s,t)\right). \qquad (17)$$

Remark 2. The Radon-Nikodym derivative process $Z^t$ derived in Section 4.3 is given as a conditional expectation of $\zeta$ rescaled by risk-free zero-coupon bond prices and the bank account numéraire. It is possible, however, to further specify the form of $Z^t$ as a function of the drift adjustment $\theta^t$ in the particular framework considered in this section. It is clear from eq. (12) that $Z^t_s = \frac{B_0}{C^{\mathbb{F},t}_0}\, \frac{C^{\mathbb{F},t}_s}{B_s}$, where $B_0 = 1$ and $\frac{C^{\mathbb{F},t}_s}{B_s}$ is a non-negative martingale. In the case of deterministic interest rates, the analytical expression of $C^{\mathbb{F},t}_s$ is easily obtained from the dynamics of $\log C^{\mathbb{F},t}$ given in (15), so that finally

$$Z^t_s = \exp\left(\int_0^s \tilde\theta^t_u\, dW^\lambda_u - \frac{1}{2}\int_0^s \left(\tilde\theta^t_u\right)^2 du\right)$$

where $\tilde\theta^t_s\, \rho(s)\, \beta_s := \theta^t_s$.
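As a concrete sketch under assumptions of our own, the adjustment (17) can be evaluated for a plain (unshifted) CIR intensity, for which $A^\lambda, B^\lambda$ are known in closed form and $\sigma^\lambda(x) = \sigma\sqrt{x}$, with the stochastic $\lambda_s$ frozen at a constant level $\bar\lambda$ (anticipating the deterministic proxies introduced next). The $t$-derivatives are obtained by central finite differences; the limit $\theta(t,t) = \rho\beta\sigma/\sqrt{\bar\lambda}$ provides a handy sanity check:

```python
import numpy as np

kappa, mu, sig = 0.5, 0.03, 0.1       # illustrative CIR intensity parameters
rho, beta, lam_bar = 0.5, 1.0, 0.02   # correlation, exposure vol, frozen intensity
gam = np.sqrt(kappa**2 + 2.0 * sig**2)

def B_cir(tau):
    e = np.exp(gam * tau) - 1.0
    return 2.0 * e / ((gam + kappa) * e + 2.0 * gam)

def A_cir(tau):
    e = np.exp(gam * tau) - 1.0
    num = 2.0 * gam * np.exp((gam + kappa) * tau / 2.0)
    return (num / ((gam + kappa) * e + 2.0 * gam)) ** (2.0 * kappa * mu / sig**2)

def theta_det(s, t, eps=1e-6):
    """Drift adjustment (17) with lambda_s frozen at lam_bar (deterministic proxy)."""
    tau = t - s
    A, B = A_cir(tau), B_cir(tau)
    # d/dt at fixed s equals d/d(tau) for a time-homogeneous model
    A_t = (A_cir(tau + eps) - A_cir(tau - eps)) / (2.0 * eps)
    B_t = (B_cir(tau + eps) - B_cir(tau - eps)) / (2.0 * eps)
    bracket = A * B_t / (A * B_t * lam_bar - A_t) - B
    return rho * beta * (sig * np.sqrt(lam_bar)) * bracket  # sigma^lambda = sig*sqrt(x)

th_mid = theta_det(0.5, 1.0)   # interior point: positive for rho > 0
th_lim = theta_det(1.0, 1.0)   # s -> t limit: rho * beta * sig / sqrt(lam_bar)
```

With $\rho > 0$ the adjustment is positive, i.e. the wrong-way measure tilts the exposure drift upwards, as intuition suggests.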
As the adjustment in the exposure drift features the stochastic intensity, $\theta^t = (\theta^t_s)_{0\le s\le t}$ is stochastic, and we cannot simplify the problem by avoiding the simulation of the driver $W^\lambda$ of the intensity process. In order to reduce the dimensionality of the problem, one can look for deterministic approximations $\theta(s,t)$ of $\theta^t_s$. We introduce below two easy alternatives.

Replace $\lambda_s$ by its expected value $\bar\lambda(s)$. A first method consists in replacing $\lambda_s$ by its expected value under $Q$, $\bar\lambda(s) := E^B[\lambda_s]$, in eq. (17):

$$\theta(s,t) = \rho(s)\, \beta_s\, \sigma^\lambda_s(\bar\lambda(s))\left(\frac{A^\lambda(s,t)\, B^\lambda_t(s,t)}{A^\lambda(s,t)\, B^\lambda_t(s,t)\, \bar\lambda(s) - A^\lambda_t(s,t)} - B^\lambda(s,t)\right), \qquad (18)$$

where we have used the notation $\sigma^\lambda_s(\lambda_s) := \sigma^\lambda_s$.

Replace $\lambda_s$ by the implied hazard rate $h(s)$. A second method consists in replacing $\lambda_s$ by $h(s)$ in eq. (17):

$$\theta(s,t) = \rho(s)\, \beta_s\, \sigma^\lambda_s(h(s))\left(\frac{A^\lambda(s,t)\, B^\lambda_t(s,t)}{A^\lambda(s,t)\, B^\lambda_t(s,t)\, h(s) - A^\lambda_t(s,t)} - B^\lambda(s,t)\right). \qquad (19)$$

Recall that $h(t)$ is the hazard rate implied by the survival probability $G(t) = P^\lambda(0,t)$.

Remark 3. As calibration to market data forces the equality $E^B\left[e^{-\int_0^t \lambda_s ds}\right] = e^{-\int_0^t h(s)ds}$, both methods are equivalent up to Jensen's effect:

$$e^{-\int_0^t h(s)ds} = E^B\left[e^{-\int_0^t \lambda_s ds}\right] \approx e^{-\int_0^t E^B[\lambda_s]ds} = e^{-\int_0^t \bar\lambda(s)ds}\,.$$

Another way to see the connection between the two approaches is to notice that $\bar\lambda(t)$ coincides with $h(t)$ as long as one can neglect the covariance between $\lambda$ and its integrated version $\Lambda$:

$$h(t) = -\frac{d}{dt}\ln G(t) = \frac{-G'(t)}{G(t)} = \frac{E^B[\lambda_t S_t]}{E^B[S_t]} = \bar\lambda(t) + \frac{\mathrm{Cov}^B[\lambda_t, S_t]}{E^B[S_t]} \approx \bar\lambda(t)\,.$$

5.3 Calibration and deterministically shifted affine processes

The class of affine processes is quite broad; Ornstein-Uhlenbeck (OU) and Cox-Ingersoll-Ross (CIR), including the case with jumps (JCIR, see [10] for related calculations for CDS and CDS options), all fit in this class. Unfortunately, these processes only have three degrees of freedom: not enough for the calibration equation (3) to hold in general.
This can be circumvented by considering homogeneous affine processes that are shifted in a deterministic way. In this setup, $\lambda$ becomes a shifted version of a (latent) homogeneous affine process $y$: $\lambda_t = y_t + \psi(t)$. The deterministic shift function $\psi$ is chosen to ensure that the model-implied function $P^\lambda(0,t)$ agrees with a given survival probability curve $G(t)$ provided externally. This is exactly eq. (3):
$G(t) = P^\lambda(0,t) = E^B\left[e^{-\int_0^t\lambda_u du}\right] = E^B\left[e^{-\int_0^t y_u du}\right]e^{-\int_0^t\psi(u)du} = P^y(0,t)\,e^{-\int_0^t\psi(u)du}.$
Hence, the shifted process $\lambda$ is affine too, with
$A^\lambda(s,t) = A^y(s,t)\,e^{B^y(s,t)\psi(s) - \Psi(s,t)}, \qquad B^\lambda(s,t) = B^y(s,t).$
These $A^\lambda$, $B^\lambda$ are the $A$, $B$ functions to be used in the drift adjustment (19). In particular,
$A^\lambda_t(s,t) = A^y_t(s,t)\,e^{B^y(s,t)\psi(s) - \Psi(s,t)} + A^\lambda(s,t)\left(B^y_t(s,t)\psi(s) - \psi(t)\right) = A^\lambda(s,t)\left(\frac{A^y_t(s,t)}{A^y(s,t)} + B^y_t(s,t)\psi(s) - \psi(t)\right),$
$B^\lambda_t(s,t) = B^y_t(s,t).$
Hence, the functions $A^\lambda$ and $B^\lambda$, as well as their derivatives, can easily be computed from the functions $A^y$, $B^y$ of the underlying process $y$ and the survival probability $G$. For the sake of completeness, we give the explicit formulae in the appendix for $y$ being OU, CIR and JCIR. The shifted versions $\lambda_t = y_t + \psi(t)$ are called Hull-White, CIR++ and JCIR++, respectively.

Numerical experiments

In this section we compare the WW-measure approach with deterministic approximation of the drift adjustment to the standard Monte Carlo setup featuring a 2D Euler scheme of the bivariate SDE driving the exposure and the intensity. We assume various Gaussian exposures and a CIR++ stochastic intensity, and we disregard the impact of discounting in order to put the focus on the treatment of the credit-exposure dependency. 5

Exposure processes, EPE and WWR-EPE

Assuming Gaussian exposures has the key advantage of leading to analytical expressions for EPEs. For instance, let $N(\mu, \sigma)$ stand for the Normal distribution with mean $\mu$ and standard deviation $\sigma$.
Then, assuming
$V_t(P) \sim N(\mu(t), \sigma(t)) \quad (20)$
for some deterministic functions $\mu(t)$, $\sigma(t) > 0$ and a probability measure $P$,
$E^P[V^+_t] = \sigma(t)\,\phi\left(\frac{\mu(t)}{\sigma(t)}\right) + \mu(t)\,\Phi\left(\frac{\mu(t)}{\sigma(t)}\right), \quad (21)$
where $\phi$ is the standard Normal density and $\Phi$ the corresponding cumulative distribution function. Hence, the Q-EPE is analytically tractable if the exposure dynamics under $Q$ features an affine drift and a deterministic diffusion coefficient, i.e. when $\alpha_s = \bar\alpha(s) + \alpha(s)V_s$ and $\beta_s = \beta(s)$ in (13). This includes the special cases where the exposure is modeled via an arithmetic Brownian motion, as in the Bachelier model, or as a mean-reverting Ornstein-Uhlenbeck process, as in Vasicek's model. In case the exact exposure dynamics is not Gaussian, one might consider the Gaussian assumption as an approximation, possibly obtained via moment-matching or drift-freezing techniques; see for example [18] for the lognormal case applied to basket options, and Chapters 4.4 and 4.5 of [14] for swap portfolios. Moreover, the profiles can take various forms, and can successfully depict the behavior of exposure profiles of equity return swaps or forward contracts (in the simple Brownian case), or exposure profiles of interest rate swaps (IRS, if drifted Brownian bridges are used instead). We focus below on specific exposures and stochastic intensity processes that lead to analytical tractability.

Forward-type Gaussian exposure

We choose for the coefficients of the exposure dynamics (13) $\alpha_s \equiv 0$ and $\beta_s \equiv \nu$, so that the exposure is a rescaled Brownian motion, implying $V_t(Q) \sim N(0, \nu\sqrt{t})$. Hence, the EPE collapses to the RHS of (21) with $\mu(t) = 0$ and $\sigma(t) = \nu\sqrt{t}$:
$\mathrm{EPE}^\perp(t) = \nu\sqrt{t}\,\phi(0).$
In order to compute the EPE under WWR for a given survival probability curve $G$, we consider an affine stochastic intensity process $\lambda$ and imply the shift function $\psi$ from the calibration equation $P^\lambda(0,t) = G(t)$ as in Section 5.3:
$\psi(t) = \frac{d}{dt}\ln\frac{P^y(0,t)}{G(t)}.$
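The shift calibration above can be sketched numerically. The code below (helper names are ours; the bond-style formula is the standard CIR zero-coupon expression $P^y(0,\tau) = A\,e^{-B y_0}$) computes $\psi(t)$ by finite differences and checks that integrating it recovers $\ln(P^y(0,T)/G(T))$, i.e. that the shifted model reprices the target survival curve.

```python
import math

def cir_bond(y0, kappa, theta, sigma, tau):
    """Survival factor P^y(0, tau) = A * exp(-B * y0) for a CIR process (standard formulae)."""
    if tau <= 0.0:
        return 1.0
    h = math.sqrt(kappa * kappa + 2.0 * sigma * sigma)
    denom = h + 0.5 * (kappa + h) * (math.exp(h * tau) - 1.0)
    A = (h * math.exp(0.5 * (kappa + h) * tau) / denom) ** (2.0 * kappa * theta / sigma ** 2)
    B = (math.exp(h * tau) - 1.0) / denom
    return A * math.exp(-B * y0)

def shift_psi(t, G, y0, kappa, theta, sigma, eps=1e-5):
    """psi(t) = d/dt ln(P^y(0,t) / G(t)), by central finite difference."""
    f = lambda u: math.log(cir_bond(y0, kappa, theta, sigma, u) / G(u))
    return (f(t + eps) - f(t - eps)) / (2.0 * eps)
```

Integrating `shift_psi` over $[0,T]$ (e.g. by the midpoint rule) gives $\Psi(T) = \ln(P^y(0,T)/G(T))$, so that $P^y(0,T)\,e^{-\Psi(T)} = G(T)$ as required by eq. (3).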
One can then easily estimate the EPE (and thus CVA) $E^B[V^+_t\zeta_t]$ by jointly simulating $\lambda$ and $V$ in a standard Monte Carlo framework. In this specific case, however, the change-of-measure technique proves to be very useful. Indeed, under the deterministic drift-adjustment approximation or under drift freezing, $V_t$ is, interestingly, again normally distributed under $Q^{C^{F,t}}$, with mean $\mu(t) = \Theta(t) := \int_0^t\theta(u,t)du$ and standard deviation $\sigma(t) = \nu\sqrt{t}$. Applying eq. (21) yields
$\mathrm{EPE}(t) \approx \nu\sqrt{t}\,\phi\left(\frac{\Theta(t)}{\nu\sqrt{t}}\right) + \Theta(t)\,\Phi\left(\frac{\Theta(t)}{\nu\sqrt{t}}\right).$

6.1.2 Swap-type, mean-reverting Gaussian exposure.

In this section we adopt an exposure profile that mimics that of an interest rate swap, in the sense that there is a pull-to-parity effect towards maturity. To that end, we use a drifted Brownian bridge. More explicitly, we follow [27, Th. 4.7.6] and set the coefficients in (13) to $\alpha_s = \gamma(T-s) - \frac{V_s}{T-s}$ and $\beta_s = \nu$, so that
$V_t = \gamma t(T-t) + \nu(T-t)\int_0^t\frac{1}{T-s}\,dW^V_s,$
leading to $V_t(Q) \sim N\left(\gamma t(T-t),\ \nu\sqrt{t(1-t/T)}\right)$. In this model, $\gamma$ governs the future expected moneyness of the swap implied by the forward curve and $\nu$ drives the volatility. Because the diffusion part is the same in both the forward-type and the swap-type SDEs of $V$, the drift-adjustment process $\theta^t$ takes the same form in either case. However, the marginal distributions of the WWR exposure change. We compute them below. Recall that the dynamics of an OU process with time-dependent coefficients takes the form
$dX_s = \kappa(s)(\eta(s) - X_s)ds + \epsilon(s)\,dW_s. \quad (22)$
The solution to this SDE is a Gaussian process, which can easily be found to be $X_t|X_s = G(s,t)(X_s + I(s,t) + J(s,t))$. In particular, $X_t$ is Normally distributed with mean $m(t)$ and variance $v(t)$:
$m(t) = G(s,t)\left(m(s) + I(s,t)\right) \qquad\text{and}\qquad v(t) = G^2(s,t)\left(v(s) + \int_s^t\left(\frac{\epsilon(u)}{G(s,u)}\right)^2 du\right).$
In the simplest case of constant volatility $\epsilon(s) = \epsilon$, $X_t$ is distributed as
$X_t(Q) \sim N\left(e^{-\int_0^t\kappa(u)du}\int_0^t\kappa(u)\eta(u)\,e^{\int_0^u\kappa(v)dv}du,\ \sqrt{e^{-2\int_0^t\kappa(u)du}\int_0^t\epsilon^2\,e^{2\int_0^u\kappa(v)dv}du}\right).$
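Eq. (21) and the forward-type profile above are straightforward to code. The sketch below (helper names are ours, not from the paper) implements the Gaussian expected positive exposure and the forward-type EPE, which reduces to $\nu\sqrt{t}\,\phi(0)$ when the drift adjustment vanishes.

```python
import math

def norm_pdf(x):
    """Standard Normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    """Standard Normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_epe(mu, sigma):
    """E[V_t^+] for V_t ~ N(mu, sigma), eq. (21); degenerate case sigma = 0 handled separately."""
    if sigma <= 0.0:
        return max(mu, 0.0)
    x = mu / sigma
    return sigma * norm_pdf(x) + mu * norm_cdf(x)

def forward_epe(t, nu, big_theta=0.0):
    """Forward-type exposure: big_theta = 0 gives EPE_perp(t), otherwise the WWR-adjusted EPE."""
    return gaussian_epe(big_theta, nu * math.sqrt(t))
```

A positive integrated drift adjustment `big_theta` raises the EPE above its independent value, which is the wrong-way effect discussed in the text.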
Basic algebra shows that under $Q^{C^{F,t}}$ the exposure $V_t$ takes the form (22) with
$\kappa(s) = (T-s)^{-1}, \qquad \eta(s) = (T-s)\left(\gamma(T-s) + \theta^t_s\right), \qquad \epsilon(s) = \nu.$
Unfortunately, $\eta(s)$ is not a deterministic function; it features the intensity process via $\theta^t_s$. However, it becomes deterministic if one replaces $\theta^t_s$ by its deterministic proxy $\theta(s,t)$:
$\eta(s) \approx (T-s)\left(\gamma(T-s) + \theta(s,t)\right).$
Under this approximation, the exposure becomes a generalized OU process and
$V_t\left(Q^{C^{F,t}}\right) \sim N\left(\gamma t(T-t) + (t-T)\int_0^t\frac{\theta(u,t)}{u-T}du,\ \nu\sqrt{t(1-t/T)}\right),$
providing a closed-form expression for the EPE under WWR, given by eq. (21) with $P = Q^{C^{F,t}}$ and
$\mu(t) = \gamma t(T-t) + (t-T)\int_0^t\frac{\theta(u,t)}{u-T}du, \qquad \sigma(t) = \nu\sqrt{t(1-t/T)}.$

Remark 4. Notice that some non-Gaussian exposures are analytically tractable too with the deterministic drift approximation. For instance, if one knows beforehand that the exposure will be positive, one can consider the coefficients in the exposure dynamics (13) to be $\alpha_s = \alpha(s)V_s$ and $\beta_s = \beta(s)V_s$, with $\alpha(\cdot)$ and $\beta(\cdot)$ deterministic functions of time:
$V_t = V_0\exp\left(\int_0^t\left(\alpha(s) - \frac{\beta^2(s)}{2}\right)ds + \int_0^t\beta(s)\,dW^V_s\right),$
leading to
$\mathrm{EPE}^\perp(t) = E^B[V^+_t] = E^B[V_t] = V_0\,e^{\int_0^t\alpha(s)ds}.$
Using Girsanov's theorem, the solution $V$ can be written as a function of the $Q^{C^{F,t}}$-Brownian motion:
$V_t = V_0\exp\left(\int_0^t\left(\alpha(s) + \theta^t_s - \frac{\beta^2(s)}{2}\right)ds + \int_0^t\beta(s)\,dW^V_s\right),$
where $\theta^t_s$ is given in (17) after replacing $\beta_s$ by $\beta(s)$. Using the deterministic approximation, one obtains
$\mathrm{EPE}(t) = E^{C^{F,t}}[V^+_t] = E^{C^{F,t}}[V_t] = V_0\,e^{\int_0^t\alpha(s)ds}\,E^{C^{F,t}}\left[e^{\int_0^t\theta^t_s ds}\right] \approx V_0\,e^{\int_0^t(\alpha(s)+\theta(s,t))ds} = \mathrm{EPE}^\perp(t)\,e^{\Theta(t)}.$
Note that the geometric Brownian motion assumption for $V$ here can be seen as an approximation stemming from moment matching; see for example [18] for the case of geometric Brownian motion.

Discretization schemes

The deterministic approximation to the stochastic drift adjustment resulting from the change of measure has the appealing feature of avoiding the simulation of the default intensity.
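The swap-type moments derived above can be sketched as follows (helper names are ours; the integral is approximated by a trapezoidal rule over a hypothetical deterministic proxy `theta_fn(u, t)`).

```python
import math

def swap_type_wwr_moments(t, T, gamma, nu, theta_fn, n=400):
    """Mean and standard deviation of V_t under the WW measure with a deterministic drift proxy.

    mu(t)    = gamma*t*(T-t) + (t-T) * int_0^t theta(u,t)/(u-T) du   (trapezoidal rule)
    sigma(t) = nu * sqrt(t * (1 - t/T))
    """
    du = t / n
    vals = [theta_fn(k * du, t) / (k * du - T) for k in range(n + 1)]
    integral = du * (0.5 * (vals[0] + vals[-1]) + sum(vals[1:-1]))
    mu = gamma * t * (T - t) + (t - T) * integral
    sigma = nu * math.sqrt(t * (1.0 - t / T))
    return mu, sigma
```

With `theta_fn` identically zero one recovers the Q-moments $\gamma t(T-t)$ and $\nu\sqrt{t(1-t/T)}$; plugging the returned pair into eq. (21) then yields the WWR EPE of the swap-type profile.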
In the case where the exposures are normally or lognormally distributed, this leads to semi-closed-form expressions for CVA, where only two numerical integrations are required (one to compute $\Theta(t) = \int_0^t\theta(u,t)du$ and the other to integrate the EPE profile with respect to the survival probability curve $G$ to get the final CVA). This contrasts with the standard method, which consists in the joint simulation of the exposure and the default intensity. To do so, one must rely on a 2D Monte Carlo scheme. In two dimensions, it is hard to avoid using small time steps, as the joint distribution of $(V, \lambda)$ is unavailable when $\lambda$ is CIR and $V$ is Gaussian or lognormal, for example, under non-zero quadratic covariation (or "correlation") between the driving Brownian motions. Several schemes have been tested for simulating the CIR process for $\lambda$. Most of them are comparable when the Feller condition is satisfied and the volatility is small. However, it is known that simulating such processes is typically difficult otherwise. In particular, performance deteriorates when the volatility is large. The CIR schemes can be divided in two classes. 6

Non-negative schemes

A first class of schemes avoids negative samples whatever the time step $\delta$ and the volatility. This is the case, for instance, of the "reflected scheme" originally introduced in [20]:
$y_{(i+1)\delta} = y_{i\delta} + \kappa(\theta - y_{i\delta})\delta + \sigma\sqrt{\delta y_{i\delta}}\,z_i, \quad (23)$
where the $z_i$ are iid standard Normal samples. Positivity can also be imposed using implicit schemes; see e.g. [1] and [5] for a discussion of the performances. The issue is that the convergence is rather disappointing: depending on the data, even a very small $\delta$ of 1E-5 may not be enough for the empirical expectation $\hat E^B[S_t]$ to fit the theoretical expression $P^\lambda(0,t)$ reasonably well. This is of course a major obstacle, as many (bivariate) sample paths have to be drawn, potentially for large maturities.
In fact, even in this simple framework, using these positive schemes quickly becomes unmanageable on a standard computer, because the time step required to ensure the above fit is too small. Fig. 1 illustrates this. We have simulated N = 300k sample paths from (23). The left chart compares the sample mean of the Azéma supermartingale with the theoretical survival probability; the middle chart shows the histogram of $\lambda_t$, and the right chart the proxies $\bar\lambda(t)$ (theoretical and empirical expectation of $\lambda_t$) and $h(t)$. One can see that all samples are non-negative (as expected), but the fit between the theoretical and empirical survival probability curves is quite poor.

Relaxing the non-negativity constraint

Alternatively, the scheme proposed by [20] and discussed in [25] seems to work well. It consists in the following discretization scheme:
$y_{(i+1)\delta} = y_{i\delta} + \kappa(\theta - y^+_{i\delta})\delta + \sigma\sqrt{\delta y^+_{i\delta}}\,z_i. \quad (24)$
As is clearly visible from the histogram in Fig. 2, this scheme has the major drawback of not preventing negative samples of the intensity for a finite $\delta$ (especially when the volatility is large). However, the fit between $\hat E^B[S_t]$ and $P^\lambda(0,t)$ (and similarly between $\bar\lambda_t$ and its empirical counterpart) is already pretty good for $\delta$ = 1E-2. As expected, the proportion of negative samples decreases with decreasing time step, but the computation time explodes (the computation times are comparable to those of the previous scheme). One can see by visual inspection that the fit between the theoretical and empirical survival probability curves is quite good even for $\delta$ = 1E-2. The choice of the discretization scheme will be shown to have little impact on CVA figures in Section 6.4.4.
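The two CIR discretizations can be sketched as below. This is our reading of schemes (23) and (24): the first variant enforces non-negativity by reflection (taking the absolute value after each step, one common form of the "reflected" scheme), while the second is the partial-truncation scheme, which may go negative but uses $y^+$ in drift and diffusion. Helper names are ours.

```python
import math
import random

def cir_step_reflected(y, kappa, theta, sigma, dt, z):
    """One Euler step kept non-negative by reflection; a common variant of scheme (23)."""
    return abs(y + kappa * (theta - y) * dt + sigma * math.sqrt(dt * y) * z)

def cir_step_truncated(y, kappa, theta, sigma, dt, z):
    """Partial-truncation step, scheme (24): y may go negative, y+ enters drift and diffusion."""
    yp = max(y, 0.0)
    return y + kappa * (theta - yp) * dt + sigma * math.sqrt(dt * yp) * z

def terminal_samples(step, y0, kappa, theta, sigma, T, n_steps, n_paths, seed=0):
    """Simulate n_paths terminal values y_T with the given one-step scheme."""
    rng = random.Random(seed)
    dt = T / n_steps
    out = []
    for _ in range(n_paths):
        y = y0
        for _ in range(n_steps):
            y = step(y, kappa, theta, sigma, dt, rng.gauss(0.0, 1.0))
        out.append(y)
    return out
```

A quick sanity check is to compare the empirical mean of $y_T$ with the exact CIR mean $y_0 e^{-\kappa T} + \theta(1 - e^{-\kappa T})$, in the spirit of the fit diagnostics of Figs. 1 and 2.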
Hence, one can opt indifferently for any standard scheme, provided that it can deal with cases where Feller's condition is violated. This violation often happens in real market cases, where large credit volatility tends to push trajectories up and down in a way that is not compatible with Feller's condition. We choose the scheme (24), which is rather standard in practice.

Discretization scheme for the JCIR

The JCIR process can easily be simulated by adjusting any of the above methods for sampling a CIR process for the jumps, path by path and period by period. Sample paths of the compound Poisson process are simulated independently, and at the end of each period featuring a jump, the corresponding CIR paths are adjusted by the associated jump size. Because of discretization errors, the scheme is of course satisfactory only for a time step $\delta$ small enough. It is worth mentioning that Giesecke and Smelov recently proposed in [21] an exact scheme to sample jump diffusions: the standard error looks comparable to that of a naive discretization, but the computational time is cut by more than 75% and, more interestingly, the bias is eliminated. We rely on the standard discretization algorithm in this paper, as our focus is precisely to propose a method allowing one to get rid of the intensity simulation.

Wrong-way EPE profiles

From the counterparty risk pricing point of view, only CVA (that is, the integral of the EPE profile with respect to the survival probability curve) matters. Yet, it is interesting to first have a look at the EPE profiles under wrong-way risk, i.e. at $\mathrm{EPE}(t) = E^B\left[\frac{V^+_t}{B_t}\zeta_t\right]$, as deterministic functions of time. This helps in assessing how good the change-of-measure technique is (combined with the deterministic approximation of the drift adjustment), not at the aggregate level, but for the exposure at a specific time. This is important for analysts monitoring counterparty exposure, and more generally for risk-management purposes.
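The period-by-period JCIR sampling described above can be sketched as follows (our sketch, not the paper's code): truncated-Euler CIR steps, with compound-Poisson jumps of exponentially distributed size (mean $\gamma$, arrival rate $\alpha$) added at the end of each period. A small Knuth-style Poisson sampler is included since the Python standard library does not provide one.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplication method; adequate for the small lambda*dt used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def jcir_terminal(y0, kappa, theta, sigma, alpha, gamma, T, n_steps, rng):
    """One JCIR terminal sample: scheme (24) for the diffusion part, plus end-of-period jumps."""
    dt = T / n_steps
    y = y0
    for _ in range(n_steps):
        yp = max(y, 0.0)
        y = y + kappa * (theta - yp) * dt + sigma * math.sqrt(dt * yp) * rng.gauss(0.0, 1.0)
        # add the jumps that arrived during this period
        for _ in range(poisson_sample(alpha * dt, rng)):
            y += rng.expovariate(1.0 / gamma)
    return y
```

A convenient sanity check under these dynamics is $E[y_T] = y_0 e^{-\kappa T} + (\theta + \alpha\gamma/\kappa)(1 - e^{-\kappa T})$, since the jump compensator acts as an extra contribution to the long-term mean.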
Therefore, in this section we provide EPE profiles for specific parameter values of the exposure and stochastic intensity processes. The parameter values are chosen such that specific EPE shapes are generated (e.g. exposure profiles that are not a monotonic function of correlation). This proves particularly interesting, as it allows us to analyze whether the drift-adjustment method is able to reproduce the subtleties of these profiles, like asymmetry and crossings for example. Figure 3 below shows EPE profiles for Gaussian (forward-type or equity return) exposures with CIR++ intensity ($\mu^\lambda_s = \kappa(\theta - \lambda_s)$ and $\sigma^\lambda_s = \sigma^\lambda(\lambda_s)$ with $\sigma^\lambda(x) = \sigma^\lambda\sqrt{x}$). The top panels show the EPE obtained by the full (2D) Monte Carlo simulation. They prove to be extremely close to the corresponding panels at the bottom, obtained semi-analytically with the measure change under the deterministic drift-adjustment approximation (19). The approximation performs very well for the CIR++ intensity in swap-type profiles too, as one can see from Fig. 4. Comparing the top and bottom rows of figures 3 and 4 suggests that the deterministic approximation of the drift adjustment preserves the ability of the change-of-measure approach to reproduce specific properties of the EPE profiles, including crossings and asymmetry. We have considered CIR and CIR++ here for intensities, but one might also wish to consider other affine models. Shifted OU (also known as Hull-White) is one of them. This extremely tractable model is very popular for interest-rates modeling. However, as it is a Gaussian process, it is not appropriate for default intensities or positive exposures, given the possibility of negative values. More specifically, the Q-EPE can be computed semi-analytically in the case of Gaussian exposures and OU dynamics for "intensities", in which case CVA can indeed become negative, which is clearly wrong (see e.g. [29]).
By contrast, expressions of the form $E^P[V^+_t]$ will of course always be non-negative, whatever the measure $P$, so there is no hope that the WWM approach will agree with the results found by computing the EPE under $Q$ directly. The reason for this mismatch is of course that the choice of the numéraire is not valid in this case, as it is not guaranteed to be positive. However, the change-of-measure technique is acceptable when the process parameters are such that $\lambda$ takes negative values with very small probability, at least when $V_t$ is positive (positive $\rho$). We do not discuss further the results corresponding to OU "intensities". Another possible model is JCIR (or its shifted version JCIR++). For the sake of brevity, we analyze the corresponding CVA figures directly in Section 6.4.3.

CVA figures

The above section emphasizes that the change-of-numéraire technique, in spite of the deterministic approximation of the drift adjustment, allows one to adequately represent the functional form of the EPE profiles under WWR. In this section we focus on CVA figures and compare the results obtained by using either the full Monte Carlo simulation or the semi-analytical results using the deterministic drift adjustment. Instead of specifying a given survival probability curve, we start from the CIR parameters and take $P^y(0,t)$ as $G(t)$, so that no shift is needed, i.e. $\lambda \equiv y$. This way of proceeding rules out potential problems of getting negative intensities as a result of a negative shift, and yields a large degree of freedom to play with the parameters.

Effect of the long-term mean

We fix the CIR parameters and play with four different values of the long-term mean (driving the slope of the CDS curve, i.e. contango or backwardation), as well as with the maturity, the type and the volatility of the exposure process. The corresponding CVA figures are given in Figure 5. Notice that the CVA is quoted in basis points upfront. The figures can be converted into a running premium (see Chapter 21.3 in [12] and [30]).
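For completeness, once an EPE profile is available, the upfront CVA is obtained by integrating it against the default distribution $-dG(t)$. The sketch below (our helper, unit loss-given-default and no discounting, consistent with the simplifications adopted in this section) uses a simple midpoint Riemann sum.

```python
import math

def upfront_cva(epe, G, T, n=200):
    """CVA ~ sum over periods of EPE(midpoint) * (G(t_k) - G(t_{k+1})); unit LGD, no discounting."""
    dt = T / n
    total = 0.0
    for k in range(n):
        t0, t1 = k * dt, (k + 1) * dt
        total += epe(0.5 * (t0 + t1)) * (G(t0) - G(t1))
    return total
```

Multiplying the result by 1E4 gives the upfront CVA in basis points, the unit used in the tables and figures.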
Comparison of performances for 4 sets of CIR parameters

Some possible sets of CIR parameters are given in Table 1; the Feller condition is violated in some cases, specifically in Set 4. Set 1 has been chosen exogenously, Set 2 is taken from [16], while Set 3 and Set 4 come from [17]. We refer to these works for the CDS implied volatilities and other market patterns implied by these parameters. Notice that Set 4 looks relatively extreme, in that the volatility parameter is quite large and the Feller condition is strongly violated. In this section we stress the impact of the volatility on the quality of the deterministic approximation of the drift adjustment. The CVA figures are shown with respect to correlation in Fig. 6.

Comparison between CIR and JCIR

Figure 7 provides the CVA as a function of $\rho$ for CIR and JCIR. For the sake of comparison, we also provide the results implied by the Gaussian copula (static resampling) approach. The idea behind the resampling method is to assume that $V_t$ and $\tau$ are linked via a given copula for any $t$. The Gaussian copula is particularly handy when the exposure is normally distributed at any point in time, $V_t \sim N(\mu(t), \sigma(t))$. To see this, notice first that $V_t$ has the same distribution as a Uniform random variable $U$ mapped through the quantile function $F^{-1}_{V_t}$ of $V_t$: $V_t \sim F^{-1}_{V_t}(U)$. As $G(\tau) \sim U$, one can parametrize $U$ as a function of $\tau$ using a Gaussian coupling scheme:
$U(\tau) := \Phi\left(\rho\,\Phi^{-1}(G(\tau)) + \sqrt{1-\rho^2}\,Z\right) \sim U,$
where $Z$ is a standard Normal random variable independent of $\tau$; this amounts to saying that $V_t$ and $\tau$ are linked via a Gaussian copula with constant correlation $\rho$. Hence, one can draw samples of $V_t$ conditionally upon $\tau = t$ by evaluating $F^{-1}_{V_t}$ at $U(t)$.
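The Gaussian-copula EPE can be sketched directly (our helper names; requires Python 3.8+ for `statistics.NormalDist`, and $|\rho| < 1$): shift the mean by $\rho\sigma(t)\Phi^{-1}(G(t))$, shrink the standard deviation by $\sqrt{1-\rho^2}$, and apply eq. (21).

```python
import math
from statistics import NormalDist

_N = NormalDist()  # standard Normal: pdf, cdf and inverse cdf

def copula_epe(mu, sigma, rho, G_t):
    """EPE(t) when V_t ~ N(mu, sigma) is tied to the default time by a Gaussian copula."""
    mu_r = mu + rho * sigma * _N.inv_cdf(G_t)      # conditional mean given tau = t
    sig_r = sigma * math.sqrt(1.0 - rho * rho)     # conditional standard deviation
    x = mu_r / sig_r
    return sig_r * _N.pdf(x) + mu_r * _N.cdf(x)    # eq. (21) with the conditional moments
```

Setting `rho = 0` recovers the independent EPE, while for a mostly surviving counterparty (G(t) close to 1) a positive `rho` shifts the conditional exposure, and hence the EPE, upwards.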
In the specific case where the exposure is Gaussian, $F^{-1}_{V_t}(x) = \mu(t) + \sigma(t)\Phi^{-1}(x)$, so that finally
$V_t|_{\tau=t} \sim F^{-1}_{V_t}(U(t)) = \mu(t) + \sigma(t)\rho\,\Phi^{-1}(G(t)) + \sigma(t)\sqrt{1-\rho^2}\,Z \sim N(\mu_\rho(t), \sigma_\rho(t)),$
where $\mu_\rho(t) := \mu(t) + \rho\sigma(t)\Phi^{-1}(G(t))$ and $\sigma_\rho(t) := \sigma(t)\sqrt{1-\rho^2}$. Using (21), the EPE associated with the Gaussian copula approach then takes the simple analytical form
$\mathrm{EPE}(t) = \sigma_\rho(t)\,\phi\left(\frac{\mu_\rho(t)}{\sigma_\rho(t)}\right) + \mu_\rho(t)\,\Phi\left(\frac{\mu_\rho(t)}{\sigma_\rho(t)}\right).$
We plot in Fig. 7 some CVA figures for CIR, JCIR and the Gaussian copula as a function of the correlation parameter $\rho$. Notice that the Gaussian copula figures are impacted by the choice of the CIR parameters, as they depend on the curve $G(t)$, which is assumed equal to $P^\lambda(0,t)$, itself a function of the parameters driving $\lambda$.

Impact of the discretization scheme and the deterministic approximation

We analyze here the impact of the discretization scheme, the time step $\delta$, as well as the choice of the deterministic approximation $\theta(s,t)$ of $\theta^t_s$, (19) or (18). One can see from Table 2 that the impact of the deterministic approximation of $\theta^t_s$ is lower than 1 basis point, except when the Feller condition is strongly violated due to a very large volatility (Set 4); in that case $h(t)$ and $\bar\lambda(t)$ can significantly differ for large $t$. It is not surprising to observe that the performance of the deterministic approximation deteriorates for large $\rho$ in such volatile cases. Observe that, similarly, the impact of the discretization scheme is typically limited to one basis point in all cases, except again for Set 4.

Remark 5. We can use either of the deterministic approximations $\theta(s,t)$ of $\theta^t_s$, as both $h(t)$ and $\bar\lambda(t)$ can easily be obtained in the case of the CIR++ dynamics. For instance,
$\bar\lambda(s) = \psi(s) + y_0 e^{-\kappa s} + \theta(1 - e^{-\kappa s}),$
where $\psi$ can be extracted from the market-implied curve $G$. Both deterministic approximations yield very similar results except in extreme scenarios.
Therefore, we restrict ourselves to showing the results related to the second approximation, replacing $\lambda_s$ by $h(s)$ as in (19).

Conclusion

Wrong-way risk is a well-known key driver of counterparty credit risk. In spite of its primary importance, however, it is frequently disregarded. The standard CVA formula provided in the Basel III report, for instance, does not propose a WWR framework. This is obviously a major shortcoming that may drastically underestimate the figures. Such a simplification is commonly justified by the lack of a better alternative for accounting for wrong-way risk in a sound (yet tractable) manner.

Monte Carlo methods (blue, average ± 2 standard deviations on 10 × 10k paths) (right). Profiles: 3Y Gaussian exposures with ν = 8% and CIR parameters given by Set 2 (top), and 15Y swap-type exposures with ν = 2.2% and CIR parameters given by Set 3 (bottom). In both cases, the JCIR arrival rate and mean jump size are given by α = γ = 10%.

Table 2: CVA figures (upfront in bps, rounded) for a Gaussian exposure with maturity 3Y and volatility ν = 8%. Methods WM(1) and WM(2) correspond to the drift-adjustment method with deterministic approximations (19) and (18), respectively. Methods MC(1) and MC(2) correspond to the full Monte Carlo method with discretization schemes (24) and (23), respectively. The three quotes per column respectively correspond to upfront CVA in bps for ρ = −0.8 (left), ρ = 0 (middle) and ρ = 0.8 (right). The confidence intervals have been generated from 10 sets of simulations featuring 10k paths each and correspond to the global average ± twice the empirical CVA standard deviation.

        δ       WM(1)        WM(2)        MC(1)               MC(2)
Set 3   0.01    6  18  40    6  18  40    7±1  18±1  37±1     7±0  18±1  37±2
        0.001                             6±1  18±0  37±1     7±1  18±1  36±2
Set 4   0.01    3  37  141   3  37  138   6±1  35±2  94±3     14±1  47±2  111±3
        0.001                             6±1  34±2  93±5     10±1  42±2  104±5
In this paper, a new methodology has been proposed to overcome the difficulties of modeling credit risk in a reduced-form setup for tackling WWR when pricing CVA. This method relies on a new equivalent measure called the wrong-way measure. The outcome is that the effect of WWR is embedded in a drift adjustment of the exposure process. This drift adjustment is a stochastic process that generally depends on the stochastic intensity. Consequently, the change-of-measure technique does not lead, strictly speaking, to a dimensionality reduction of the CVA pricing problem. Nevertheless, it is possible to avoid the simulation of the intensity process by approximating the drift adjustment by a deterministic function. In spite of its simplicity, numerical evidence shows that for a broad range of parameter values, the expected positive exposure profiles under WWR are very well approximated when replacing the intensity $\lambda_t$ by the hazard rate $h(t)$ or its expected value $\bar\lambda_t$ in the drift adjustment. Therefore, the approximation has a typically limited impact on CVA figures, providing arguably satisfactory estimations given the uncertainty on other key variables such as the recovery rate or the close-out value of the portfolio. Hence, the proposed setup drastically simplifies the management of WWR when pricing CVA.

Cox-Ingersoll-Ross (CIR) formulae

When $y$ is a CIR process, i.e. when $dy_t = \kappa(\theta - y_t)dt + \sigma\sqrt{y_t}\,dW^\lambda_t$, then $\lambda$ defined as $\lambda_t = y_t + \psi(t)$ is said to be a CIR++ process. The process $y$ is always non-negative and there are many circumstances where $\lambda$ remains positive too. One gets
$A^{CIR}(s,t) = \left(\frac{h\,e^{\frac{\kappa+h}{2}\tau}}{e^{h\tau}-1}\,B^{CIR}(s,t)\right)^{\frac{2\kappa\theta}{\sigma^2}}, \qquad A^{CIR}_t(s,t) = A^{CIR}(s,t)\,\frac{\kappa\theta(\kappa+h)}{\sigma^2}\left(1 - \frac{h\,e^{h\tau}}{h + \frac{\kappa+h}{2}(e^{h\tau}-1)}\right).$

Cox-Ingersoll-Ross with compound Poisson jumps (JCIR) formulae

Consider jump-diffusion dynamics like JCIR, $dy_t = \kappa(\theta - y_t)dt + \sigma\sqrt{y_t}\,dW^\lambda_t + dJ_t$, where $J_t$ is a pure-jump process.
A tractable setup is to consider $J_t$ to be a compound Poisson process with exponentially distributed jump sizes with mean $\gamma$ and jump rate $\alpha$. The process $\lambda$ resulting from a deterministic shift $\lambda_t = y_t + \psi(t)$ of this model is called JCIR++. Setting $\Psi(t) := \int_0^t\psi(u)du$, $\Psi(s,t) := \Psi(t) - \Psi(s)$, and using the affine property of $y$,
$P^\lambda(s,t) = A^y(s,t)\,e^{-B^y(s,t)y_s}\,e^{-(\Psi(t)-\Psi(s))} = A^y(s,t)\,e^{B^y(s,t)\psi(s)-\Psi(s,t)}\,e^{-B^y(s,t)\lambda_s}.$

In Figures 1 and 2, the left chart compares $\hat E^B[S_t] := N^{-1}\sum_{n=1}^N S_t(\omega_n)$ (dashed) with the theoretical expectation $P^\lambda(0,t)$ (solid). The middle plot exhibits the histogram of $\lambda$; one can check that, obviously, there are no negative samples. Finally, the right plot provides a comparison of $\hat{\bar\lambda}^n_t := \hat E^B[\lambda_t] := N^{-1}\sum_{n=1}^N\lambda_t(\omega_n)$ (dashed), the analytical counterpart $\bar\lambda_t = E^B[\lambda_t]$ (solid), as well as $h(t) = -\frac{d}{dt}\ln P^\lambda(0,t)$ (dotted). One can see by visual inspection that the approximations $\hat E^B[S_t] \approx P^\lambda(0,t)$ and $\hat E^B[\lambda_t] \approx \bar\lambda_t$ are relatively poor for δ = 1E-2 (top row). The bottom row provides the same plots but for δ = 1E-3. As expected, the fit improves when decreasing the time step, but the computation time explodes from 36 s (δ = 1E-2, top) to about 6 minutes (δ = 1E-3, bottom) on a standard laptop computer.

Figure 1: Statistics of samples generated via scheme (23), based on 300k paths with time step δ, for 5Y maturity with CIR parameters given by Set 2 in Table 1.

Figure 2: Statistics of samples generated via scheme (24), based on 300k paths with time step δ, for 5Y maturity with CIR parameters given by Set 2 in Table 1.

Figure 3: EPE⊥ (dashed) and EPE (solid) with CIR++ intensity for various correlation levels, from ρ = 80% (orange) to ρ = −80% (black) by steps of 20%. Full 2D Monte Carlo (top, 30k paths, δ = 1E-2) and WWR measure with deterministic drift adjustment (bottom). Parameters: ν = 8% (exposure) and y0 = h(0), σ = 12%, κ = 35%, θ = 12% (intensity).
Figure 4: EPE⊥ (dashed) and EPE (solid) for various correlation levels, from ρ = 80% (orange) to ρ = −80% (black) by steps of 20%. Full 2D Monte Carlo (top, 30k paths, δ = 2E-2) and deterministic drift adjustment (bottom). Parameters: γ = 0.1%, ν = 2.2% (exposure) and y0 = h(0), κ = 35%, θ = 12%, σ = 12% (intensity).

Figure 5: CVA figures with κ = 10%, θ = αy0%, y0 = 50 bps. Top row: Brownian exposure, T = 10Y with ν = 2.2%, σ = 0.8%. Bottom row: drift-inclusive Brownian bridge exposure, T = 15Y with ν = 8%, σ = 1%. Legend: CVA with change-of-measure technique (solid red), average of 10 full 2D Monte Carlo runs with 30k paths each (dotted blue) and corresponding confidence interval (2 times the standard deviation estimated from the sets of runs).

Figure 6: CVA figures for both change-of-measure and Monte Carlo methods (30k paths) for 15Y swap-type exposures, for various exposure volatilities and CIR parameters.

Figure 7: CVA figures for Gaussian copula (dotted cyan), deterministic drift adjustment (red) and

$B^{CIR}(s,t) = \frac{e^{h\tau}-1}{h + \frac{\kappa+h}{2}(e^{h\tau}-1)}, \qquad B^{CIR}_t(s,t) = e^{h\tau}\left(\frac{B^{CIR}(s,t)\,h}{e^{h\tau}-1}\right)^2, \qquad\text{where } h := \sqrt{\kappa^2 + 2\sigma^2}.$

$A^{JCIR}(s,t) = A^{CIR}(s,t)\cdots, \qquad B^{JCIR}(s,t) = B^{CIR}(s,t), \qquad B^{JCIR}_t(s,t) = B^{CIR}_t(s,t).$

Table 1: Sets of CIR parameters.

Set   y0 (bps)   κ     θ (bps)   σ     2κθ − σ²
1     300        2%    1610      8%    4E−5
2     350        35%   450       15%   0.9%
3     100        80%   200       20%   −0.8%
4     300        50%   500       50%   −20%

Table 2 (continued): rows for Sets 1 and 2.

        δ       WM(1)        WM(2)        MC(1)               MC(2)
Set 1   0.01    20 36 57     21 36 57     19±1  35±2  55±3    19±1  36±3  55±1
        0.001                             19±1  36±1  55±1    20±1  36±1  55±1
Set 2   0.01    19 40 72     19 40 72     18±0  40±1  69±3    18±1  40±1  69±2
        0.001                             18±1  40±1  69±2    18±0  40±2  69±2

Here, we assume that this corresponds to the risk-free price of the portfolio, which is the most common assumption, named "risk-free closeout", even though other choices can be made, such as replacement closeout; see for example [13], [14]. Note that other methodologies have been recently proposed for credit risk modeling and CVA pricing (see e.g. [29] and [24]), but they will not be considered here.
In our CVA context, this second condition amounts to saying that the portfolio Π is not allowed to explicitly depend on τ. For instance, it cannot contain corporate bonds whose reference entity is precisely the counterparty C. Equivalently, the processes V considered below can be seen as the stochastically discounted exposure V/B, setting then r ≡ 0 (B ≡ 1). Notice that many schemes are available for CIR. We restrict ourselves to presenting two of them, which exhibit decent performance in our CVA application and are able to deal with the non-Feller case.

= h(0), κ = 35%, θ = 12%, σ = 12% (intensity).

Appendix

Ornstein-Uhlenbeck (OU) formulae

The dynamics of OU (or Vasicek) intensities is given by the SDE $dy_t = \kappa(\theta - y_t)dt + \sigma\,dW^\lambda_t$, in which case $\lambda$ defined as $\lambda_t = y_t + \psi(t)$ is known as the Hull-White dynamics. This model is very popular for interest-rates modeling. Nevertheless, it is not appropriate for the modeling of stochastic intensities, as it is a Gaussian process and hence can take negative values. This inconsistency is revealed by our methodology: in the OU case, the numéraire is not almost surely positive. Hence, the resulting figures can be negative, which is of course impossible. Yet, the analytical expressions of the functions $A$, $B$, $A_t$ and $B_t$ involved in the drift adjustment are available. Setting $\tau := t - s$, one finds
$B^{OU}(s,t) = \frac{1 - e^{-\kappa\tau}}{\kappa}, \qquad B^{OU}_t(s,t) = e^{-\kappa\tau}.$

References

[1] A. Alfonsi. On the discretization schemes for the CIR (and other Bessel squared) processes. Technical report, CERMICS (Université Marne-la-Vallée), 2005.

[2] L. Ballotta, G. Fusai, and D. Marazzina. Integrated structural approach to counterparty credit risk with dependent jumps.
Technical report, Cass Business School, City University London (UK), 2015.

[3] T. Bielecki, M. Jeanblanc, and M. Rutkowski. Credit risk modeling. Technical report, Center for the Study of Finance and Insurance, Osaka University, Osaka (Japan), 2011.

[4] T. Bjork. Arbitrage Theory in Continuous Time. Oxford University Press, 2004.

[5] D. Brigo and A. Alfonsi. Credit default swaps calibration and option pricing with the SSRD stochastic intensity and interest rate model. Finance and Stochastics, 9:29-42, 2005.

[6] D. Brigo and I. Bakkar. Accurate counterparty risk valuation for energy-commodities swaps. Energy Risk, March 2009.

[7] D. Brigo, A. Capponi, and A. Pallavicini. Arbitrage-free bilateral counterparty risk valuation under collateralization and application to credit default swaps. Mathematical Finance, 24(1):125-146, 2014.

[8] D. Brigo, A. Capponi, A. Pallavicini, and V. Papatheodorou. Pricing counterparty risk including collateralization, netting rules, re-hypothecation and wrong-way risk. International Journal of Theoretical and Applied Finance, 16(2), 2013.
Counterparty risk for credit default swaps: Impact of spread volatility and default correlation. D Brigo, K Chourdakis, International Journal of Theoretical and Applied Finance. 1207D. Brigo and K. Chourdakis. Counterparty risk for credit default swaps: Impact of spread volatility and default correlation. International Journal of Theoretical and Applied Finance, 12(07):1007-1026, 2009. An exact formula for default swaptions pricing in the SSRJD stochastic intensity model. D Brigo, N El-Bachir, Mathematical Finance. 203D. Brigo and N. El-Bachir. An exact formula for default swaptions pricing in the SSRJD stochastic intensity model. Mathematical Finance, 20(3):365-382, 2010. Risk Neutral Pricing of Counterparty Risk. Risks Books. D Brigo, M Masetti, D. Brigo and M. Masetti. Risk Neutral Pricing of Counterparty Risk. Risks Books, 2005. Interest Rate Models -Theory and Practice. D Brigo, F Mercurio, SpringerD. Brigo and F. Mercurio. Interest Rate Models -Theory and Practice. Springer, 2006. Closeout convention tensions. Risk magazine. D Brigo, M Morini, D. Brigo and M. Morini. Closeout convention tensions. Risk magazine, December:86-90, 2011. Counterparty Credit Risk, Collateral and Funding. D Brigo, M Morini, A Pallavicini, WileyD. Brigo, M. Morini, and A. Pallavicini. Counterparty Credit Risk, Collateral and Funding. Wiley, 2013. Credit Calibration with Structural Models and Equity Return Swap valuation under Counterparty Risk. D Brigo, M Morini, M Tarenghi, Wiley/Bloomberg PressD. Brigo, M. Morini, and M. Tarenghi. Credit Calibration with Structural Models and Equity Return Swap valuation under Counterparty Risk, pages 457-484. Wiley/Bloomberg Press, 2011. Counterparty risk and contingent cds under correlation. Risk Magazine. D Brigo, A Pallavicini, D. Brigo and A. Pallavicini. Counterparty risk and contingent cds under correlation. Risk Magazine, February 2008. 
Bilateral counterparty risk valuation for interestrates products: impact of volatilities and correlations. D Brigo, A Pallavicini, V Papatheodorou, Technical reportD. Brigo, A. Pallavicini, and V. Papatheodorou. Bilateral counterparty risk valuation for interest- rates products: impact of volatilities and correlations. Technical report, 2011. Approximated momentmatching dynamics for basket-options pricing. Damiano Brigo, Fabio Mercurio, Francesco Rapisarda, Rita Scotti, Quantitative Finance. 41Damiano Brigo, Fabio Mercurio, Francesco Rapisarda, and Rita Scotti. Approximated moment- matching dynamics for basket-options pricing. Quantitative Finance, 4(1):1-16, 2004. C Dellacherie, P.-A Meyer, Probabilités et Potentiel -Espaces Mesurables. Hermann. C. Dellacherie and Meyer P.-A. Probabilités et Potentiel -Espaces Mesurables. Hermann, 1975. Sur la discrétisation et le comportementà petit bruit de l'EDS mutlidimensionelles dont les coéfficients sopntà dérivées singulières. A Diop, INRIA. PhD thesisA. Diop. Sur la discrétisation et le comportementà petit bruit de l'EDS mutlidimensionelles dont les coéfficients sopntà dérivées singulières. PhD thesis, INRIA, 2003. Exact sampling of jump-diffusions. K Giesecke, D Smelov, Operations Research. 6114K. Giesecke and D. Smelov. Exact sampling of jump-diffusions. Operations Research, 61(14):894- 907, 2013. Counterparty Credit Risk. J Gregory, Wiley FinanceJ. Gregory. Counterparty Credit Risk. Wiley Finance, 2010. CVA and wrong-way risk. J Hull, A White, Financial Analysts Journal. 685J. Hull and A. White. CVA and wrong-way risk. Financial Analysts Journal, 68(5):58-69, 2012. Conic martingales from stochastic integrals. M Jeanblanc, F Vrins, Mathematical Finance. To appear inM. Jeanblanc and F. Vrins. Conic martingales from stochastic integrals. To appear in Mathematical Finance, 2016. A comparison of biased simulation schemes for stochastic volatility models. R Lord, R Koekkoek, D Van Dijk, Quantitative Finance. 102R. Lord, R. 
Koekkoek, and D. Van Dijk. A comparison of biased simulation schemes for stochastic volatility models. Quantitative Finance, 10(2):177-194, 2010. Pricing counterparty risk at the trade level and credit value adjustment allocations. M Pykthin, D Rosen, Journal of Credit Risk. 64M. Pykthin and D. Rosen. Pricing counterparty risk at the trade level and credit value adjustment allocations. Journal of Credit Risk, 6(4):3-38, 2011. Stochastic Calculus for Finance vol. II -Continuous-time models. S E Shreve, SpringerS.E. Shreve. Stochastic Calculus for Finance vol. II -Continuous-time models. Springer, 2004. Modeling and hedging wrong way risk in CVA with exposure sampling. A Sokol, RiskMinds USA. Risk. A. Sokol. Modeling and hedging wrong way risk in CVA with exposure sampling. In RiskMinds USA. Risk, 2011. Wrong-way risk models: A comparison of analytical exposures. F Vrins, SubmittedF. Vrins. Wrong-way risk models: A comparison of analytical exposures. Submitted, 2016. Getting CVA up and running. Risk Magazine. F Vrins, J Gregory, F. Vrins and J. Gregory. Getting CVA up and running. Risk Magazine, October 2012.
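As a quick numerical companion to the OU appendix above (a sketch; all function names are ours, not from the paper): the closed-form factors B^OU and B_t^OU, plus exact-transition samples with the quoted parameters κ = 35%, θ = 12%, σ = 12%, illustrating that a Gaussian intensity indeed takes negative values.

```python
import math
import random

def B_OU(s, t, kappa):
    # Appendix formula: B^OU(s, t) = (1 - exp(-kappa * tau)) / kappa, tau = t - s.
    return (1.0 - math.exp(-kappa * (t - s))) / kappa

def B_OU_t(s, t, kappa):
    # Appendix formula: the partial derivative of B^OU w.r.t. t is exp(-kappa * tau).
    return math.exp(-kappa * (t - s))

def ou_exact_step(y, dt, kappa, theta, sigma, rng):
    # Exact OU (Vasicek) transition: y_{t+dt} | y_t is Gaussian with these moments.
    mean = theta + (y - theta) * math.exp(-kappa * dt)
    var = sigma * sigma * (1.0 - math.exp(-2.0 * kappa * dt)) / (2.0 * kappa)
    return mean + math.sqrt(var) * rng.gauss(0.0, 1.0)

rng = random.Random(42)
kappa, theta, sigma = 0.35, 0.12, 0.12
# 20000 exact draws of y_1 started from y_0 = theta; some will be negative.
samples = [ou_exact_step(theta, 1.0, kappa, theta, sigma, rng)
           for _ in range(20000)]
```

The negative draws in `samples` are precisely the inconsistency noted above: unlike CIR, an OU intensity is not almost surely positive.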
Isogeometric analysis in electronic structure calculations

Robert Cimrman (New Technologies Research Centre, University of West Bohemia, Univerzitní 8, 306 14 Plzeň, Czech Republic; [email protected])
Matyáš Novák (Institute of Physics, Academy of Sciences of the Czech Republic, Na Slovance, Prague, Czech Republic)
Radek Kolman (Institute of Thermomechanics, Academy of Sciences of the Czech Republic, Dolejškova 5, 182 00 Prague, Czech Republic)
Miroslav Tůma (Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, 182 07 Prague, Czech Republic)
Jiří Vackář (Institute of Physics, Academy of Sciences of the Czech Republic, Na Slovance, Prague, Czech Republic)
In electronic structure calculations, various material properties can be obtained by computing the total energy of a system as well as the derivatives of the total energy w.r.t. atomic positions. The derivatives, also known as Hellmann-Feynman forces, require, for practical computational reasons, that the discretized charge density and wave functions have continuous second derivatives in the whole solution domain. We describe an application of isogeometric analysis (IGA), a spline modification of the finite element method (FEM), to achieve the required continuity. The novelty of our approach is in employing the technique of Bézier extraction to add the IGA capabilities to our FEM-based code for ab-initio calculations of electronic states of non-periodic systems within the density-functional framework, built upon the open source finite element package SfePy. We compare FEM and IGA on benchmark problems and present several numerical results.
DOI: 10.1016/j.matcom.2016.05.011
PDF: https://arxiv.org/pdf/1601.00583v1.pdf
Corpus ID: 9689643
arXiv: 1601.00583
SHA1: 426b9516ec20fda2806451d89723f075ee37d408
Preprint submitted to Mathematics and Computers in Simulation, January 5, 2016 (arXiv, 4 Jan 2016).

Keywords: electronic structure calculation; density functional theory; finite element method; isogeometric analysis.

* Corresponding author: Robert Cimrman ([email protected]).
Introduction

Electronic structure calculations allow one to predict and understand material properties such as stable atomic arrangements, obtained by minimizing the total internal energy of a system of atoms, as well as to determine derived properties such as elasticity, hardness, and electric and magnetic properties. We are developing a real-space code [29] for electronic structure calculations based on:

• the density functional theory (DFT) [10, 22, 18, 23];
• the environment-reflecting pseudopotentials [28];
• a weak solution of the Kohn-Sham equations [15].

The code is based on the open source finite element package SfePy (Simple Finite Elements in Python, http://sfepy.org) [4], which is a general package for solving (systems of) partial differential equations (PDEs) by the finite element method (FEM), cf. [11, 27].

The key ability required for practical computations is the calculation of the Hellmann-Feynman forces (HFF), which correspond to the derivatives of the total energy w.r.t. atomic positions. The HFF can efficiently provide gradients in a gradient-based optimizer searching for the total energy minimum. A major hurdle to overcome in computing the HFF is the requirement that the discretized charge density and wave functions have continuous second derivatives in the whole solution domain; implementing a globally C2 continuous basis in FEM is not easy. Therefore we decided to enhance the SfePy package with another PDE discretization scheme, isogeometric analysis (IGA), see [6, 2].

IGA is a modification of FEM which employs shape functions of various spline types, such as B-splines, NURBS (non-uniform rational B-splines), T-splines [1], etc. It has been successfully employed for the numerical solution of various physical and mathematical problems, e.g. in fluid dynamics, diffusion, and other problems of continuum mechanics [6].
IGA has been reported to have excellent convergence properties when solving eigenvalue problems connected to free vibrations in elasticity [17], where the errors in the frequencies decrease in the whole frequency band with increasing approximation order, so the accuracy is very good even for high frequencies. It should be noted that dispersion and frequency errors are reported to decrease with increasing spline order [13]. Moreover, the IGA solution excellently approximates not only the eigenvalues in the full frequency spectrum, but also the eigenmodes. There are no optical modes as in higher-order FEM, where the errors in the higher frequencies grow rapidly with the approximation order and band gaps exist in the frequency band, see [7, 16, 12]. The Kohn-Sham equations are a highly nonlinear eigenvalue problem, so the above properties of IGA seem to further support our choice.

The drawbacks of using IGA, as reported also in [17], concern mainly the increased computational cost of the numerical integration and assembling. Also, because of the higher global continuity, the assembled matrices have more non-zero entries than the matrices corresponding to the C0 FEM basis. A comparison study of IGA and FEM matrix structures, the cost of their evaluation, and mainly the cost of direct and iterative solvers in IGA has been presented by [5] and [25].

Recently, the use of FEM and its variants in the electronic structure calculation context has been pursued by a growing number of groups, cf. [8], where hp-adaptivity is discussed, [20, 21], where spectral finite elements as well as hp-adaptivity are considered, or [19], where NURBS-based FEM is applied.

In the paper we first outline the physical problem of electronic state calculations in Section 2, then introduce the computational methods and their implementation in terms of both FEM and IGA in Section 3.
Finally, we present a comparison of FEM and IGA on a benchmark problem and show some numerical results in Section 4.

Calculation of electronic states

The DFT allows decomposing the many-particle Schrödinger equation into the one-electron Kohn-Sham equations. Using atomic units, they can be written in the common form

    [-(1/2) ∇² + V_H(r) + V_xc(r) + V̂(r)] ψ_i = ε_i ψ_i ,    (1)

which provides the orbitals ψ_i that reproduce, with the occupation weights n_i, the charge density ρ of the original interacting system, as

    ρ(r) = Σ_{i=1}^{N} n_i |ψ_i(r)|² .    (2)

V̂ is a (generally) non-local Hermitian operator representing the effective ionic potential for electrons; in the present case, within the pseudopotential approach, V̂ represents the core electrons, separated from the valence electrons, together with the nuclear charge. V_xc is the exchange-correlation potential describing the non-Coulomb electron-electron interactions; we use the local-density approximation (LDA) of this potential [18]. V_H is the electrostatic potential obtained as a solution to the Poisson equation, which has the charge density ρ on its right-hand side:

    ΔV_H = 4πρ .    (3)

Denoting the total potential V := V_H + V_xc + V̂, we can write, using Hartree atomic units,

    [-(1/2) ∇² + V(r)] ψ_i = ε_i ψ_i .    (4)

Note that the above eigenvalue problem is highly nonlinear, as the potential V depends on the orbitals ψ_i. Therefore an iterative scheme is needed, defining the DFT loop for attaining a self-consistent solution.

DFT loop

For the global convergence of the DFT iteration we use the standard algorithm outlined in Fig. 1. The purpose of the DFT loop is to find a self-consistent solution; essentially, we are seeking a fixed point of a function of V_hxc := V_H[ρ] + V_xc[ρ]. For this task, a variety of nonlinear solvers can be used.
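The self-consistency search just described is a fixed-point iteration; below is a schematic sketch (ours, not the SfePy code), with simple linear mixing in place of a quasi-Newton solver and a toy scalar map standing in for one full Kohn-Sham iteration. The names `scf_fixed_point`, `toy_dft` and the parameter `mix` are illustrative assumptions.

```python
def scf_fixed_point(dft_map, v0, mix=0.5, tol=1e-10, max_iter=200):
    """Seek v with dft_map(v) == v via a damped fixed-point iteration
    (linear mixing): v <- v + mix * (dft_map(v) - v)."""
    v = v0
    for it in range(max_iter):
        residual = dft_map(v) - v   # the quantity driven to zero
        if abs(residual) < tol:
            return v, it
        v += mix * residual
    raise RuntimeError("SCF loop did not converge")

# Toy stand-in for a single DFT iteration: a smooth map whose fixed point
# solves v * (1 + v**2) = 1.
def toy_dft(v):
    return 1.0 / (1.0 + v * v)
```

Here `scf_fixed_point(toy_dft, 0.0)` converges in a few tens of iterations; undamped iteration (`mix=1`) on the same map converges noticeably more slowly, which is why mixing and quasi-Newton updates matter in real DFT loops.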
We use Broyden-type quasi-Newton solvers applied to

    DFT(V_hxc^i) - V_hxc^i = V_hxc^{i+1} - V_hxc^i = 0 ,    (5)

where DFT denotes a single iteration of the DFT loop (Fig. 1: start from an initial ρ and V̂; assemble V = V_H[ρ] + V_xc[ρ] + V̂; solve (4) for ψ_i, ε_i; update ρ = Σ_i n_i |ψ_i|²; check convergence to self-consistency).

After the DFT loop convergence is achieved, the derived quantities, particularly the total energy, are computed. By minimizing the total energy as a function of the atomic positions, the equilibrium atomic positions can be found. Therefore the DFT loop itself can be embedded into an outer optimization loop, where the objective function gradients are the HFF.

Forces acting on atoms

The force acting on atom α at position τ_α is equal to the derivative of the total energy functional with respect to an infinitesimal displacement δτ_α of this atom:

    F^α = -δE/δτ_α .    (6)

Making use of the Hellmann-Feynman theorem, which relates the derivative of the total energy with respect to a parameter λ to the expectation value of the derivative of the Hamiltonian operator w.r.t. the same parameter,

    dE/dλ = <Ψ_λ| dĤ_λ/dλ |Ψ_λ> ,    (7)

within the density functional theory we can write

    F^α = F^α_{HF,es} - [Σ_i n_i δε_i - ∫ ρ(r) δV_eff(r) d³r] / δτ_α ,    (8)

where the first term is the electrostatic Hellmann-Feynman force and the second term is the "Pulay" force, also known as the "incomplete basis set" force, which contains the corrections that depend on the technical details of the calculation. The electrostatic Hellmann-Feynman force is given by the sum, over all the atoms β ≠ α, of the electrostatic forces between the charges of the atomic nuclei Z_α and Z_β, and by the force acting on the charge Z_α in the electric field of the charge density ρ:

    F^α_{HF,es} = Z_α d/dτ_α [ -Σ_{β≠α} Z_β / |τ_α - τ_β| + ∫ ρ(r) / |τ_α - r| d³r ] .    (9)

The effective potential

    V_eff = -Σ_α Z_α / |r - τ_α| + V_hxc(r)    (10)

used in the Pulay force term, together with the kinetic energy operator, forms the total energy (for more details see e.g.
[31], [30], [9]). Within the pseudopotential formalism, as was shown by Ihm, Zunger and Cohen [14], the electrostatic HF force (9) transforms into

    -Σ_{β≠α} d/dτ_α [ Z_α Z_β / |τ_α - τ_β| ] + Σ_l ∫ ρ_ion,l(|r - τ_α|) E_l(r) d³r ,    (11)

where ρ_ion,l is the "virtual ionic partial charge" derived from the l-component U_ps,l of the semilocal form of the pseudopotential,

    ρ_ion,l(r) = -(1/(4π)) ∇² U_ps,l(r) ,    (12)

and

    E_l(r) = -∇_r ∫ ρ_ps,l(r') / |r' - r| d³r' .    (13)

Here ρ_ps,l is the l-projected (via an integration over angles on a unit sphere) charge w.r.t. the atomic center α,

    ρ_ps,l(r') = Σ_i n_i ∫ ψ_i*(r') P̂_l^α ψ_i(r') dθ dφ ,    (14)

where P̂_l^α is the Legendre-polynomial-based projector onto the l-subspace w.r.t. site α. The continuity of the derivatives of the wave functions ψ_i up to the second order is a necessary condition for the validity of the derivation above; otherwise everything would become much more complicated and unsuitable for practical use in connection with the FEM/IGA approach (for more details see e.g. [31], [30], [9]).

Computational methods and their implementation

We denote by H¹(Ω) the usual Sobolev space of functions with L²-integrable derivatives, and set H¹_0(Ω) = {u ∈ H¹(Ω) | u = 0 on ∂Ω}. The eigenvalue problem (4) can be rewritten using the weak formulation: find functions ψ_i ∈ H¹(Ω) such that for all v ∈ H¹_0(Ω)

    ∫_Ω (1/2) ∇ψ_i · ∇v dV + ∫_Ω v V ψ_i dV = ε_i ∫_Ω v ψ_i dV + ∫_∂Ω (1/2) v (dψ_i/dn) dS .    (15)

If the solution domain Ω is sufficiently large, the last term can be neglected. The Poisson equation (3) has the following weak form:

    ∫_Ω ∇v · ∇V_H = 4π ∫_Ω ρ v .    (16)

Equations (15) and (16) then need to be discretized: the continuous fields are approximated by discrete fields with a finite set of degrees of freedom (DOFs) and a basis, typically piece-wise polynomial,

    u(r) ≈ u_h(r) = Σ_{k=1}^{N} u_k φ_k(r) for r ∈ Ω ,    (17)

where u is a continuous field (ψ, v, V_H in our equations), u_k, k = 1, 2, ..., N are the discrete DOFs and φ_k are the basis functions. From the computational point of view it is desirable that the basis functions have small support, so that the resulting system matrix is sparse.
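The Hellmann-Feynman identity (7) that underpins the force expressions above is easy to verify on a small model: for a parametrized Hermitian matrix H(λ) = H0 + λ·H1 with a normalized eigenvector ψ, dE/dλ equals ψᵀ H1 ψ. The matrices below are arbitrary illustrative choices of ours.

```python
import numpy as np

H0 = np.array([[2.0, 0.3], [0.3, -1.0]])   # fixed part of the toy Hamiltonian
H1 = np.array([[0.5, 0.2], [0.2, 0.1]])    # dH/dlambda

def ground_state(lmbda):
    # Lowest eigenpair of H(lambda) = H0 + lambda * H1 (eigh sorts ascending).
    w, v = np.linalg.eigh(H0 + lmbda * H1)
    return w[0], v[:, 0]

lam, h = 0.7, 1e-6
e_minus, _ = ground_state(lam - h)
e_plus, _ = ground_state(lam + h)
dE_numeric = (e_plus - e_minus) / (2.0 * h)   # finite-difference dE/dlambda

_, psi = ground_state(lam)
dE_hf = float(psi @ H1 @ psi)                 # <psi| dH/dlambda |psi>
```

The two numbers agree to finite-difference accuracy, with no derivative of the eigenvector needed; this is exactly what makes the HFF cheap once the orbitals are smooth enough.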
Finite element method

In the FEM, the discretization process involves the discretization of the domain Ω: it is replaced by a polygonal domain Ω_h that is covered by small non-overlapping subdomains called elements (e.g. triangles or quadrilaterals in 2D, tetrahedra or hexahedra in 3D), cf. [11, 27]. The elements form a FE mesh. The basis functions are defined as piece-wise polynomials over the individual elements, have a small support, and are typically globally C0 continuous. The discretized equations are evaluated over the elements as well, to obtain the local matrices or vectors that are then assembled into a global sparse system. The evaluation usually involves a numerical integration on a reference element and a mapping to the individual physical elements [11, 27]. The nodal basis of Lagrange interpolation polynomials or the hierarchical basis of Lobatto polynomials can be used in our code.

Isogeometric analysis

In IGA, the CAD geometrical description in terms of NURBS patches is used directly for the approximation of the unknown fields, without the intermediate FE mesh; the meshing step is removed, which is one of its principal advantages. A D-dimensional geometric domain can be defined by

    r(ξ) = Σ_{A=1}^{n} P_A R_{A,p}(ξ) = P^T R(ξ) ,    (18)

where ξ = {ξ_1, ..., ξ_D} are the parametric coordinates, P = {P_A}_{A=1}^{n} is the set of control points, R_{A,p}, A = 1, 2, ..., n are the NURBS solid basis functions and p is the NURBS solid degree.

If D > 1, the NURBS solid can be defined as a tensor product of univariate NURBS curves. First a mapping is defined, see [2], between the tensor-product space and the global indexing of the basis functions. Let i = 1, 2, ..., n, j = 1, 2, ..., m and k = 1, 2, ..., l; then

    Ã(i, j, k) = (l × m)(i − 1) + l(j − 1) + k .

If N_{i,p}(ξ), M_{j,q}(η) and L_{k,r}(ζ) are the univariate B-spline basis functions with degrees p, q and r, respectively, then, with A = Ã(i, j, k) and Â = Ã(î, ĵ, k̂),

    R_A^{p,q,r}(ξ, η, ζ) = L_{i,r}(ζ) M_{j,q}(η) N_{k,p}(ξ) w_A / [ Σ_{î=1}^{n} Σ_{ĵ=1}^{m} Σ_{k̂=1}^{l} L_{î,r}(ζ) M_{ĵ,q}(η) N_{k̂,p}(ξ) w_Â ]

are the NURBS solid basis functions (here R_A^{p,q,r} corresponds to R_{A,p} in (18)), and w_A are the weights (products of the univariate NURBS basis weights). Below we denote (ξ, η, ζ) by the vector ξ. The univariate B-spline basis functions are defined by a knot vector, i.e. a vector of non-decreasing parametric coordinates Ξ = {ξ_1, ξ_2, . . .
, ξ_{n+p+1}}, where ξ_A ∈ R is the A-th knot and p is the polynomial degree of the B-spline basis functions [24]. Then, for p = 0,

    N_{A,0}(ξ) = 1 if ξ_A ≤ ξ < ξ_{A+1}, and 0 otherwise.

For p > 0 the basis functions are defined by the Cox-de Boor recursion formula (with the convention 0/0 ≡ 0):

    N_{A,p}(ξ) = [(ξ − ξ_A)/(ξ_{A+p} − ξ_A)] N_{A,p−1}(ξ) + [(ξ_{A+p+1} − ξ)/(ξ_{A+p+1} − ξ_{A+1})] N_{A+1,p−1}(ξ) .

Note that it is possible to insert knots into a knot vector without changing the geometric or parametric properties of the curve, by computing the new set of control points with the knot insertion algorithm, see e.g. [2]. The continuity of the approximation does not change when inserting control points. The basic properties of the B-spline basis functions can be found in [24].

In IGA, the same NURBS basis that is used for the geometry description is used also for the approximation of the PDE solutions. For our equation (15) we have

    ψ(ξ) ≈ ψ_h(ξ) = Σ_{A=1}^{n} ψ_A R_{A,p}(ξ) ,  v(ξ) ≈ v_h(ξ) = Σ_{A=1}^{n} v_A R_{A,p}(ξ) ,    (19)

where ψ_A are the unknown DOFs, i.e. the coefficients of the basis in the linear combination, and v_A are the test function DOFs.

Complex geometries cannot be described by a single NURBS solid as outlined above, often called a NURBS patch: many such patches might be needed, and special care must be taken to ensure the required continuity along the patch boundaries and to avoid holes. Usually, the patches are connected using C0 continuity only, as the individual patches have open knot vectors [24]. However, on a single patch, the NURBS basis can be as smooth as needed for the HFF calculation: a degree p curve has p − 1 continuous derivatives if no internal knots are repeated, as follows from the B-spline basis properties [24]. The basis functions R_{A,p}, A = 1, ..., n on the patch are uniquely determined by the knot vector for each axis, and cover the whole patch. Due to our continuity requirements, only single-patch domains are considered in this paper. Also, we set w_A = 1, A = 1, . . .
, n, thus using a B-spline basis instead of the full NURBS basis, because our domain is simply a cube, see Section 4.

IGA implementation in SfePy

The code itself does not see the NURBS description at all. It is based on the observation that repeating a knot in the knot vector decreases the continuity of the basis in that knot by one. This can be done in such a way that the overall shape remains the same, but the "elements" appear naturally, given by the non-zero knot spans. The final basis, restricted to each of the elements, is formed by the Bernstein polynomials B, cf. [2]. The Bézier extraction process is illustrated in Fig. 2; the depicted basis corresponds to the second parametric axis of the domain shown in Fig. 3, see below. In [2], algorithms are developed that allow computing the Bézier extraction operator C for each element, such that the original (smooth) NURBS basis function R can be recovered from the local Bernstein basis B using R = CB. The Bézier extraction also allows the construction of the Bézier mesh. Several kinds of grids (or "meshes") can be constructed for a NURBS patch, see Fig. 3 (the thin blue lines are iso-lines of the NURBS parametrization). In our implementation, full Gauss quadrature rules with the 1D quadrature order r = p + 1 are used to integrate over the Bézier elements.

Numerical examples

In this section we show some results based on our initial tests with the IGA implementation.

Nitrogen atom benchmark

The nitrogen atom serves us as a benchmark problem. A cube domain with the size of 10 × 10 × 10 atomic units was used for all the computations. The discrete eigenvalue problem reads

    (K + V(ψ_i)) ψ_i = ε_i M ψ_i .    (20)

The following numbers of DOFs per cube side (including the DOFs fixed by boundary conditions) were used:

• FEM: 16, 19, 22, 28, 34, 40, 46, 52, 58, 64;
• IGA: 12, 14, 16, 18, 20, 22, 24, 26, 28.

This corresponds to:

• FEM: 5, 6, 7, 9, 11, 13, 15, 17, 19, 21 cubic elements per cube side;
• IGA: 9, 11, 13, 15, 17, 19, 21, 23, 25 Bézier elements per cube side.
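A minimal sketch (ours) of two ingredients defined in the previous section: the Cox-de Boor recursion with the 0/0 ≡ 0 convention, and the tensor-product index map Ã. As a by-product it reproduces the pairing listed above: an open uniform cubic knot vector with 9 non-zero spans ("Bézier elements") carries n = |Ξ| − p − 1 = 12 basis functions, i.e. 12 DOFs per cube side.

```python
def bspline_basis(knots, p, xi):
    """All n = len(knots) - p - 1 B-spline basis functions of degree p at xi,
    via the Cox-de Boor recursion (with the convention 0/0 := 0)."""
    # Degree 0: indicators of the half-open knot spans [knots[A], knots[A+1]).
    N = [1.0 if knots[A] <= xi < knots[A + 1] else 0.0
         for A in range(len(knots) - 1)]
    for q in range(1, p + 1):
        N_next = []
        for A in range(len(knots) - 1 - q):
            left = right = 0.0
            if knots[A + q] > knots[A]:          # skip 0/0 terms
                left = (xi - knots[A]) / (knots[A + q] - knots[A]) * N[A]
            if knots[A + q + 1] > knots[A + 1]:  # skip 0/0 terms
                right = ((knots[A + q + 1] - xi)
                         / (knots[A + q + 1] - knots[A + 1])) * N[A + 1]
            N_next.append(left + right)
        N = N_next
    return N

def tensor_index(i, j, k, m, l):
    # Global index A~(i, j, k) = (l * m)(i - 1) + l(j - 1) + k (1-based).
    return (l * m) * (i - 1) + l * (j - 1) + k

# Open uniform cubic knot vector with 9 non-zero spans on [0, 9].
p = 3
knots = [0] * p + list(range(10)) + [9] * p
n_dofs = len(knots) - p - 1
```

On this knot vector, `bspline_basis` returns 12 values; at any interior point they are non-negative, sum to one, and only p + 1 = 4 of them are non-zero (local support), which is what keeps the system matrices sparse despite the C2 continuity.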
The grids corresponding to the coarsest IGA approximation used are shown (in 2D projection) in Fig. 3. Because the difficulty of solving (20) depends not only on the number of DOFs, but also on the number of non-zeros in the sparse matrices, we show the convergence with respect to both of these parameters, see Figs. 4 and 5.

Examples of computed quantities

In physical simulations we are interested in other quantities besides the eigenvalues ε_i. As an example, we show the charge density and the orbitals ψ_i of the nitrogen atom in Fig. 6 for some of the grids used in the convergence study above. It shows that the electron states form into the shapes of spherical harmonics, even though no preliminary shape-anticipating assumption is made. Note that, depending on the grid resolution, the orientation of the orbitals without spherical symmetry (ψ_2, ψ_3, and ψ_4) changes.

To illustrate a more complex structure, we also present the distribution of the charge density ρ of the tetrafluoromethane molecule (CF_4) in Fig. 7. Even though the parametric grid used in this computation was only 20 × 20 × 20, with a C2 continuous B-spline basis, good results were obtained: our code correctly reproduced both the angles of the C-F branches and the inter-atomic distances, by minimizing the total energy. The latter allows a globally C2 continuous approximation of the unknown fields, which is crucial for computing the derivatives of the total energy w.r.t. atomic positions.

Conclusion

Numerical results comparing the FEM and IGA calculations were presented for the benchmark problem of the nitrogen atom. These results suggest significantly better convergence properties of IGA over FEM for our application, due to the higher smoothness of the approximation. This will be further studied on more complex substances, together with the implementation of the calculation of the total energy derivatives. Finally, other quantities that can be computed (the charge density and related orbitals) were illustrated using figures.
To alleviate the numerical quadrature cost, reduced quadrature rules have been proposed in the context of the Bézier extraction (see [26]: quadrature rules for quadratic and cubic splines in isogeometric analysis, Computer Methods in Applied Mechanics and Engineering 277 (2014) 1-45), which we plan to assess in the future.

Figure 1: DFT, iterative self-consistent scheme.

The code then loops over the Bézier elements and assembles the local contributions in the usual FE sense (cf. Fig. 2, right, and Fig. 3, right). The operator C is a function of the knot vectors only; it does not depend on the positions of the control points.

Figure 2: Left: NURBS basis of degree 3 that describes the second axis of the parametric mesh in Fig. 3. Right: the corresponding Bernstein basis with the Bézier elements delineated by vertical lines.

Several kinds of grids (or "meshes") can be constructed for a NURBS patch, as depicted in Fig. 3. The parametric mesh is simply the tensor product of the knot vectors defining the parametrization; the lines correspond to the knot vector values. The control mesh has vertices given by the NURBS patch control points and connectivity corresponding to the tensor-product nature of the patch. The Bézier mesh has been introduced above; its vertices are the control points of the individual Bézier elements.
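Since the per-element basis after Bézier extraction is the Bernstein one (Fig. 2, right), the quadrature choice mentioned earlier (full Gauss rules of 1D order r = p + 1 over the Bézier elements) can be sanity-checked on a single element; this is our own illustration, not SfePy code. For p = 3, a 4-point Gauss-Legendre rule is exact up to degree 7, so products of two cubic Bernstein polynomials (degree 6) integrate exactly; the closed form of the Bernstein "mass matrix" is the standard Beta-integral identity ∫₀¹ B_i^p B_j^p dt = C(p,i) C(p,j) / (C(2p,i+j) (2p+1)).

```python
import numpy as np
from math import comb

p = 3
nodes, weights = np.polynomial.legendre.leggauss(p + 1)  # r = p + 1 points
x = 0.5 * (nodes + 1.0)   # map from [-1, 1] to the reference element [0, 1]
w = 0.5 * weights

def bernstein(i, n, t):
    # Bernstein polynomial B_i^n(t) = C(n, i) t^i (1 - t)^(n - i).
    return comb(n, i) * t ** i * (1.0 - t) ** (n - i)

# Element "mass matrix" of the cubic Bernstein basis, by Gauss quadrature.
M_quad = np.array([[np.sum(w * bernstein(i, p, x) * bernstein(j, p, x))
                    for j in range(p + 1)] for i in range(p + 1)])

# The same matrix from the closed-form identity.
M_exact = np.array([[comb(p, i) * comb(p, j) / (comb(2 * p, i + j) * (2 * p + 1))
                     for j in range(p + 1)] for i in range(p + 1)])
```

The two matrices agree to machine precision, confirming that the r = p + 1 rule loses nothing on polynomial integrands of this kind.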
In our code we use the corner vertices of the Bézier elements to construct a topological Bézier mesh, which can be used for subdomain selection (e.g. of parts of the boundary where boundary conditions need to be applied), because its vertices are interpolatory, i.e., they lie in the NURBS domain or on its boundary.

Figure 3: From left to right: parametric mesh (tensor product of knot vectors), control mesh, Bézier mesh. The corner vertices of the Bézier mesh elements form the topological Bézier mesh.

The FEM approximation used a Lagrange polynomial basis of order three (tri-cubic Lagrange polynomials) on a uniform hexahedral mesh. The IGA approximation used degree 3 B-splines and a uniform knot vector in each parametric axis. The control points were also spaced uniformly, so that the placement of the basis in the physical space was not coarser in the middle of the cube, see Fig. 3.

We compared the IGA and FEM solution convergence for an increasing number of DOFs (grid size). The number of DOFs corresponds to the sizes of the matrices coming from the FEM- or IGA-discretized (15), formally written as (20); the convergence with respect to both the number of DOFs and the number of matrix non-zeros is shown in Figs. 4 and 5. The non-zeros are determined structurally by the compact support of each basis function, and the pattern (allocated space) is the same for both (K + V(ψ_i)) and M.

Figure 4: Convergence of the eigenvalues ε_1 and ε_2 w.r.t. the number of DOFs.

Figure 5: Convergence of the eigenvalues ε_1 and ε_2 w.r.t. the number of non-zeros of the matrices.

It should be noted that the converged values are neither the exact physical binding energies of the electrons nor the ionization energies. They are just the Kohn-Sham eigenvalues for the given particular problem under the given conditions and approximations, in our case with a relatively small physical domain size, since the aim was to quickly test the numerical properties of the different bases.
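A toy 1D analogue (ours, far simpler than the paper's 3D nitrogen benchmark) of such an eigenvalue convergence study: the generalized problem (K + V)ψ = ε M ψ with V = 0 for a particle in a box, −½ψ″ = εψ on (0, 1) with ψ(0) = ψ(1) = 0, discretized with linear finite elements; the exact eigenvalues are ε_n = n²π²/2 in these units, and the discrete ones converge to them from above at the O(h²) rate.

```python
import numpy as np

def box_eigenvalues(n_el, n_modes=3):
    """Lowest eigenvalues of -1/2 psi'' = eps * psi on (0, 1), psi(0) = psi(1) = 0,
    with linear FE: stiffness K (including the 1/2 factor) and consistent mass M."""
    h = 1.0 / n_el
    n = n_el - 1  # interior DOFs left after the Dirichlet conditions
    K = np.zeros((n, n))
    M = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 1.0 / h            # (1/2) * 2/h
        M[i, i] = 2.0 * h / 3.0      # (h/6) * 4
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -0.5 / h  # (1/2) * (-1/h)
            M[i, i + 1] = M[i + 1, i] = h / 6.0
    # Reduce K psi = eps M psi to a standard problem via Cholesky of M.
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T
    return np.linalg.eigvalsh(A)[:n_modes]
```

With 50 elements, `box_eigenvalues(50)` matches ε_1 = π²/2 to a few times 10⁻⁴ relative error; halving h reduces the error roughly fourfold, the quadratic rate expected for linear elements.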
However, note that both methods converge to exactly the same values, which confirms the numerical validity of the FEM/IGA calculation. The above results suggest that IGA converges to a solution both for a much smaller number of DOFs and for a smaller number of non-zeros in the matrices. This is in agreement with the initial prognosis that higher-order smooth basis functions can improve the convergence and accuracy of finite element electronic structure calculations. Note the non-oscillating convergence of the IGA values, in contrast to the oscillating convergence of the FEM values.

Figure 6: Nitrogen atom: iso-surfaces of the charge density ρ and of the orbitals ψ_i for several IGA grid sizes. The number of DOFs per axis is shown in the left-most column. The sizes of the individual images are in proportion. The color range shown in the top row is used in the corresponding column, except for the ψ_1 plot of the 24 DOFs/axis grid, where ψ_1 has the opposite sign than in the other rows; this is a side effect of the eigenvalue solver implementation and is physically insignificant.

Figure 7: Tetrafluoromethane molecule: iso-surfaces of the charge density ρ.

Our computer implementation, built upon the open source package SfePy and using a weak solution of the Kohn-Sham equations, supports computations both with the finite element basis and with the NURBS or B-spline basis of isogeometric analysis. Our implementation [3] uses a variant of IGA based on the Bézier extraction operators [2] that is suitable for inclusion into existing FE codes.

References

B 43 (1991) 6411-6422.

[1] Y. Bazilevs, V. M. Calo, J. A. Cottrell, J. A. Evans, T. J. R. Hughes, S. Lipton, M. A. Scott, T. W. Sederberg, Isogeometric analysis using T-splines, Comput. Methods Appl. Mech. Engrg.
199 (2010) 229-263. Isogeometric finite element data structures based on Bezier extraction of NURBS. M J Borden, M A Scott, J A Evans, T J R Hughes, Int. J. Numer. Meth. Engng. 87M. J. Borden, M. A. Scott, J. A. Evans, T. J. R. Hughes, Isogeometric finite element data structures based on Bezier extraction of NURBS, Int. J. Numer. Meth. Engng. 87 (2011) 15-47. Enhancing SfePy with isogeometric analysis. R Cimrman, Proceedings of the 7th European Conference on Python in Science. P. de Buyl, N. Varoquauxthe 7th European Conference on Python in ScienceR. Cimrman, Enhancing SfePy with isogeometric analysis, in: P. de Buyl, N. Varoquaux (eds.), Proceedings of the 7th European Conference on Python in Science (EuroSciPy 2014), 2014, pp. 65-72. URL http://arxiv.org/abs/1412.6407 SfePy -write your own FE application. R Cimrman, P. de Buyl315R. Cimrman, SfePy -write your own FE application, in: P. de Buyl, 315 Proceedings of the 6th European Conference on Python in Science. N. Varoquauxthe 6th European Conference on Python in ScienceN. Varoquaux (eds.), Proceedings of the 6th European Conference on Python in Science (EuroSciPy 2013), 2014, pp. 65-70. URL http://arxiv.org/abs/1404.6391 The cost of continuity: A study of the performance of isogeometric finite elements using 320 direct solvers. N Collier, D Pardo, L Dalcin, M Paszynski, V M Calo, Computer Methods in Applied Mechanics and Engineering. N. Collier, D. Pardo, L. Dalcin, M. Paszynski, V. M. Calo, The cost of continuity: A study of the performance of isogeometric finite elements using 320 direct solvers, Computer Methods in Applied Mechanics and Engineering 213-216 (2012) 353 -361. J A Cottrell, T J R Hughes, Y Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA. Chichester, West Sussex, U.K.John Wiley & SonsJ. A. Cottrell, T. J. R. Hughes, Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA, John Wiley & Sons, Chichester, West Sussex, U.K., 2009. 
Studies of refinement and continuity in isogeometric structural analysis. J A Cottrell, T J R Hughes, A Reali, Comput. Methods Appl. Mech. Engrg. 196J. A. Cottrell, T. J. R. Hughes, A. Reali, Studies of refinement and con- tinuity in isogeometric structural analysis, Comput. Methods Appl. Mech. Engrg. 196 (2007) 4160-4183. On the adaptive finite element analysis of the Kohn-Sham equations: Methods, algorithms. D Davydov, T D Young, P Steinmann, implemen- 330D. Davydov, T. D. Young, P. Steinmann, On the adaptive finite element analysis of the Kohn-Sham equations: Methods, algorithms, and implemen- 330 . International Journal for Numerical Methods in Engineering. tation, International Journal for Numerical Methods in Engineering (2015) n/a-n/a. Symmetrization of atomic forces within the full-potential linearized augmented-plane-wave method. A Di Pomponio, A Continenza, R Podloucky, J Vackář, Phys. Rev. B. 53A. Di Pomponio, A. Continenza, R. Podloucky, J. Vackář, Symmetrization of atomic forces within the full-potential linearized augmented-plane-wave method, Phys. Rev. B 53 (1996) 9505-9508. R M Dreizler, E K U Gross, Density Functional Theory. BerlinSpringer-VerlagR. M. Dreizler, E. K. U. Gross, Density Functional Theory, Springer-Verlag, Berlin, 1990. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis. T J R Hughes, Dover PublicationsMineola, New York, USAT. J. R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Dover Publications, Mineola, New York, USA, 2000. Finite element and NURBS approximations of eigenvalue, boundary-value, and initial-value problems. T J R Hughes, J A Evans, A Reali, Comput. Methods Appl. Mech. Engrg. 272T. J. R. Hughes, J. A. Evans, A. Reali, Finite element and NURBS approx- imations of eigenvalue, boundary-value, and initial-value problems, Com- put. Methods Appl. Mech. Engrg. 272 (2014) 290-320. 
Duality and unified analysis of discrete approximations in structural dynamics and wave propagation: Com-345 parison of p-method finite elements with k-method NURBS. T J R Hughes, A Reali, G Sangalli, Computer Methods in Applied Mechanics and Engineering. 197T. J. R. Hughes, A. Reali, G. Sangalli, Duality and unified analysis of dis- crete approximations in structural dynamics and wave propagation: Com- 345 parison of p-method finite elements with k-method NURBS, Computer Methods in Applied Mechanics and Engineering 197 (49-50) (2008) 4104- 4124. Momentum-space formalism for the total energy of solids. J Ihm, A Zunger, C M L , J. Phys. C: Solid State Phys. 12J. Ihm, A. Zunger, C. M. L., Momentum-space formalism for the total energy of solids, J. Phys. C: Solid State Phys. 12 (1979) 4409-4422. Self-consistent equations including exchange and correlation effects. W Kohn, L J Sham, Phys. Rev. 1404AW. Kohn, L. J. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. 140 (4A) (1965) A1133-A1138. Complex wavenumber Fourier analysis of the B-spline based finite element method. R Kolman, J Plešek, M Okrouhlík, Wave Motion. 51R. Kolman, J. Plešek, M. Okrouhlík, Complex wavenumber Fourier analysis of the B-spline based finite element method, Wave Motion 51 (2013) 348- 359. Isogeometric analysis of free vibration of simple shaped elastic samples. R Kolman, S V Sorokin, B Bastl, J Kopačka, J Plešek, Journal of the Acoustical Society of America. 1374R. Kolman, S. V. Sorokin, B. Bastl, J. Kopačka, J. Plešek, Isogeometric analysis of free vibration of simple shaped elastic samples, Journal of the Acoustical Society of America 137 (4) (2015) 2089-2100. R M Martin, Electronic Structure: Basic Theory and Practical Methods. Cambridge, New York, USACambridge University PressR. M. Martin, Electronic Structure: Basic Theory and Practical Methods, Cambridge University Press, Cambridge, New York, USA, 2005. 
B-splines and {NURBS} based finite element methods for Kohn-Sham equations. A Masud, R Kannan, Computer Methods in Applied Mechanics and Engineering. A. Masud, R. Kannan, B-splines and {NURBS} based finite element meth- ods for Kohn-Sham equations, Computer Methods in Applied Mechanics and Engineering 241-244 (2012) 112-127. Subquadratic-scaling subspace projection method for large-scale Kohn-Sham density functional theory calculations using 365 spectral finite-element discretization. P Motamarri, V Gavini, Phys. Rev. B. 90115127P. Motamarri, V. Gavini, Subquadratic-scaling subspace projection method for large-scale Kohn-Sham density functional theory calculations using 365 spectral finite-element discretization, Phys. Rev. B 90 (2014) 115127. Higher-order adaptive finite-element methods for Kohn-Sham density functional theory. P Motamarri, M R Nowak, K Leiter, J Knap, V Gavini, Journal of Computational Physics. 253P. Motamarri, M. R. Nowak, K. Leiter, J. Knap, V. Gavini, Higher-order adaptive finite-element methods for Kohn-Sham density functional theory, Journal of Computational Physics 253 (2013) 308-343. R G Parr, Y Weitao, Density-Functional Theory of Atoms and Molecules. 370R. G. Parr, Y. Weitao, Density-Functional Theory of Atoms and Molecules, 370 Pseudopotential methods in condensed matter applications. W E Pickett, Comp. Phys. Reports. 9W. E. Pickett, Pseudopotential methods in condensed matter applications, Comp. Phys. Reports 9 (1989) 115-198. The NURBS Book. L Piegl, W Tiller, Springer-VerlagNew York, New York, USA2nd ed.) ed.L. Piegl, W. Tiller, The NURBS Book, (2nd ed.) ed., Springer-Verlag, New York, New York, USA, 1995-1997. Isogeometric collocation: Cost comparison with Galerkin methods and extension to adaptive hierarchical {NURBS} discretizations. D Schillinger, J A Evans, A Reali, M A Scott, T J R Hughes, Computer Methods in Applied Mechanics and Engineering. 267D. Schillinger, J. A. Evans, A. Reali, M. A. Scott, T. J. R. 
Hughes, Isogeo- metric collocation: Cost comparison with Galerkin methods and extension to adaptive hierarchical {NURBS} discretizations, Computer Methods in Applied Mechanics and Engineering 267 (2013) 170 -232. . D Schillinger, S J Hossain, T J R Hughes, Reduced Bézier elementD. Schillinger, S. J. Hossain, T. J. R. Hughes, Reduced Bézier element An Analysis of the Finite Element Method. G Strang, G Fix, Wellesley-Cambridge Press414WellesleyG. Strang, G. Fix, An Analysis of the Finite Element Method, Wellesley- Cambridge Press, Wellesley, 2008, pp. 414. Adaptability and accuracy of all-electron pseudopo-385 tentials. J Vackář, A Šimůnek, Phys. Rev. B. 67125113J. Vackář, A.Šimůnek, Adaptability and accuracy of all-electron pseudopo- 385 tentials, Phys. Rev. B 67 (2003) 125113. of Prog. Theoretical Chem. and Phys., chap. Finite Element Method in Density Functional Theory Electronic Structure Calculations. J Vackář, O Čertík, R Cimrman, M Novák, O Šipr, J Plešek, Advances in the Theory of Quantum Systems in Chemistry and Physics. NetherlandsSpringer Nether-390 lands22J. Vackář, O.Čertík, R. Cimrman, M. Novák, O.Šipr, J. Plešek, Advances in the Theory of Quantum Systems in Chemistry and Physics, vol. 22 of Prog. Theoretical Chem. and Phys., chap. Finite Element Method in Den- sity Functional Theory Electronic Structure Calculations, Springer Nether- 390 lands, Netherlands, 2011, pp. 199-217. Fractional occupations and densityfunctional energies and forces. M Weinert, J W Davenport, Phys. Rev. B. 45M. Weinert, J. W. Davenport, Fractional occupations and density- functional energies and forces, Phys. Rev. B 45 (1992) 13709-13712. All-electron and pseudopotential force calculations using the linearized-augmented-plane-wave method. R Yu, D Singh, H Krakauer, Phys. Rev. R. Yu, D. Singh, H. Krakauer, All-electron and pseudopotential force cal- culations using the linearized-augmented-plane-wave method, Phys. Rev.
Topology of generic holomorphic foliations on Stein manifolds: structure of leaves and Kupka-Smale property

Tanya Firsova

arXiv:1105.2019 (https://arxiv.org/pdf/1105.2019v1.pdf)

Abstract. We study the topology of leaves of 1-dimensional singular holomorphic foliations of Stein manifolds. We prove that for a generic foliation all leaves, except for at most countably many, are contractible; the rest are topological cylinders. We show that a generic foliation is complex Kupka-Smale.
10 May 2011

Introduction

Consider the system of differential equations

x_1' = f_1(x_1, . . . , x_n),
. . .
x_n' = f_n(x_1, . . . , x_n),        (1)

where (x_1, . . . , x_n) ∈ C^n and f_1, . . . , f_n ∈ O(C^n). The phase space C^n, outside the singular locus, is foliated by Riemann surfaces. It is a natural question: what is the topological type of these leaves? For polynomial foliations of fixed degree this question was asked by Anosov and still remains unsolved. In general, the answer can be quite complicated. Consider, for example, a Hamiltonian foliation of C^2: H_n = const, where H_n is a generic polynomial of degree n. All non-singular leaves are Riemann surfaces with (n-1)(n-2)/2 handles and n punctures. There are examples of foliations with dense leaves having infinitely generated fundamental groups [18]. So one can restrict the question: what is the topological type of leaves for a generic foliation? Genericity here is understood as follows: the space of holomorphic foliations can be naturally equipped with the (Baire) topology of uniform convergence on nonsingular compact sets. We recall the definition of this topology in Appendix 5.4. We call a foliation generic if it belongs to a residual set, i.e. an intersection of countably many open everywhere dense sets. In this paper we describe the topological type of leaves for generic foliations on C^n and, more generally, on arbitrary Stein manifolds.
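The handle count for the Hamiltonian example above is an instance of the genus-degree formula; a quick check (a standard fact recalled for context, not part of the paper's argument):

```latex
% A smooth projective plane curve of degree n has genus
g \;=\; \frac{(n-1)(n-2)}{2}.
% A generic affine level set \{H_n = c\} \subset \mathbb{C}^2 compactifies to
% such a curve, which meets the line at infinity in n points; removing them
% leaves a surface with (n-1)(n-2)/2 handles and n punctures, as stated.
% E.g. n = 3: g = 1 with 3 punctures; n = 4: g = 3 with 4 punctures.
```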
We prove the following theorem:

Theorem 1.1. For a generic 1-dimensional singular holomorphic foliation on a Stein manifold X all leaves, except for at most countably many, are contractible; the rest are topological cylinders.

We consider foliations with singular locus of codimension 2, i.e. foliations locally determined by holomorphic vector fields [15]. Our technique is applicable in a more general setting. In particular, we establish the analog of the Kupka-Smale theorem for generic foliations on Stein manifolds:

Theorem 1.2. A generic 1-dimensional singular holomorphic foliation on X is complex Kupka-Smale.

If a cycle γ is a phase curve of a real vector field, then γ is a loop on the phase curve of the complexified vector field. A complex cycle, by definition, is a free homotopy class of loops on a leaf of a foliation. Recall that, by definition, a real Kupka-Smale vector field has hyperbolic cycles only; condition (2) is a generalization of this property. We review the notions of complex hyperbolicity and invariant manifolds in the Appendix.

The above definition was suggested by Marc Chaperon in [4]. In this preprint he studies holomorphic 1-dimensional singular foliations on Stein manifolds. He shows that property (1) holds for generic foliations. He also gives the proof of property (3) for generic foliations on C^n and states the result for generic foliations on Stein manifolds. Our technique also allows us to prove transversality results for strongly invariant manifolds of the same singular point: …

Theorems 1.1 and 1.2 for foliations of C^2 are proved in [7]. Golenishcheva-Kutuzova [10] showed that for a generic foliation countably many cylinders do exist. We expect that for a generic singular holomorphic 1-dimensional foliation of a Stein manifold there are countably many cylinders. The conformal type of leaves of a generic polynomial foliation of fixed degree was described by Candel and Gomez-Mont [6].
The result was later improved by Lins Neto [17] and Glutsyuk [9]:

Theorem 1.4 ([9], [17]). Any leaf of a generic polynomial foliation of degree n is hyperbolic.

We expect that the same answer holds for generic foliations of Stein manifolds; the technique from [6], [17], [9] can be adjusted to attack the problem. See the paper [14] for a vast discussion of open problems. Greg Buzzard studied similar genericity questions for polynomial automorphisms of C^n. He proved that a generic polynomial automorphism of C^n is Kupka-Smale [2].

(… our strategy of simultaneous elimination of degeneracies.) Therefore, once all non-isolated cycles are removed, all leaves, except for countably many, are contractible. To prove that the rest have fundamental group Z, one needs to eliminate all degeneracies from the following list:

1. two cycles that belong to the same leaf of the foliation and are not multiples of the same cycle in the homology group of the leaf;
2. saddle connections;
3. cycles on a separatrix that are not multiples of the cycle around the critical point.

Recall that a separatrix is a leaf that can be holomorphically extended into a singular point, and a saddle connection is a common separatrix of two singular points. In the smooth category one can remove a degeneracy of the foliation locally; say, one can destroy a homoclinic loop by changing the foliation only in a flow-box around a point on the loop. In the holomorphic category, a priori, one cannot perturb a foliation in a flow-box without changing the foliation globally. Our strategy to remove degeneracies in the holomorphic category is the following. In Section 2 we construct a family of foliations that eliminates a degeneracy in a neighborhood of the degenerate object, rather than in a flow-box around a point. A non-isolated cycle and a nontrivial pair of cycles are examples of degenerate objects. We give the complete list of degenerate objects in Section 2. All degenerate objects we consider are curves.
Our technique allows us to construct an appropriate family only if the degenerate object is holomorphically convex. We expect, though, that it should be possible to do this for any degenerate object. In [7] our approach to constructing a family of local foliations in a neighborhood of a degenerate object was to control the derivative of the holonomy map along the leaf with respect to a perturbation. This approach cannot be adapted to remove a non-transversal intersection of strongly invariant manifolds: one cannot choose a leaf-wise path that connects the singular points with the point of non-transversal intersection, and therefore one cannot control the intersection of the invariant manifolds. In this paper we use a different, more geometric approach. First, we reglue the neighborhood (Subsection 2.3). Then we project the obtained manifold, together with a new foliation, to the original one. We use the theorem of Siu [19], which states that a Stein subvariety has a Stein neighborhood, to construct the projection. We give a review of results on holomorphic hulls of collections of curves in Section 3 and apply them to give geometric conditions for a degenerate object to be holomorphically convex. We also review the relevant results of approximation theory on Stein manifolds and apply them to pass from a local family of foliations in a neighborhood of a degenerate object to a global one. When we remove a degenerate object, e.g. a complex cycle, we do not control the foliation outside a neighborhood of the degenerate object. Therefore, it might happen that eliminating one degenerate object creates many others in different places. We solve this problem as follows. We find a countable number of places where degenerate objects can be located. For each such location we prove that the complement to the set of foliations which have a degenerate object at this particular location is open and everywhere dense.
Then we intersect these sets and get a residual set of foliations without holomorphically convex degenerate objects. We show that if a foliation has a degenerate object, then it has a holomorphically convex degenerate object; therefore, the residual set constructed has no degenerate objects at all. We describe this strategy in detail in Section 4. It was previously used in [7] and [11].

We give background information on Stein manifolds in the Appendix to make the paper accessible to specialists working in Dynamical Systems. There is also background information on holomorphic hulls and complex foliations in the Appendix. We also review facts on the multiplicity of analytic sets.

Acknowledgements. The author is grateful to Yulij Ilyashenko for the statement of the problem, numerous discussions and useful suggestions. We are also grateful to Igors Gorbovickis for proofreading an earlier version of the manuscript.

2 Local removal of degenerate objects.

List of degenerate objects. As we pointed out in the introduction, one cannot eliminate a homoclinic saddle connection by changing the foliation only locally in a flow-box. Rather, one needs to perturb the foliation in a neighborhood of the separatrix loop. This leads us to considering degenerate objects, which we list below. One can check that if a foliation has no degenerate objects of types 1-5, then it satisfies Theorem 1.1. If all singular points of a foliation are complex hyperbolic and it has no degenerate objects of types 1-6, 8-9, then it is Kupka-Smale. If all singular points of a foliation are complex hyperbolic and the foliation has no degenerate objects of type 7, then it satisfies Theorem 1.3.

Definition 2.1. We say that γ is a degenerate object of a foliation F if γ is:

1. A non-trivial loop on a leaf L of F which is a representative of a non-hyperbolic cycle.

2. A union of loops γ_1, γ_2 that belong to the same leaf L of F.
We assume γ_1 and γ_2 are not multiples of the same cycle. Moreover, γ_1, γ_2 are hyperbolic. (See Fig. 1.)

Figure 1: A pair of cycles.

3. A path on a saddle connection that connects two different hyperbolic singular points a_1 and a_2. (See Fig. 2.)

4. A loop γ on a homoclinic saddle connection S (see Fig. 2):
• a is a hyperbolic singular point;
• S_1, S_2 are local separatrices of the singular point a; S_1 ≠ S_2; S_1, S_2 ⊂ S;
• γ ⊂ S passes through the singular point a, starts at S_1, ends along S_2.

6. A union γ_1 ∪ γ_2:
• a_1, a_2 are hyperbolic singular points of the foliation F;
• M_1 and M_2 are strongly invariant manifolds of a_1 and a_2 correspondingly;
• p is a point of non-transversal intersection of M_1 and M_2;
• γ_1 ⊂ M_1 and γ_2 ⊂ M_2 are paths that connect a_1 and a_2 with the point p;
• (γ_1 ∪ γ_2)\(M_1^loc ∪ M_2^loc) ⊂ L, where L is a leaf of the foliation F.

7. A loop γ_1 ∪ γ_2 (see Fig. 5):
• a is a hyperbolic singular point of the foliation F;
• M_1 and M_2 are strongly invariant manifolds of the point a;
• M_1^loc ∩ M_2^loc = a;
• paths γ_1 ⊂ M_1, γ_2 ⊂ M_2 connect a with p;
• (γ_1 ∪ γ_2)\(M_1^loc ∪ M_2^loc) ⊂ L, where L is a leaf of the foliation F.

8. A union γ_1 ∪ γ_2 ∪ γ_3 ∪ γ_4:
• γ_1, γ_2 are hyperbolic loops on leaves of F;
• M_1, M_2 are invariant manifolds of γ_1, γ_2 correspondingly;
• γ_3 ⊂ M_1, γ_4 ⊂ M_2 are paths that connect points on γ_1, γ_2 with a point of non-transversal intersection of M_1, M_2;
• (γ_3 ∪ γ_4)\(M_1^loc ∪ M_2^loc) ⊂ L, where L is a leaf of F.

9. A union γ = γ_1 ∪ γ_2 ∪ γ_3:
• γ_1 is a hyperbolic loop on a leaf;
• M_1 is an invariant manifold of γ_1;
• a is a hyperbolic singular point;
• M_2 is a strongly invariant manifold of a;
• γ_2 ⊂ M_1, γ_3 ⊂ M_2 are paths on the invariant manifolds that connect a point on γ_1 and the point a, correspondingly, with the point of non-transversal intersection of M_1 and M_2;
• (γ_2 ∪ γ_3)\(M_1^loc ∪ M_2^loc) ⊂ L, where L is a leaf of the foliation F.
Local Removal Lemma. In this section we find a neighborhood of a degenerate object and a family of holomorphic foliations in this neighborhood that eliminates the degenerate object there. Our technique allows us to do that only if the degenerate object is holomorphically convex. We expect, though, that it should be possible to carry this out for any collection of smooth enough curves.

Let U be a neighborhood of the degenerate object. First, we allow not only the foliation but the neighborhood itself to change with the parameter λ. We get a family of foliations F_λ on manifolds U_λ. Then we find a way to 'project' U_λ to some neighborhood of the degenerate object. Thus, we produce a family of foliations in the neighborhood of the degenerate object that breaks it. The following lemma summarizes the results of the next two subsections.

Let γ be a union of curves on a Stein manifold X endowed with a foliation F_0. Assume γ is holomorphically convex. Fix a point p ∈ γ and assume that p ∉ Σ(F). Assume that in a neighborhood of p the curve γ belongs to a leaf L of F_0. Let α ⊂ γ be a small arc, a neighborhood of p on γ. One can fix coordinates (z_1, . . . , z_{n-1}, t) in a neighborhood of the point p so that t is a coordinate along the foliation. Consider the flow-box Π = {(z, t) : |z| < 1, t ∈ U(α)}, where U(α) is a neighborhood of the arc α on the leaf L. Take a pair of points q_1, q_2 ∈ γ\α that lie on different sides of α and belong to the flow-box Π. Let T_1, T_2 be transversal sections to F_0 that pass through q_1, q_2. The functions (z_1, . . . , z_{n-1}) work as coordinates on T_1, T_2.

Figure 6: γ together with its neighborhood.

Let Φ_λ be a family of germs of biholomorphisms Φ_λ : (C^{n-1}, 0) → (C^{n-1}, 0), holomorphic in λ, with Φ_0 = Id.

Lemma 2.1.
There exist a neighborhood Ũ of γ that retracts to γ, with Π ∩ Ũ = {(z, t) : |z| < ε, t ∈ U(α)}, and a family of foliations F_λ on Ũ, depending holomorphically on λ, satisfying the following conditions:

1. In Ũ\Π, F_λ is biholomorphic to F_0. More precisely, there exists a family of maps π_λ : Ũ\Π → X, holomorphic in λ, which are biholomorphisms onto their images, such that π_λ maps the leaves of F_0 to the leaves of F_λ, and π_0 = Id;

2. The holonomy map inside the flow-box along the foliation F_λ between T_1 and T_2 is biholomorphically conjugate to Φ_λ; more precisely, in the coordinates (z_1, . . . , z_{n-1}) on T_1, T_2 it is (π_λ^z)^{-1} ∘ Φ_λ ∘ π_λ^z, where π_λ^z and (π_λ^z)^{-1} are the first (n-1) coordinates of π_λ and π_λ^{-1} correspondingly.

This lemma mimics the smooth case, where one can perturb the foliation only in the flow-box. In the holomorphic case this is not possible; therefore, we need to adjust everything by the map π_λ.

Regluing. We weaken the restriction on the curve γ for this section: we do not assume it to be holomorphically convex. We start by constructing the manifolds U_λ. They are obtained by regluing U in a flow-box around a point p. First, we describe the procedure informally and point out the technical difficulties that arise. Then we repeat the description paying attention to these difficulties.

We take a neighborhood U that can be retracted to γ. Let Û be the complex manifold obtained from U by doubling the preimage under the retraction of a small arc containing p. One can assume that the preimage of this small arc is a flow-box. So Û comes with the natural projection Û → U, which is one-to-one everywhere except for the two flow-boxes around the preimages of p, which are glued together by the identity map. U_λ is obtained from Û by gluing the points in the flow-boxes using the map (Φ_λ, Id). The problem is that (Φ_λ, Id) is not an isomorphism from the flow-box to itself. Thus, extra caution is needed to make U_λ Hausdorff.
In the rest of the section we describe these precautions. First, we choose a bigger neighborhood W that can be retracted to γ. Let ρ denote the retraction. Let Ŵ be a connected complex manifold that projects one-to-one on U\ρ^{-1}(α) and two-to-one on ρ^{-1}(α). Let π_1^{-1}, π_2^{-1} be the two inverses of the projection Ŵ → W, restricted to the preimage of ρ^{-1}(α) ⊂ W. Let V denote a flow-box around the point p in W. We assume V ⊂ ρ^{-1}(α), and take V small enough so that (Φ_λ, Id) is a well-defined map on V and is a biholomorphism onto its image. Let V_1 = π_1^{-1}(V), V_2 = π_2^{-1}(V). Let T_c ⊂ W be the tube of points that are at distance c from the preimage of γ\α, and let T̂_c ⊂ Ŵ be the tube of points that are at distance c from the preimage of γ\α. Take c small enough. Take U = T_c ∪ V and Û = V_1 ∪ V_2 ∪ T̂_c. Note that U can be obtained from Û by gluing the points from V_1 and V_2 that project to the same point in W. Let V_2^λ = π_2^{-1}((Φ_λ, Id)(V)) and Û_λ = V_1 ∪ T̂_c ∪ V_2^λ. Then U_λ is the space obtained from Û_λ by gluing V_1 and V_2^λ by the map (Φ_λ, Id). The space U_λ inherits a complex structure; if one takes c and λ small enough, then it is also Hausdorff.

We also consider the total space of the reglued manifolds:

Û = {(u, λ) ∈ Ŵ × Λ | u ∈ V_1 ∪ T̂_c ∪ V_2^λ, λ ∈ Λ},
U = Û/∼, where (u, λ) ∼ ((Φ_λ, Id)(u), λ) for u ∈ V_1, λ ∈ Λ.

Projection. Siu's Theorem. In this subsection we prove that for small enough λ one can take a small neighborhood of γ in U_λ and project it biholomorphically to a neighborhood of γ in U. Assume γ is holomorphically convex. One can choose a neighborhood U_1 of γ, U_1 ⊂ U, such that U_1 is an analytic polyhedron, and therefore a Stein manifold ([8]). By the theorem formulated below there is a Stein neighborhood Ũ of U_1 in U.

Theorem 2.1 ([19]). Suppose X is a complex space and A is a subvariety of X. If A is Stein, then there exists an open neighborhood Ω of A in X such that Ω is Stein.

Fix an embedding of Ũ into C^N.
We will need the following lemma:

Lemma 2.2. There exists a linear (N-n)-subspace α ⊂ C^N such that the affine subspaces α_x ⊂ C^N parallel to α, passing through the points x ∈ γ:
a) are transverse to U;
b) pass through only one point on γ.

Proof: The set of all (N-n)-subspaces of C^N is the n(N-n)-dimensional complex manifold Gr(N-n, N). The elements of Gr(N-n, N) that are not transverse to a given subspace of complementary dimension form a codimension 1 complex (singular) subvariety. The path γ is a 1-dimensional real manifold; therefore, the subspaces that do not satisfy (a) form a subvariety of Gr(N-n, N) of real codimension 1. Pairs of points on γ form a real 2-dimensional manifold, and the linear subspaces parallel to those passing through two given points in C^N form an n(N-n-1)-dimensional manifold. Therefore, the subspaces that do not satisfy (b) form a submanifold of Gr(N-n, N) of real codimension 2(n-1). Since n ≥ 2, an (N-n)-subspace α that satisfies conditions (a) and (b) exists.

Proof of Lemma 2.1: Take α that satisfies Lemma 2.2. Let π̂_λ be the projection along α from a neighborhood Ũ of γ in U to U_λ. One can take Ũ small enough so that π̂_λ : Ũ → U_λ is a biholomorphism onto its image for all small λ. Let i_λ : U\ρ^{-1}(α) → U_λ be the identity map. It is easy to see that π_λ = π̂_λ^{-1} ∘ i_λ is the required map.

Removal of a holomorphically convex degenerate object. A degenerate object is removed by a small perturbation if, roughly speaking, in some neighborhood of the object there are no degenerate objects of the same kind for the perturbed foliations. Let γ be a degenerate object of a foliation F_0 on a manifold X. We say that F_λ is a local holomorphic family for γ if there exists a neighborhood U of γ such that F_λ is well-defined in U for all λ ∈ Λ, 0 ∈ Λ, and F_λ depends holomorphically on λ.

Theorem 2.2. Let γ be a holomorphically convex degenerate object of a foliation F_0.
Then there exists a local holomorphic family of foliations F_λ that removes γ.

In the following subsection we rigorously define what it means for a degenerate object to be removed in a local holomorphic family of foliations. We also prove Theorem 2.2 for the different types of degenerate objects.

Removal of a non-hyperbolic cycle.

Definition 2.2. Let γ be a non-hyperbolic cycle of a foliation F_0. We say that it is removed in a local holomorphic family of foliations F_λ if:

1. there is a transversal section T to the foliation F_0 at a point p ∈ γ such that the holonomy maps ∆_γ^λ : D_r → T along γ for the foliation F_λ are well-defined for λ ∈ Λ, where D_r ⊂ T is the disk of radius r centered at the point p;

2. for all λ ∈ Λ\R, ∆_γ^λ has a unique fixed point on D_r, where R is a (possibly empty) one-dimensional real-analytic set. Moreover, this fixed point is hyperbolic.

Proof of Theorem 2.2 for type 1: Take a point p ∈ γ and a transversal section T to F, p ∈ T. Let ∆_γ : (T, p) → (T, p) be the corresponding holonomy map. The cycle γ is hyperbolic by definition if and only if none of the eigenvalues of ∆_γ lie on the unit circle. First, we provide a specific perturbation of ∆_γ that has hyperbolic fixed points only. The following lemma is a standard fact:

Lemma 2.3. There exist a diagonal n × n matrix D and a ∈ C^n such that the map ∆_γ(z) + λ(Dz + a) is well-defined and has hyperbolic fixed points only, for all λ ∈ V\R, where V is a neighborhood of 0 and R is a (possibly empty) 1-dimensional real-analytic set, 0 ∈ R.

Take a, D such that Lemma 2.3 is satisfied. Apply Lemma 2.1 to the cycle γ, the point p and the family of biholomorphisms Φ_λ = Id + λ(Dz + a). The map ∆_γ^λ = π_λ^{-1} ∘ (∆_γ + λ(Dz + a)) ∘ π_λ is the holonomy map along γ for the foliation F_λ. For all λ outside a (possibly empty) one-dimensional real-analytic set R, the map ∆_γ^λ has hyperbolic fixed points only on T.
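A toy one-dimensional instance of Lemma 2.3 (our illustrative choice of ∆_γ, D, a; not an example from the paper):

```latex
% Take \Delta_\gamma(z) = z (every point is a non-hyperbolic fixed point),
% D = 1, a = 1. Then
f_\lambda(z) \;=\; \Delta_\gamma(z) + \lambda(Dz + a) \;=\; z + \lambda(z+1)
% has, for \lambda \ne 0, the unique fixed point z_* = -1 with multiplier
f_\lambda'(z_*) \;=\; 1 + \lambda ,
% which lies off the unit circle exactly for \lambda outside
R \;=\; \{\lambda : |1+\lambda| = 1\},
% a one-dimensional real-analytic set through 0, as in the lemma.
```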
Splitting cycles to different leaves. Let γ = γ_1 ∪ γ_2 be a degenerate object of type 2.

Definition 2.3. We say that γ is removed in a holomorphic family of foliations F_λ, λ ∈ Λ, if:

1. there is a transversal section T to the foliation F_0 at a point p ∈ γ_1 ∩ γ_2 such that the holonomy maps ∆_{γ_1}^λ, ∆_{γ_2}^λ : D_r → T are well-defined for all λ ∈ Λ, where D_r ⊂ T is the disk of radius r;

2. ∆_{γ_1}^λ and ∆_{γ_2}^λ do not have a common fixed point on D_r for λ ≠ 0.

Thus, the degenerate object is removed if γ_1 and γ_2 split to leaves that are different at least in U.

Proof of Theorem 2.2 for type 2: Let q ∈ γ_1\γ_2. Assume it is not a point of self-intersection of γ_1. Apply Lemma 2.1 to the curve γ, the point q, and the family of biholomorphisms Φ_λ = z + λ. Then π_λ^{-1} ∘ ∆_{γ_2} ∘ π_λ is a holonomy map along γ_2. Let T_1 be a transversal section to the foliation F_0 at the point q. The holonomy map along γ_1 for the foliation F_0 can be written as a composition ∆_{γ_1} = ∆_2 ∘ ∆_1, where ∆_1 is the holonomy map from the transversal section T to T_1, and ∆_2 is the holonomy map from T_1 to T. Then the holonomy map along γ_1 for the foliation F_λ is π_λ^{-1} ∘ ∆_2 ∘ Φ_λ ∘ ∆_1 ∘ π_λ. The point π_λ^{-1}(p) is an isolated fixed point for the holonomy map along γ_2 and is not a fixed point for the holonomy map along γ_1. Thus, the cycles split to different leaves.

Removal of non-transversal intersections of invariant manifolds and saddle connections. Let γ be a degenerate object of the foliation F_0 of types 3-9, and let F_λ be a family of local foliations for γ. In the sequel the words "invariant manifold" stand for a strongly invariant manifold or separatrix of a singular point, or for the stable/unstable manifolds of a complex cycle. These objects persist under the perturbation and depend holomorphically on the foliation. The local strongly invariant manifolds and separatrices of a_i and the local stable/unstable manifolds of γ_i persist under the perturbation and depend holomorphically on λ.
For each degenerate object of types 3 − 9, there are two invariant manifolds that meet non-transversally. Saddle connections are examples of non-transversal intersection. We denote the corresponding local invariant manifolds by M loc 1 and M loc 2 for the foliation F 0 , and by M loc 1 (λ) and M loc 2 (λ) for the foliation F λ . Note that for the degeneracy of type 5, M loc 1 = M loc 2 . For degenerate objects of types 3 − 5, we can take p to be any point in γ\(M loc 1 ∪ M loc 2 ). Notice that for all degenerate objects of types 3 − 9, γ\(M loc 1 ∪ M loc 2 ) ⊂ L, where L is a leaf of the foliation F 0 . Therefore, holomorphic extensions M 1 (λ) and M 2 (λ) of M loc 1 (λ) and M loc 2 (λ) along γ are well-defined. Definition 2.4. We say that γ can be eliminated in a holomorphic family of foliations F λ if there exists a transversal section T to the foliation F 0 , p ∈ T , so that M 1 (λ) and M 2 (λ) intersect transversally on T . Fix coordinates (z 1 , . . . , z n−1 ) on T . Apply Lemma 2.1 to the curve γ, the point p and Φ λ = z + λa. Assume that the points q 1 ∈ γ 1 , q 2 ∈ γ 2 . Outside of the flow-box, M 1 (λ) = π λ (M 1 ), M 2 (λ) = π λ (M 2 ). In a neighborhood of the point p: T ∩ M 2 (λ) = π z λ (m 2 ), T ∩ M 1 (λ) = Φ λ • π z λ (m 1 ). Therefore, by Sard's Theorem, for almost all a they intersect transversally. Construction of a global eliminating family. In this section we give the geometric conditions for degenerate objects to be holomorphically convex and show how to pass from a local foliation to a global one. Approximation theory. Working in the category of smooth vector fields, one can eliminate a non-transversality by perturbing the vector field only in a neighborhood of the non-transversality. In the holomorphic category no local perturbations are allowed. However, approximation theory gives a way to work locally: in some cases one can perturb the local picture and then approximate the perturbation by a global one.
In particular, for a holomorphic vector bundle on a Stein manifold, holomorphic sections over a neighborhood of a holomorphically convex set can be approximated by global holomorphic sections. This follows from the two theorems formulated below. Theorem 3.1. ([12], 5.6.2) Let X be a Stein manifold and ϕ a strictly plurisubharmonic function in X such that K c = {z : z ∈ X, ϕ(z) ≤ c} ⋐ X for every real number c. Let B be an analytic vector bundle over X. Every analytic section of B over a neighborhood of K c can then be uniformly approximated on K c by global analytic sections of B. Theorem 3.2. Let X be a Stein manifold, K a compact subset of X and U an open neighborhood of the holomorphic hull of K. Then there exists a function ϕ ∈ C ∞ (X) such that 1. ϕ is strictly plurisubharmonic; 2. ϕ < 0 in K but ϕ > 0 in X\U; 3. {z : z ∈ X, ϕ(z) < c} ⋐ X for every c ∈ R. Theorem 3.3. Let γ be a holomorphically convex degenerate object of a foliation F 0 . Then there exists a holomorphic family F λ of foliations on X that removes γ. Proof: Let s λ be local sections that determine local foliations that eliminate γ. Let λ 0 ∈ Λ be a parameter that does not belong to the exceptional real curve. By Theorems 3.1 and 3.2 there exists a global section S λ 0 that is ε-close to s λ 0 on U ′ , where γ ⋐ U ′ ⋐ U. Therefore, the family of foliations determined by S λ = S 0 + λ(S λ 0 − S 0 ) eliminates the degenerate object. Holomorphic convexity of a curve. We recall the definition of the holomorphic hull and give examples of holomorphic hulls of curves in Appendix 5.1. Consider a collection of C 1 -smooth real curves γ 1 , . . . , γ m in C N . Their holomorphic hull is described by Stolzenberg's Theorem [20]: Theorem 3.4. Let γ = γ 1 ∪ · · · ∪ γ m . Then h(γ)\γ is a (possibly empty) one-dimensional analytic subset of C N \γ. Proof: There exists a proper embedding of the Stein manifold X into C N for some large enough N ([12], Theorem 5.3.9). Let h(γ) be the holomorphic hull of γ in C N . By Stolzenberg's theorem h(γ)\γ is an analytic subset of C N \γ. Let us show that h(γ) ⊂ X. X is the maximal spectrum of the functions that are equal to zero on X ([8], Theorem VII, A18). Take a function f such that f (X) = 0.
Then f (h(γ)) = 0 since γ ⊂ X and h(γ) is the holomorphic hull of γ in C N . Thus, h(γ) ⊂ X. It remains to show that h(γ) = h X (γ). Any holomorphic function on X is a restriction of a holomorphic function on C N ([8], Theorem VII, A18). Therefore, h X (γ) = h(γ) ∩ X. Since h(γ) ⊂ X, h X (γ) = h(γ). Holomorphic convexity of a degenerate object. In this subsection we give geometric conditions for the degenerate objects to be holomorphically convex. Definition 3.1. We say that a path or a loop is simple if it does not have points of self-intersection. We need to extend the analytic set, given by Stolzenberg's theorem, to the boundary. In the sequel we need the following corollary of Stolzenberg's Theorem. Corollary 3.2. Let γ 1 , . . . , γ n be piece-wise smooth curves, such that γ i ∩ γ j consists of a finite number of points. Suppose that h(γ) ⊄ γ. Then there exists an arc α ⊂ γ i , such that α ⊂ ∂(h(γ)\γ), where γ = ∪γ i . Proof: Let π be a projection of γ 1 , . . . , γ n to a complex line C. One can choose π so that the image of γ has at most a finite number of points of transversal self-intersection. The image of γ separates C into several regions U i . Below we show that for each of the regions there is the following dichotomy: either π(h(γ)) ⊃ U i or π(h(γ)) ∩ int(U i ) = ∅. First, π(h(γ)\γ) is open. Therefore, π(h(γ)\γ) ∩ U i is open. Second, if w ∈ ∂π(h(γ)), then since h(γ) is compact, there exists z ∈ h(γ) such that π(z) = w. Therefore, z ∈ γ. Thus, π(h(γ)\γ) ∩ U i is either empty or coincides with U i . Take a point w ∈ π(γ) that is not a point of self-intersection and belongs to the boundary of π(h(γ)). A small arc around this point also belongs to π(h(γ)) and does not contain points of self-intersection. The preimage of this arc is the desired arc on γ. Lemma 3.1. Let α ⊂ γ be a real-analytic arc, such that α ⊂ ∂(h(γ)\γ). Let C be a holomorphic curve, such that α ⊂ C. Then there exists a loop γ̃ ⊂ γ, so that α ⊂ γ̃ ⊂ γ ∩ C and γ̃ is null-homologous on C.
Proof: One can take a neighborhood U ⊂ X of the arc α, such that 1. U ∩ γ = α; 2. the connected component of C ∩ U that contains α is a submanifold of U; 3. the arc α separates this connected component into two pieces. Let Ω 1 , Ω 2 be these pieces. Let h 1 denote a connected component of h(γ)\γ. Apply Theorem 3.5 to the analytic sets h 1 and Ω 1 and the arc α. The closure of h 1 in U contains α. The closure of Ω 1 also contains α. Therefore, either h 1 = Ω 1 or h 1 ∪ α ∪ Ω 1 is an analytic subset of U. In the second case h 1 = Ω 2 . Thus, h 1 = Ω 1 or h 1 = Ω 2 . If two analytic sets coincide locally, then they coincide globally. Therefore, h 1 ⊂ C. By the Maximum Modulus Principle, ∂h(γ) ⊂ γ. Denote γ̃ = ∂h 1 ; then it is a loop and is null-homologous on C. Theorem 3.6. Let γ be a degenerate object of the foliation F . When γ = ∪γ i , we assume the γ i are simple and piece-wise real-analytic. Suppose γ satisfies the following additional conditions: type 1: γ is non-homologous to 0 on the leaf L. Then γ is holomorphically convex. Note 3.1. If γ satisfies the geometric conditions listed above, then we say that γ is a geometric degenerate object. Proof: Suppose γ is not holomorphically convex. type 1: Since γ is a simple cycle, by Lemma 3.1, γ is null-homologous on L, which contradicts the hypothesis. type 2: The proof is the same as for type 1. π −1 (γ) is a simple path on S̃. It does not bound a region on S̃. Therefore, its image does not bound a region on S, which contradicts Lemma 3.1. type 5: The same as for type 1. type 6: Let α be an arc, given by Corollary 3.2. Suppose α ⊂ γ 1 . Let C be a curve, given by Lemma 3.1. Then C is either a saddle connection or C ⊂ M 1 . Saddle connections are removed. Therefore, by Lemma 3.1 γ 1 bounds a region on C, which contradicts the hypothesis. type 7: The proof is the same as for type 6. type 8: Let α be an arc, given by Corollary 3.2.
Suppose α ⊂ γ 1 ; then by Lemma 3.1, γ 1 is null-homologous on L 1 , which contradicts the hypothesis. If α ⊂ γ 3 , then we proceed by the same reasoning as for type 6. type 9: The proof is the same as for type 8. 4 Simultaneous elimination of degeneracies. Landis-Petrovskii's lemma. The idea is to encode the degeneracies by countably many objects. To give a feeling of the method used, we first prove a version of the Landis-Petrovskii Lemma [16] that we need in the sequel. Lemma 4.1. For a holomorphic 1-dimensional (singular) foliation F of a Stein manifold X there exist not more than countably many isolated complex cycles on the leaves of the foliation. Proof: Since the manifold X is Stein, it can be embedded into C N . Take a cycle γ on a leaf L. Fix coordinates (z 1 , . . . , z N ) in C N . Let C 1 , . . . , C N be the coordinate lines, C i = {z 1 = · · · = ẑ i = · · · = z N = 0}, where the hat means that the corresponding equation is omitted. Suppose that L does not belong to the hypersurface {z i = c} for any c ∈ C. By perturbing γ on the leaf L one can assume that there exists a small neighborhood U ⊃ γ so that π i | U is a biholomorphism onto its image (here π i : C N → C i , π i (z) = z i is the projection). Then one can perturb γ inside U so that π i (γ) becomes a piece-wise linear curve with rational vertices. Definition 4.1. We will say that the cycle γ ′ lies over the piece-wise linear curve g ′ if there exist a representative of γ ′ and its neighborhood U ′ , such that U ′ is projected biholomorphically to its image and the representative is projected to g ′ . Note that any cycle lies over countably many piece-wise linear curves. Take one of the vertices of π i (γ), say with coordinate z i = c. The hypersurface {z i = c} intersects X in a (k − 1)-dimensional variety, such that for any cycle γ ′ lying over π i (γ), it is transversal to the foliation in a neighborhood of γ ′ ∩ {z i = c}. The holonomy map along γ is well-defined in some neighborhood of the intersection {z i = c} ∩ γ.
The holonomy map does not have any other fixed points in some smaller neighborhood. Thus, each cycle that projects to the same piece-wise linear curve gives a neighborhood on the hyperplane {z i = c} ⊂ C N , so that the neighborhoods for two different cycles do not intersect each other. Therefore, there exist not more than countably many limit cycles that project to the same curve. Since there are only countably many curves, there are not more than countably many limit cycles. The Landis-Petrovskii Lemma implies that once all non-isolated cycles are eliminated, all leaves except for countably many are homeomorphic to disks. Simultaneous elimination of non-isolated cycles. If there are non-isolated cycles on the leaves of a foliation F , then the number of cycles is obviously uncountable. However, the strategy described above can be applied. Our idea is to catch the degenerations by a countable number of holonomy maps. Proof: Since X is Stein, it can be embedded into C N . We can restrict ourselves to foliations without leaves that belong to the hypersurfaces {z N = c}, c ∈ C. The set of such foliations is open and dense. We describe the holonomy maps that catch all the cycles for all foliations. We introduce the following notation: • A is a countable, everywhere dense subset in the set of holomorphic foliations; • G is the set of all closed piecewise-linear curves with rational vertices on {z 1 = · · · = z N −1 = 0}, with one marked vertex; • Let τ q = {z N = q} ∩ X, where q ∈ Q + iQ. Let Q q be a countable everywhere dense set on τ q . Let Q = ∪ q Q q . Let z = (z 1 , . . . , z N −1 ), u = z N . Consider a 4-tuple α = (F , g, z, r) ∈ (A, G, Q q , Q), such that q is the marked point of g. We require that the holonomy map for the foliation F at the point z along g is well-defined in a neighborhood of z on the transversal section τ q and has radius of convergence greater than r. Let ∆ α be the germ of this holonomy map.
One can consider the germ of the holonomy map along the lifting of g, starting at z, for foliations close to F . Therefore, we think of ∆ α as a function of two variables: a foliation close to F , and a point on the transversal section τ q . Below we fix a specific representative of ∆ α . We use the same notation for the specific representative as for the germ. Let V α be the connected component, containing F , of the set of foliations for which the holonomy map along g at the point z is well-defined and has radius of convergence greater than r. The domain of definition of ∆ α is {(F ′ , z ′ ) | F ′ ∈ V α , |z ′ − z| < r}. Note that V α is open. From this point on we consider fixed representatives, rather than germs. Lemma 4.2. Every complex cycle corresponds to a fixed point of ∆ α (F ′ , ·) for some α and F ′ ∈ V α . Proof: Let γ be a complex cycle on a leaf L of a foliation F . One can perturb γ on L so that it projects to some g ∈ G. Let u(g) be one of the vertices of the projection, and let z ∈ γ be the preimage of u(g). Consider the holonomy map along γ in a neighborhood of z in the transversal section C = {u = u(g)}. Take a point z 1 ∈ Q such that |z − z 1 | < r z (F )/4, where r z (F ) is the radius of convergence of the holonomy map at the point z along γ for the foliation F . Note that r z 1 (F ) > r z (F )/2. One can take F 1 close to F so that r z 1 (F 1 ) > r z (F )/2. Denote α = (F 1 , g, z 1 , r), where r ∈ Q, r z (F )/4 < r < r z (F )/2. Then r < r z 1 (F 1 ). Also, F ∈ V α , because r z 1 (F ) > r. Since r > r z (F )/4, the point z belongs to the domain of definition of ∆ α (F 1 , ·). Lemma 4.3. Fix ∆ α . The set D α ⊂ V α of foliations F such that ∆ α (F , ·) has a non-hyperbolic fixed point whose corresponding cycle γ satisfies the additional conditions 1. γ is simple, 2. γ is null-homologous on the leaf, is closed and nowhere dense in V α .
Proof: We prove that in a finite number of steps we can perturb the foliation F so that ∆ α (F̃ , ·) has isolated fixed points only in the domain of definition discussed above. Assume that A is the set of fixed points of ∆ α (F , ·). Let A be k-dimensional. As we show in the appendix, one can associate a multiplicity m(A) to the analytic set A. Take a point z that is a generic point of a k-dimensional stratum A i . By Theorem 3.3, there exists a neighborhood of z and a foliation F̃ , arbitrarily close to F , such that the holonomy map of F̃ along γ has isolated fixed points only in this neighborhood. This perturbation destroys the component A i . Therefore, it either decreases the dimension of A, or it decreases the multiplicity m(A) (see Lemma 5.4). Therefore, after a finite number of steps, only isolated cycles are left. By Theorem 3.3, they can be turned into hyperbolic ones in a finite number of steps as well. Proof: The construction is similar to Section 4.2. The difference is that one needs to consider pairs of holonomy maps, and the analytic condition is that they do not have a common fixed point. Simultaneous elimination of separatrices and non-transversal intersections of invariant manifolds. Proof: We outline the proof for strongly invariant manifolds of different singular points. For other types of degenerate objects the proof goes along the same lines. Since X is a Stein manifold, it can be embedded into C N . We fix the countable set of data α = (F , a 1 , M 1 , a 2 , M 2 , g, z 1 , r): • F ∈ A, where A is a countable everywhere dense set of foliations. Foliations with hyperbolic singular points only form a residual set [4]. Therefore, we can assume that all singular points of all the foliations F ∈ A are hyperbolic. • a 1 , a 2 are hyperbolic singular points of F . • M 1 , M 2 are strongly invariant manifolds of a 1 and a 2 correspondingly. We associate the maximal radius r i to the singular point a i . Definition 4.2.
The radius r i is the maximal radius such that M i is transversal to ∂U r (a i ) for all r < r i . Note that the maximal radius is a lower semicontinuous function on the space of foliations. Let π : X → C be the projection to C = {z 1 = · · · = z N −1 = 0}, π(x 1 , . . . , x N ) = x N . • g ⊂ C is a piecewise linear curve with rational vertices. Let u 1 , u 2 be the starting and the ending points of g correspondingly. We require that u 1 ∈ π(U r 1 (a 1 )), u 2 ∈ π(U r 2 (a 2 )). • z 1 ∈ Q q , where Q q is an everywhere dense set on the transversal section τ 1 = {z N = u 1 = q} ∩ X in U r 1 (a 1 ). We require that there is a well-defined lift of g to the leaf L of the foliation F that starts from the point z 1 . The lift is denoted by γ. Let z 2 be the lift of u 2 . We require that z 2 ∈ U r 2 (a 2 ). Let τ 2 = {z N = u 2 } ∩ X. There is a well-defined germ ∆ : τ 1 → τ 2 of the holonomy map along γ at the point z 1 . As before, we think of ∆ as a function of two variables: a foliation G, close to F , and a point on the transversal section τ 1 . • r ∈ Q + . We require that 1. r is less than the radius of convergence of ∆; 2. the disk D r (z 1 ) on the transversal section τ 1 of radius r with the center z 1 is compactly contained in U r 1 (a 1 ); 3. ∆(D r (z 1 )) is compactly contained in U r 2 (a 2 ). We fix a representative ∆ α of ∆. Below we describe the neighborhood U α of F . G belongs to U α if 1. there is a holomorphic family of foliations F λ , so that F 0 = F , F 1 = G, and for all λ ∈ D 1 there are unique hyperbolic singular points a λ 1 ∈ U r 1 /2 (a 1 ) and a λ 2 ∈ U r 2 /2 (a 2 ) of the foliation F λ . Let a ′ 1 , a ′ 2 be the singular points of G, obtained via holomorphic continuation. Let M ′ 1 , M ′ 2 be the corresponding strongly invariant manifolds. Let r ′ 1 , r ′ 2 be the maximal radii for (a ′ 1 , M ′ 1 ), (a ′ 2 , M ′ 2 ). 2. z 1 ∈ U r ′ 1 (a ′ 1 ), z ′ 2 ∈ U r ′ 2 (a ′ 2 ), where z ′ 2 is the lift of u 2 along g for G. 3.
D r (z 1 ) is compactly contained in U r ′ 1 (a ′ 1 ). 4. ∆(G, D r (z 1 )) is compactly contained in U r ′ 2 (a ′ 2 ). The domain of definition of ∆ α is U α × D r (z 1 ). Lemma 4.4. For any α, the set D α ⊂ U α of foliations G ⊂ U α for which there exists a leaf L such that 1. the lift of u 1 to L is in U r 1 (a 1 ), the lift of u 2 to L is in U r 2 (a 2 ); 2. the lift of g belongs to the strongly invariant manifold M ′ 1 of the singular point a ′ 1 of G (a ′ 1 is a holomorphic continuation of a 1 ); 3. the lift of u 2 belongs to the strongly invariant manifold M ′ 2 of the singular point a ′ 2 (a ′ 2 is a holomorphic continuation of a 2 ); 4. the lift of u 2 is a point of a non-transversal intersection of M ′ 1 and M ′ 2 ; is a closed and nowhere dense set. Proof: The proof follows from the local Theorem 3.3 in the same way as in Lemma 4.3. The desired residual set is obtained by intersecting the open everywhere dense sets from the above corollary. Proof: If a leaf L is not contractible, then there exists a simple loop γ ⊂ L, non-homologous to zero on L. The foliation F does not have geometric non-isolated cycles. Hence, if L is a non-contractible leaf of the foliation F , then there is a geometric isolated cycle γ ⊂ L. By the Landis-Petrovskii Lemma (Section 4.1), there are at most countably many isolated cycles. Thus, all leaves, except for countably many, are contractible. Proofs of main theorems. If H 1 (L, Z) ≠ 0, Z, then there exists a pair of cycles γ 1 , γ 2 ⊂ L that satisfy the geometric conditions. Since the foliation F does not have a pair of geometric cycles, all non-separatrix leaves L are either contractible or have H 1 (L, Z) = Z. Since the foliation F does not have geometric degenerate objects of types 3-5, one can show in the same way that separatrix leaves are topological cylinders. Suppose there is a non-transversal intersection of invariant manifolds M 1 , M 2 . Let p be a point of non-transversal intersection. Let L be a leaf of the foliation such that p ∈ L.
Since L ⊂ M 1 , there is a path γ 1 ⊂ L that connects p with a point q ∈ M loc 1 ; one can assume that γ 1 is simple and piece-wise real-analytic. In the same way, there is a leaf-wise path γ 2 from p to M loc 2 . Thus we have constructed a geometric degenerate object of type 6 − 9, which contradicts the hypothesis. Thus Theorem 1.2 is an immediate consequence of Theorem 1.1 and Theorem 4.3. In the same way one shows that Theorem 1.3 is a corollary of Theorem 4.3. Appendix. Stein manifolds. In this subsection we state well-known facts about Stein manifolds. For the proofs and further discussion, consult [12]. The Whitney Embedding Theorem states that any smooth m-dimensional manifold can be smoothly embedded into Euclidean 2m-space. For complex holomorphic manifolds the situation is different. There are complex manifolds that cannot be holomorphically embedded as submanifolds into C n . Moreover, there are ones that do not admit any global holomorphic functions except for constants. By the Maximum Modulus Theorem and Liouville's Theorem, compact manifolds do not admit any nonconstant global holomorphic functions. Informally speaking, Stein manifolds are the ones which do have an ample supply of holomorphic functions. We start our discussion of Stein manifolds with the definition of the holomorphic hull. This notion plays an important role in the theory. Definition 5.1. Let K be a compact subset of a complex manifold X; the O(X)-hull of K is the set h X (K) = {u : |f (u)| ≤ max{|f (x)| : x ∈ K} for all f ∈ O(X)}, where O(X) are the holomorphic functions on X. Note 5.1. We also call the O(X)-hull the holomorphic hull, when it is clear from the context what the ambient manifold is. The notation h(K) is used in that case. Example: let γ = {(z, w) ∈ C 2 | |z| = 1, z = w̄}. Then h C 2 (γ) = γ. Proof: The function f (z, w) = zw − 1 is equal to zero on γ. Therefore, h(γ) ⊂ {f = 0}. Take a point (z 0 , w 0 ) ∈ C 2 . • If |z 0 | > 1, then |z 0 | > max{|z| : (z, w) ∈ γ}. The function z is a global holomorphic function.
Therefore, the point (z 0 , w 0 ) does not belong to h(γ). • If |z 0 | < 1, then |w 0 | > 1, so |w 0 | > max{|w| : (z, w) ∈ γ}. Therefore, the point (z 0 , w 0 ) does not belong to h(γ). • If |z 0 | = 1, then z 0 = w̄ 0 , so (z 0 , w 0 ) ∈ γ. Thus, h(γ) = γ. Definition 5.2. A complex analytic manifold X of dimension n is said to be a Stein manifold if 1. for every compact set K its holomorphic hull h(K) is also compact; 2. if z 1 and z 2 are two different points of X, then f (z 1 ) ≠ f (z 2 ) for some f ∈ O(X); 3. for every z ∈ X, one can find n functions f 1 , . . . , f n ∈ O(X) which form a coordinate system at z. Below we give one more equivalent definition of a Stein manifold in terms of a plurisubharmonic function, which is often used in practice. 2. For arbitrary z and w ∈ C n , the function τ → ϕ(z + τ w) is subharmonic in the part of C where it is defined. Equivalently, for ϕ ∈ C 2 , Σ j,k ∂ 2 ϕ(z)/∂z j ∂z̄ k w j w̄ k ≥ 0, (2) where z ∈ Ω, w ∈ C n . Definition 5.4. A function ϕ is strictly plurisubharmonic if the inequality (2) is strict. The notion of plurisubharmonicity does not depend on the choice of holomorphic coordinates. Therefore, it is well-defined on all complex manifolds. Complex foliations. Definitions 5.5-5.9 are from [15]. They are scattered throughout the text, so we provide them here for the convenience of the reader. Definitions 5.10 and 5.11 can be found in [21] and [3] correspondingly. Definition 5.5. Let F be a foliation of a complex manifold X. Let γ : [0, 1] → X be a path on X. Let T 0 and T 1 be two transversal sections to F , passing through γ(0) and γ(1) respectively. Then for any initial point x ∈ T 0 close to γ(0), leaf-wise curves starting from x, staying close to γ, and arriving at T 1 arrive at a well-defined point ∆ γ (x). Thus, we obtain a map ∆ γ (x), which we call the holonomy map. If γ : [0, s] → X is a closed curve and T is a transversal section to F passing through γ(0), the map ∆ γ : T → T is called the holonomy map as well.
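A standard worked example of a holonomy map (illustrative only; the linear vector field below is an assumption for the sketch, not one of the paper's objects):

```latex
% Foliation of C^2 by the linear vector field
\dot z = z, \qquad \dot w = a\,w, \qquad a \in \mathbb{C}\setminus\mathbb{R};
% leaves off \{z = 0\} satisfy dw/dz = a\,w/z.
% Lift the loop z(t) = e^{2\pi i t}, t \in [0,1], lying on the
% separatrix \{w = 0\}, to nearby leaves:
\frac{dw}{dt} = 2\pi i\,a\,w
\quad\Longrightarrow\quad
\Delta_\gamma(w) = e^{2\pi i a}\,w
% on the transversal section \{z = 1\}.  The cycle is hyperbolic iff
|e^{2\pi i a}| = e^{-2\pi \operatorname{Im} a} \neq 1,
% i.e. iff \operatorname{Im} a \neq 0.
```

Note how hyperbolicity of the cycle is read off directly from the multiplier of the holonomy, in line with Definition 5.6.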
Figure 8: A holonomy map. Definition 5.6. A complex cycle is a nontrivial free homotopy class of loops on a leaf of a foliation. It is called isolated if it corresponds to an isolated fixed point of its holonomy map. It is hyperbolic if its holonomy map is hyperbolic, i.e. its linearization is non-degenerate and the eigenvalues of the linearization do not belong to the unit circle. Definition 5.7. Let γ be a hyperbolic cycle. The holonomy map ∆ γ has stable and unstable manifolds m loc 1 , m loc 2 . The unions of leaves that pass through m loc 1 and m loc 2 are called the stable and unstable manifolds of γ. Definition 5.8. A singular point is called complex hyperbolic if it is non-degenerate and the ratio of any two eigenvalues is not real. In this paper we work only with complex hyperbolic singular points, so we reserve the word "hyperbolic" for complex hyperbolicity. Note that complex hyperbolicity plays a similar role for the theory of complex vector fields as hyperbolicity does for the theory of real vector fields. In particular, if the point is complex hyperbolic, then the phase portrait of the vector field in a neighborhood of the singularity is homeomorphic to the phase portrait of its linearization [3]. See ([15], Section 29) for a thorough consideration of the properties of complex hyperbolic points. Definition 5.9. A complex separatrix of a singular holomorphic foliation F at a singular point a ∈ Σ(F ) is a local leaf L ⊂ (U, a)\Σ whose closure L ∪ a is a germ of an analytic curve. Definition 5.10. A saddle connection is a common separatrix of two singular points (see Fig. 2). Definition 5.11. Suppose a is a hyperbolic singular point of the foliation F . Let λ 1 , . . . , λ n be the eigenvalues of a. Let l be a line passing through the origin in C. Let λ = (λ i 1 , . . . , λ i k ) be the eigenvalues of a that lie on one side of the line l. Let α λ be the subspace spanned by the eigenspaces of all elements of λ.
The local strongly invariant manifold M loc λ is a manifold tangent to α λ . The global strongly invariant manifold M λ is obtained by taking the union of the leaves that belong to the local strongly invariant manifold. Strongly invariant manifolds exist [15]. The proof can be easily modified to show that they depend holomorphically on a vector field (on a foliation). Suppose that v is a vector field that determines the foliation locally. Strongly invariant manifolds are the stable and unstable manifolds of the time-one map Φ 1 cv of the vector field cv, where c ∈ C * is taken so that l becomes the imaginary axis. If one considers the real flow of the vector field cv, then locally the strongly invariant manifolds coincide with the stable and unstable manifolds [3]. Holomorphic vector bundle associated to a foliation. Take a 1-dimensional singular holomorphic foliation F of a Stein manifold M. One can naturally associate a linear bundle B F to F . Notice that a 1-dimensional holomorphic foliation with singular locus of codimension 2 is locally determined by a holomorphic vector field [15]. Consider a covering of the Stein manifold by open contractible sets U i . On each set U i the foliation is determined by a holomorphic vector field v i . For a pair of intersecting sets U i and U j define a function g ij = v i /v j . This function is well-defined on (U i ∩ U j )\{v j = 0}. The set {v j = 0} has codimension 2. Therefore, g ij can be extended to U i ∩ U j . In the same way g ji = v j /v i can be extended to a well-defined function on U i ∩ U j . Since g ij g ji = 1, the function g ij does not vanish on U i ∩ U j . The set of functions {g ij } forms a cocycle; therefore, they define a linear bundle. Lemma 5.1. A 1-dimensional singular holomorphic foliation F of a Stein manifold X is determined by a global section of the vector bundle T X ⊗ B F . Proof: The lemma follows from the construction of B F . If H 2 (X, Z) = 0, then each foliation on X is determined by a global vector field.
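A minimal linear sketch of Definition 5.11 (the diagonal vector field and the choice of the splitting line l below are illustrative assumptions, not taken from the text):

```latex
% Diagonal hyperbolic singular point at 0 in C^3:
v = \lambda_1 z_1\,\partial_{z_1} + \lambda_2 z_2\,\partial_{z_2}
  + \lambda_3 z_3\,\partial_{z_3},
\qquad \lambda_i/\lambda_j \notin \mathbb{R} \ (i \neq j).
% If the line l through the origin separates \{\lambda_1, \lambda_2\}
% from \{\lambda_3\}, then \lambda = (\lambda_1, \lambda_2) and
\alpha_\lambda = \operatorname{span}(e_1, e_2) = \{z_3 = 0\},
\qquad M^{loc}_\lambda = \{z_3 = 0\},
% which is already invariant since v is linear; the coordinate axes
% \{z_j = z_k = 0\} are separatrices of the singular point.
```

In this linear model the local strongly invariant manifold is simply the sum of the chosen eigenspaces; in general it is only tangent to that sum.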
In particular, this holds for foliations on C n . Topology of uniform convergence on compact non-singular sets. The description of the topology on the space of foliations in C n is given, for example, in [11]. Let X be a Stein manifold. We fix a compact exhaustion: K 1 ⋐ · · · ⋐ K n ⋐ · · · ⋐ X, where K 1 , . . . , K n , . . . are compact subsets of X, closures of open connected subsets of X, and ∪ n K n = X. Let d 1 be a metric on X and d 2 be a metric on the projectivization of its tangent bundle P T X. A basis of neighborhoods of the foliation F is formed by the sets U n,ε = {G | G is nonsingular in K ε,n = K n \U ε (Σ(F )) and the tangent directions to the foliations F and G are ε-close on K ε,n }. Note that the obtained topology does not depend on the choice of the compact exhaustion and the choice of the metrics d 1 and d 2 . The set of foliations of X has countably many connected components, parametrized by the Chern classes of the linear bundles associated to the foliations. The set of sections of T X ⊗ B F is equipped with the topology of uniform convergence on compact sets. (See the previous subsection for the description of the linear bundle B F .) The map from the space of sections to the space of foliations is continuous. Multiplicity. We consider analytic subsets A of a polydisk D̄ n , i.e. we assume that A is an analytic subset of some neighborhood of D̄ n . Suppose that A is given by a system of n equations f 1 = · · · = f n = 0. Assume that A is k-dimensional. We want to define a multiplicity of A which does not increase under perturbations. Lemma 5.2. There are only finitely many strata of maximal dimension. Proof: The number of strata is locally finite [5]. Since A is an analytic subset of D̄ n , it is globally finite. Let A 1 , . . . , A m be the strata of maximal dimension. Take a smooth point z ∈ A i . Consider a transversal section T to A i at the point z. Let f̃ 1 , . . . , f̃ n be the restrictions of f 1 , . . . , f n to T . The point z is an isolated solution of the system f̃ 1 = · · · = f̃ n = 0.
where O D n−k ,z is the local ring of z ∈ D n−k , i.e. the functions regular in a neighborhood of z ∈ D n−k , and <f̃ 1 , . . . , f̃ n > is the ideal in O D n−k ,z generated by f̃ 1 , . . . , f̃ n . Proof: In ([1], Chapter 2, 5.7) it is proved for k = 0. In the general case the proof goes the same way. Definition 5.13. The multiplicity of z ∈ A i is the multiplicity of the point z as an isolated solution of f̃ 1 = · · · = f̃ n = 0. It is easy to see that the multiplicity does not depend on the choice of a generic point and a transversal section T . Definition 5.14. The multiplicity of a stratum A i is the multiplicity of a generic point. The multiplicity of A is the sum of the multiplicities of the A i . Proof: Let T 1 , . . . , T m be transversal sections to the A i 's at generic points. Every A ′ i intersects at least one of the sections T 1 , . . . , T m . One can also assume that the T i 's meet the A i 's transversally. On each transversal section the result follows from Lemma 5.3. 3. invariant manifolds of different singular points intersect transversally; 4. invariant manifolds of complex cycles intersect transversally with each other and with strongly invariant manifolds of singular points. Theorem 1.3. For a generic 1-dimensional singular holomorphic foliation: 1. all singular points are complex hyperbolic; 2. let a 1 be a complex hyperbolic singular point of the foliation, and let M 1 and M 2 be strongly invariant manifolds of the point a 1 such that M loc 1 ∩ M loc 2 = a 1 ; then M 1 and M 2 intersect transversally everywhere. Figure 2: A path on a saddle connection and a loop on a homoclinic saddle connection. 5. A non-trivial loop γ on a separatrix that passes through a singular point a (see Fig. 3). Figure 3: A loop on a separatrix. 6. A union of paths γ 1 and γ 2 (see Fig. 4): Figure 4: A non-transversal intersection of strongly invariant manifolds.
The leaf L on the picture is not a separatrix; it spirals around the singular points a 1 and a 2 . Figure 5: A homoclinic non-transversal intersection of strongly invariant manifolds. Note 2.1. Note that if M loc 1 and M loc 2 are separatrices, then the holomorphic family eliminates the saddle connection. Proof of Theorem 2.2 for types 3-9: One can assume that in a neighborhood of a point p, M 1 and M 2 are biholomorphically equivalent to m 1 × D, m 2 × D, where m 1 = M 1 ∩ D 1 , m 2 = M 2 ∩ D 1 , D is a neighborhood of p on the leaf L, and D 1 is a neighborhood of p on the transversal section T . Fix coordinates (z 1 , . . . , z n−1 ) on T . Corollary 3.1. The statement of the theorem is true if one replaces C n by a Stein manifold. Theorem 3.5 ([5]). Let M be a connected (2p − 1)-dimensional C 1 -submanifold of a complex manifold Ω. Let A 1 , A 2 be irreducible p-dimensional analytic subsets of Ω\M such that the closure of each of them contains M. Then either A 1 = A 2 or A 1 ∪ M ∪ A 2 is an analytic subset of Ω. type 2: (a) γ 1 and γ 2 have only one common point; (b) γ 1 and γ 2 are not null-homologous and are not multiples of the same cycle in the homology group of L. type 5: γ is not null-homologous on S. type 8: γ 1 ⊂ L is non-homologous to 0 on L; γ 1 and γ 2 have only one common point. type 9: γ 1 ⊂ L 1 , γ 2 ⊂ L 2 are non-homologous to zero on L 1 , L 2 correspondingly, L 1 ≠ L 2 ; the curves γ 1 and γ 3 , and γ 2 and γ 4 , have only one common point. type 3: γ is simply connected; therefore, by Lemma 3.1 it should bound a region on S, which contradicts the hypothesis. type 4: Let S̃ be the surface obtained from S ∪ {a} by splitting the local components of S at the point a. Let π : S̃ → S be the corresponding projection. Theorem 4.1.
There exists a residual set R_1 in the space of 1-dimensional singular holomorphic foliations that do not have geometric degenerate objects of type 1. Corollary 4.1. The complement of D_α in the set of all foliations contains an open everywhere dense set. The residual set is obtained by intersecting open everywhere dense sets from the corollary above. 4.3 Simultaneous splitting of cycles to different leaves. Theorem 4.2. There exists a residual set in the space of singular holomorphic 1-dimensional foliations that do not have geometric degenerate objects of type 2. Theorem 4.3. There exists a residual set in the space of singular holomorphic 1-dimensional foliations that do not have geometric degenerate objects of types 3−9. Theorem 4.4. A foliation F that does not have geometric degenerate objects of types 1−5 satisfies Theorem 1.1. Theorem 4.5. If a foliation F does not have geometric degenerate objects of types 1−6, 8−9, then it is complex Kupka-Smale. Proof: By Theorem 4.4 all leaves of the foliation F are either contractible or cylinders. Since the foliation does not have geometric non-hyperbolic cycles, all cycles are hyperbolic. Note 5.2. The holomorphic hull is a reasonable notion only if the manifold has an ample supply of holomorphic functions. For instance, it is an important notion for the compact subsets of C^n. Example 5.1. The holomorphic hull of the curve {|z| = 1} ⊂ C is {|z| ≤ 1}, i.e. the curve together with its interior in C. Figure 7: The holomorphic hull of the curve {|z| = 1} in C: h({|z| = 1}) = {|z| ≤ 1}. Proof: By the Maximum Modulus Principle, the points z with |z| ≤ 1 belong to the holomorphic hull. Take a point z_0 so that |z_0| > 1. By considering the global holomorphic function z we see that this point does not belong to the holomorphic hull. {(z, w) ∈ C^2 | |z| = 1, z = w̄}. Definition 5.3. A function ϕ defined in an open set Ω ⊂ C^n with values in [−∞, +∞) is plurisubharmonic if 1.
it is semicontinuous from above. Fact 5.5. A complex manifold X is a Stein manifold if and only if there exists a strictly plurisubharmonic function ϕ ∈ C^∞(X) such that Ω_c = {z ∈ X : ϕ(z) < c} ⋐ X for any real number c. The sets Ω_c are O(X)-convex. Figure 9: A line separating eigenvalues. Definition 5.12. Let z be an isolated point of a system of equations f_1 = · · · = f_n = 0, defined in an (n−k)-dimensional polydisk D^{n−k}. The multiplicity m(z) of the point z is the dimension of O_{D^{n−k},z} / <f_1, . . . , f_n>, Lemma 5.3. The multiplicity does not increase under perturbations, i.e. if z′_1, . . . , z′_m are isolated solutions of a perturbed system in a neighborhood of a point z, then Σ_{i=1}^{m} m(z′_i) ≤ m(z). Lemma 5.4. The multiplicity of A does not increase under perturbations, i.e. let A′_1, . . . Fact 5.1. C^n is a Stein manifold. Fact 5.2. Every closed submanifold of a Stein manifold is a Stein manifold. In fact there is the Embedding Theorem for Stein manifolds. Fact 5.3. Every Stein manifold can be holomorphically embedded as a closed submanifold into C^N.

References

[1] V. I. Arnol'd, S. M. Gusein-Zade, and A. N. Varchenko, Singularities of differentiable maps. Vol. II. The classification of critical points, caustics and wave fronts, Monographs in Mathematics, Vol. 82, Birkhäuser Boston, Inc., Boston, MA, 1985.
[2] G. T. Buzzard, Kupka-Smale Theorem for Automorphisms of C^n, Duke Math. J. 93 (1998), 487-503.
[3] M. Chaperon, C^k-conjugacy of holomorphic flows near a singularity, Inst. Hautes Études Sci. Publ. Math. 64 (1986), 143-183.
[4] M. Chaperon, Generic complex flows,
Complex Geometry II: Contemporary Aspects of Mathematics and Physics (2004), 71-79.
[5] E. M. Chirka, Complex analytic sets, Kluwer, Dordrecht, 1989.
[6] A. Candel and X. Gomez-Mont, Uniformization of the leaves of a rational vector field, Annales de l'Institut Fourier 45(4) (1995), 1123-1133.
[7] T. S. Firsova, Topology of analytic foliations in C^2. Kupka-Smale property, Proceedings of the Steklov Institute of Mathematics 254 (2006), 152-168.
[8] R. Gunning and H. Rossi, Analytic functions of several complex variables, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1965.
[9] A. Glutsyuk, Hyperbolicity of phase curves of a general polynomial vector field in C^n, Func. Anal. Appl. 28(2) (1994), 1-11.
[10] T. I. Golenishcheva-Kutuzova, A generic analytic foliation in C^2 has infinitely many cylindrical leaves, Proc. Steklov Inst. Math. 254 (2006), 180-183.
[11] T. Golenishcheva-Kutuzova and V. Kleptsyn, Minimality and ergodicity of a generic foliation of C^2, Ergod. Th. & Dynam. Sys. 28 (2008), 1533-1544.
[12] L.
Hörmander, An Introduction to complex analysis in several variables, third edition, North Holland, Netherlands, 1990.
[13] Yu. Ilyashenko, Selected topics in differential equations with real and complex time, in: Normal forms, bifurcations and finiteness problems in differential equations, NATO Sci. Ser. II Math. Phys. Chem. 137 (2004), 317-354.
[14] Yu. Ilyashenko, Some open problems in real and complex dynamical systems, Nonlinearity 21(7) (2008), 101-107.
[15] Yu. Ilyashenko and S. Yakovenko, Lectures on analytic differential equations, Graduate Studies in Mathematics, vol. 86, Amer. Math. Soc., 2008.
[16] E. Landis and I. Petrovskii, On the number of limit cycles of the equation dy/dx = P(x,y)/Q(x,y), where P and Q are of second degree (Russian), Mat. Sbornik 37(79), 2 (1955).
[17] A. Lins Neto, Simultaneous uniformization for the leaves of projective foliations by curves, Bol. Soc. Brasil. Mat. (N.S.) 25, no. 2 (1994), 181-206.
[18] V. Moldavskis, New generic properties of complex and real dynamical systems, PhD thesis, Cornell University (2007).
[19] Yum-Tong Siu, Every Stein subvariety admits a Stein neighborhood, Inventiones Math. 38 (1976), 89-100.
[20] G. Stolzenberg, Uniform approximation on smooth curves, Acta Math. 115, 3-4 (1966), 185-198.
[21] D. S. Volk, The density of separatrix connections in the space of polynomial foliations in CP^2, Proc. Steklov Inst. Math. 3(254) (2006), 169-179.
[22] J. Wermer, The hull of a curve in C^n, Annals of Mathematics 68, 3 (1958).
[]
[ "Graphs with Flexible Labelings allowing Injective Realizations", "Graphs with Flexible Labelings allowing Injective Realizations" ]
[ "Georg Grasegger ", "Jan Legerský ", "Josef Schicho " ]
[]
[]
We consider realizations of a graph in the plane such that the distances between adjacent vertices satisfy the constraints given by an edge labeling. If there are infinitely many such realizations, counted modulo rigid motions, the labeling is called flexible. The existence of a flexible labeling, possibly non-generic, has been characterized combinatorially by the existence of a so called NAC-coloring. Nevertheless, the corresponding realizations are often non-injective. In this paper, we focus on flexible labelings with infinitely many injective realizations. We provide a necessary combinatorial condition on existence of such a labeling based also on NAC-colorings of the graph. By introducing new tools for the construction of such labelings, we show that the necessary condition is also sufficient up to 8 vertices, but this is not true in general for more vertices.
10.1016/j.disc.2019.111713
[ "https://arxiv.org/pdf/1811.06709v1.pdf" ]
119,604,882
1811.06709
82c31e1e3b8f24f03e2478b2abf37c3f83cd0b0d
Graphs with Flexible Labelings allowing Injective Realizations

Georg Grasegger, Jan Legerský, Josef Schicho

We consider realizations of a graph in the plane such that the distances between adjacent vertices satisfy the constraints given by an edge labeling. If there are infinitely many such realizations, counted modulo rigid motions, the labeling is called flexible. The existence of a flexible labeling, possibly non-generic, has been characterized combinatorially by the existence of a so called NAC-coloring. Nevertheless, the corresponding realizations are often non-injective. In this paper, we focus on flexible labelings with infinitely many injective realizations. We provide a necessary combinatorial condition on the existence of such a labeling based also on NAC-colorings of the graph. By introducing new tools for the construction of such labelings, we show that the necessary condition is also sufficient up to 8 vertices, but this is not true in general for more vertices.

Introduction

A widely studied question in Rigidity Theory is the number of realizations of a graph in R^2 such that the distances of adjacent vertices are equal to a given labeling of edges by positive real numbers. Such a labeling is called flexible if the number of realizations, counted modulo rigid transformations, is infinite. Otherwise, the labeling is called rigid. We call a graph movable if there is a flexible labeling with infinitely many injective realizations, modulo rigid transformations. In other words, we disallow realizations that identify two vertices; we do not care if edges intersect or even if edges overlap in a line segment. One can model such a movable graph as a planar linkage, where the vertices are rotational joints and the edges correspond to links of the length given by the labeling.
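The planar linkage point of view can be made concrete on the smallest example, a four-bar linkage, i.e. the 4-cycle: its compatible realizations form a one-parameter family. The sketch below is only an illustration (the function names are ours, not from the paper); it pins the edge 12 and computes one realization per angle of the bar 14 by intersecting two circles.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def four_bar_realization(lam, theta):
    """One realization of the 4-cycle 1-2-3-4 compatible with the labeling
    lam = (l12, l23, l34, l14); theta is the angle of the bar 1-4.
    Assumes the two circles below actually intersect (true for generic
    lengths and a suitable range of theta)."""
    l12, l23, l34, l14 = lam
    p1 = (0.0, 0.0)          # vertex 1 pinned at the origin
    p2 = (l12, 0.0)          # vertex 2 pinned on the x-axis
    p4 = (l14 * math.cos(theta), l14 * math.sin(theta))
    # vertex 3 = intersection of the circle of radius l23 around p2
    # with the circle of radius l34 around p4
    dx, dy = p4[0] - p2[0], p4[1] - p2[1]
    d = math.hypot(dx, dy)
    a = (d * d + l23 * l23 - l34 * l34) / (2 * d)
    h = math.sqrt(l23 * l23 - a * a)
    mx, my = p2[0] + a * dx / d, p2[1] + a * dy / d
    p3 = (mx - h * dy / d, my + h * dx / d)
    return p1, p2, p3, p4
```

Varying theta traces infinitely many pairwise non-congruent realizations with the prescribed edge lengths, which is exactly what makes such a labeling flexible.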
A result of Pollaczek-Geiringer [10], rediscovered by Laman [6], shows that a generic realization of a graph defines a rigid labeling if and only if the graph contains a Laman subgraph with the same set of vertices. A graph G = (V G , E G ) is called Laman if |E G |= 2|V G |−3, and |E H |≤ 2|V H |−3 for all subgraphs H of G. Hence, if a graph is not spanned by a Laman graph, then a generic labeling is flexible, i.e., the graph is movable. The study of movable overconstrained graphs has a long history. Two ways of making the bipartite Laman graph K 3,3 movable were given by Dixon more than one hundred years ago [4,11,14]. The first one works for any bipartite graph, placing the vertices of one part on the x-axis and of the other on the y-axis. The second construction applies to K 4,4 and hence also to K 3,3 . Walter and Husty proved that these two give all flexible labelings of K 3,3 with injective realizations [12]. Other constructions are Burmester's focal point mechanisms [1], a graph with 9 vertices and 16 edges, and two constructions by Wunderlich [13,15] for bipartite graphs based on geometric theorems. The main question in this paper is the following: is a given graph movable? In [5], we already provide a combinatorial characterization of graphs with a flexible labeling: there is a flexible labeling if and only if the graph has a so called NAC-coloring. A NAC-coloring is a coloring of edges by two colors such that in every cycle, either all edges have the same color or there are at least two edges of each color. Many Laman graphs indeed have a NAC-coloring, but the corresponding realizations are in general not injective, i.e., in order to be flexible, some non-adjacent vertices coincide. Here, we are more restrictive -infinitely many realizations of a movable graph must be injective. We give a necessary combinatorial condition on a graph being movable, based on the concept of NAC-colorings. 
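For the small graphs considered in this paper, the Laman counts can be verified directly from the definition by enumerating vertex subsets. The following brute-force sketch (the function name is ours) is exponential in the number of vertices and is meant only as an executable restatement of the two counting conditions.

```python
from itertools import combinations

def is_laman(vertices, edges):
    """Check |E_G| = 2|V_G| - 3 and |E_H| <= 2|V_H| - 3 for every subgraph H."""
    if len(edges) != 2 * len(vertices) - 3:
        return False
    for k in range(2, len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            # count edges of the subgraph induced by the subset
            if sum(u in s and v in s for u, v in edges) > 2 * k - 3:
                return False
    return True
```

For example, the triangle and K_4 minus an edge pass the test, while the 4-cycle already fails the global edge count.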
The idea is that edges can be added to a graph if their endpoints are connected by a path that is unicolor in every NAC-coloring, without affecting whether the graph is movable. If the augmented graph does not have any NAC-coloring, it cannot be movable as it has no flexible labeling. On the other hand, we provide constructions making some graphs movable. They are based on NAC-colorings or on combining movable subgraphs. In combination with the necessary condition, we give a complete list of movable graphs up to 8 vertices. Animations with the movable graphs can be found in [7]. Figure 1 provides some examples illustrating the results: the left graph has no NAC-coloring, hence, it has no flexible labeling. The graph in the middle has a NAC-coloring, namely, it has a flexible labeling, but it is not movable since it does not satisfy the necessary condition based on augmenting by edges whose endpoints are connected by a unicolor path. In other words, all motions require some vertices to coincide. The third graph is movable using one of our constructions. The structure of the paper is the following. In Section 1, we specify the system of equations describing the problem and recall the definition of NAC-coloring and some previous results. A few technical lemmas about NAC-colorings are also proven. In Section 2, we prove the necessary combinatorial condition on being movable. We list all graphs spanned by a Laman graph up to 8 vertices that satisfy it. All these graphs are shown to be movable in Section 3. Moreover, an example showing that the necessary condition is not sufficient is also presented.

Figure 1: The left graph has no flexible labeling, the middle one has a flexible labeling but it is not movable, and the right one is movable.

Preliminaries

In the whole paper, all graphs are assumed to be connected and containing at least one edge. We denote the set of vertices of a graph G by V_G and the set of edges by E_G.
In this section, we recall the definition of NAC-coloring and flexible labeling of a graph. Next, we introduce the notions of proper flexible labeling and movable graph by the requirement of injective realizations. We define an algebraic motion of a graph with a flexible labeling and assign a certain set of active NAC-colorings to this motion. These active NAC-colorings come from the proof of the theorem characterizing the existence of a flexible labeling. The active NAC-colorings are illustrated on the motion of a deltoid. The section concludes with three lemmas, which guarantee that the introduced notions are independent of certain choices of edges and of swapping colors. Definition 1.1. Let G be a graph and let δ: E_G → {blue, red} be a coloring of its edges. (i) A path, resp. cycle, in G is called unicolor, if all its edges have the same color. (ii) A cycle in G is an almost red cycle, resp. almost blue cycle, if exactly one of its edges is blue, resp. red. A coloring δ is called a NAC-coloring, if it is surjective and there are no almost blue cycles or almost red cycles in G. In other words, every cycle is either unicolor or contains at least 2 edges in each color. The set of all NAC-colorings of G is denoted by NAC_G. Now, the abbreviation NAC can be explained - it stands for "No Almost Cycle". Clearly, if we swap colors of a NAC-coloring of G, we obtain another NAC-coloring of G. Definition 1.2. Let G be a graph. If δ, δ̄ ∈ NAC_G are such that δ(e) = blue ⇐⇒ δ̄(e) = red for all e ∈ E_G, then they are called conjugated. The following definition describes the constraints on a realization in the plane given by a labeling of edges. The realizations must be counted properly, i.e., modulo rigid motions, in order to say whether the labeling is flexible. Definition 1.3. Let G be a graph such that |E_G| ≥ 1 and let λ: E_G → R_+ be an edge labeling of G. A map ρ = (ρ_x, ρ_y): V_G → R^2 is a realization of G compatible with λ iff ‖ρ(u) − ρ(v)‖ = λ(uv) for all edges uv ∈ E_G.
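The condition of Definition 1.1 can be tested without enumerating cycles: an almost red cycle exists if and only if some blue edge has its endpoints connected in the red subgraph, and symmetrically for almost blue cycles. The sketch below (function names are ours, not from the paper) implements this test together with a brute-force count of NAC-colorings; for instance, the 4-cycle has six NAC-colorings (three conjugated pairs), while a triangle has none.

```python
from itertools import product

def connected(u, v, edges):
    """Is v reachable from u using only the given edges?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for x in adj.get(w, []):
            if x not in seen:
                seen.add(x)
                stack.append(x)
    return False

def is_nac(edges, col):
    """col: dict edge -> 'red'/'blue'.  NAC iff surjective and
    no blue edge closes a red path and no red edge closes a blue path."""
    red = [e for e in edges if col[e] == 'red']
    blue = [e for e in edges if col[e] == 'blue']
    return (bool(red) and bool(blue)
            and not any(connected(u, v, red) for u, v in blue)
            and not any(connected(u, v, blue) for u, v in red))

def count_nac(edges):
    """Number of NAC-colorings by brute force over all 2-colorings."""
    return sum(is_nac(edges, dict(zip(edges, c)))
               for c in product(['red', 'blue'], repeat=len(edges)))
```

The brute-force count is exponential in the number of edges and only intended for the small graphs appearing in this paper.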
We say that two realizations ρ_1 and ρ_2 are congruent iff there exists a direct Euclidean isometry σ of R^2 such that ρ_1 = σ ∘ ρ_2. The labeling λ is called flexible if the number of realizations of G compatible with λ up to the congruence is infinite. We remark that if a labeling has a positive finite number of realizations, then it is called rigid. The constraints given by edge lengths λ_uv = λ(uv) can be modeled by the following system of equations for coordinates (x_u, y_u) for u ∈ V_G. In order to remove rigid motions, the position of an edge ūv̄ is fixed:

x_ū = 0, y_ū = 0, x_v̄ = λ_ūv̄, y_v̄ = 0,
(x_u − x_v)^2 + (y_u − y_v)^2 = λ_uv^2 for all uv ∈ E_G \ {ūv̄}.     (1)

The labeling λ is flexible if and only if there are infinitely many solutions of the system. So far, the realizations have not been required to be injective. Namely, it could happen that two non-adjacent vertices were mapped to the same point in R^2. Sections 2 and 3 are focused on the graphs that have a labeling with infinitely many injective compatible realizations. This corresponds to adding the inequalities (x_u − x_v)^2 + (y_u − y_v)^2 ≠ 0 for all u, v ∈ V_G such that u ≠ v and uv ∉ E_G. Definition 1.4. A flexible labeling λ of a graph G is called proper, if there exist infinitely many injective realizations ρ of G compatible with λ, modulo rigid transformations. We say that a graph is movable if it has a proper flexible labeling. We remark that graphs that are not movable are called absolutely 2-rigid in [8]. Considering irreducible components of the solution set of the system (1) allows us to use the notion of a function field, whose valuations give rise to a relation with NAC-colorings, as we will see later. Definition 1.5. Let λ be a flexible labeling of G. Let R(G, λ) ⊆ (R^2)^{V_G} be the set of all realizations of G compatible with λ. We say that C is an algebraic motion of (G, λ) w.r.t.
an edge ūv̄, if it is an irreducible algebraic curve in R(G, λ), such that ρ(ū) = (0, 0) and ρ(v̄) = (λ(ūv̄), 0) for all ρ ∈ C. Since in many situations the role of ūv̄ does not matter, we also simply say that C is an algebraic motion of (G, λ). We call F(C) the complex function field of C. The fact that the choice of the fixed edge does not change the function field is proven at the end of this section. The functions in the function field related to NAC-colorings are given by the following definition. Definition 1.6. Let λ be a flexible labeling of a graph G. Let F(C) be the complex function field of an algebraic motion C of (G, λ). For every u, v ∈ V_G such that uv ∈ E_G, we define W_{u,v}, Z_{u,v} ∈ F(C) by

W_{u,v} = (x_v − x_u) + i(y_v − y_u), Z_{u,v} = (x_v − x_u) − i(y_v − y_u).

We use W^{ūv̄}_{u,v}, resp. Z^{ūv̄}_{u,v}, if we want to specify that C is w.r.t. a fixed edge ūv̄. We remark that W_{u,v} = −W_{v,u} and Z_{u,v} = −Z_{v,u}, i.e., they depend on the order of u, v. In case we are computing valuations, where the sign does not matter, we might also write W_e for an edge e. Using (1), we have W_{ū,v̄} = λ_{ūv̄}, Z_{ū,v̄} = λ_{ūv̄}, and W_{u,v} Z_{u,v} = λ_{uv}^2 for all uv ∈ E_G. By the definition of W_{u,v} and Z_{u,v}, the equations

Σ_{i=0}^{n} W_{u_i,u_{i+1}} = 0, Σ_{i=0}^{n} Z_{u_i,u_{i+1}} = 0

hold for every cycle (u_0, u_1, . . . , u_n, u_{n+1} = u_0) in G. Recall that the valuation of a product is the sum of the valuations and the valuation of a sum is at least the minimum of the valuations of the summands. A consequence is that if a sum of functions equals zero, then there are at least two summands with the minimal valuation. These facts, together with Chevalley's theorem (see [3]), are the main ingredients for one implication of the following theorem that was proven in [5]. Theorem 1.7. A connected graph with at least one edge has a flexible labeling if and only if it has a NAC-coloring. Actually, the following statement can be deduced from the proof of Theorem 1.7 with only a minor modification - replacing 0 by α. This theorem explains how the functions W_e and Z_e yield a NAC-coloring. Theorem 1.8. Let λ be a flexible labeling of a graph G.
Let F(C) be the complex function field of an algebraic motion C of (G, λ). If α ∈ Q and ν is a valuation of F(C) such that there exist edges e, e′ in E_G with ν(W_e) = α and ν(W_{e′}) > α, then δ: E_G → {red, blue} given by

δ(uv) = red ⇐⇒ ν(W_{u,v}) > α, δ(uv) = blue ⇐⇒ ν(W_{u,v}) ≤ α     (2)

is a NAC-coloring. This motivates the assignment of some NAC-colorings to an algebraic motion. Definition 1.9. Let C be an algebraic motion of (G, λ). A NAC-coloring δ ∈ NAC_G is called active w.r.t. C if there exists a valuation ν of F(C) and α ∈ Q such that (2) holds. The set of all active NAC-colorings of G w.r.t. C is denoted by NAC_G(C). For illustration, we compute the active NAC-colorings of the non-degenerated algebraic motion of a deltoid. Example 1.10. Let Q be the 4-cycle with vertices 1, 2, 3, 4 and let λ be the labeling given by λ(12) = λ(14) = 1 and λ(23) = λ(34) = 3. There is an algebraic motion C of (Q, λ) that can be parametrized by

ρ_t(1) = (0, 0), ρ_t(2) = (1, 0),
ρ_t(3) = ( 4(t^2 − 2)/(t^2 + 4), 12t/(t^2 + 4) ),
ρ_t(4) = ( (t^4 − 13t^2 + 4)/(t^4 + 5t^2 + 4), 6t(t^2 − 2)/(t^4 + 5t^2 + 4) )

for t ∈ R. Now, we have

W_{1,2} = 1, W_{2,3} = 3(t + 2i)/(t − 2i), W_{3,4} = −3(t + i)/(t − i), W_{4,1} = −(t + i)(t + 2i)/((t − i)(t − 2i)).

Hence, the only non-trivial valuations correspond to the polynomials t ± i and t ± 2i. They give two pairs of conjugated NAC-colorings by taking a suitable threshold α ∈ {−1, 0}, see Table 1 and Figure 2. We remark that |NAC_Q(C)| = 4, whereas |NAC_Q| = 6. The two non-active NAC-colorings correspond to the degenerated motion of (Q, λ), where the vertices 2 and 4 coincide. We conclude this section by three technical lemmas, which show that the active NAC-colorings do not depend on the choice of the fixed edge and that conjugated NAC-colorings are either both active or both non-active. The first lemma says that the function field does not depend on the choice of the edge. Lemma 1.11. Let λ be a flexible labeling of G. Let C_{ūv̄} be an algebraic motion of (G, λ) w.r.t. an edge ūv̄.
Table 1: Valuations of the W_e and the induced active NAC-colorings of the deltoid.

edge   λ   ν_{t+i}  δ_1    ν_{t−i}  δ̄_1    ν_{t+2i}  δ_2    ν_{t−2i}  δ̄_2
{1,2}  1    0  blue    0  red     0  blue    0  red
{2,3}  3    0  blue    0  red     1  red    −1  blue
{3,4}  3    1  red    −1  blue    0  blue    0  red
{1,4}  1    1  red    −1  blue    1  red    −1  blue

If u′v′ ∈ E_G and φ_{u′,v′}: R(G, λ) → R(G, λ) is given by

(x_w, y_w)_{w ∈ V_G} ↦ ( ((x_w − x_{u′})(x_{v′} − x_{u′}) + (y_w − y_{u′})(y_{v′} − y_{u′}))/λ(u′v′), ((y_w − y_{u′})(x_{v′} − x_{u′}) − (x_w − x_{u′})(y_{v′} − y_{u′}))/λ(u′v′) )_{w ∈ V_G},

then C_{u′v′} = φ_{u′,v′}(C_{ūv̄}) is an algebraic motion of (G, λ) w.r.t. the edge u′v′ and φ_{u′,v′}: C_{ūv̄} → C_{u′v′} is birational. Proof. By direct computation, one can check that u′v′ is indeed fixed in C_{u′v′} and that all realizations in C_{u′v′} are compatible with λ. The rational inverse of φ_{u′,v′} is φ_{ū,v̄}. The following lemma shows that active NAC-colorings are independent of the choice of the fixed edge. Lemma 1.12. Let G be a graph with a flexible labeling λ. If C_{u′v′} and C_{ūv̄} are as in Lemma 1.11, then NAC_G(C_{u′v′}) = NAC_G(C_{ūv̄}). Proof. Let δ ∈ NAC_G(C_{ūv̄}), i.e., there exists a valuation ν̄ of F(C_{ūv̄}) and α ∈ Q such that δ(uv) = red ⇐⇒ ν̄(W^{ūv̄}_{u,v}) > α for all uv ∈ E_G. Let φ_{u′,v′}: C_{ūv̄} → C_{u′v′} be the birational map from Lemma 1.11. Hence, there is a function field isomorphism φ*: F(C_{u′v′}) → F(C_{ūv̄}) given by f ↦ f ∘ φ_{u′,v′}. We define a valuation ν′ of F(C_{u′v′}) by ν′(f) := ν̄(φ*(f)). If W^{u′v′}_{u,v} ∈ F(C_{u′v′}), then

W^{u′v′}_{u,v} ∘ φ_{u′,v′} = ( (x_v − x_u)(x_{v′} − x_{u′}) + (y_v − y_u)(y_{v′} − y_{u′}) )/λ(u′v′) + i( (y_v − y_u)(x_{v′} − x_{u′}) − (x_v − x_u)(y_{v′} − y_{u′}) )/λ(u′v′) = ( (x_v − x_u) + i(y_v − y_u) ) · ( (x_{v′} − x_{u′}) − i(y_{v′} − y_{u′}) )/λ(u′v′).

Therefore, ν′(W^{u′v′}_{u,v}) = ν̄(W^{ūv̄}_{u,v}) + ν̄(Z^{ūv̄}_{u′,v′}). This concludes the proof, since δ(uv) = red ⇐⇒ ν̄(W^{ūv̄}_{u,v}) > α ⇐⇒ ν′(W^{u′v′}_{u,v}) > α + ν̄(Z^{ūv̄}_{u′,v′}). Finally, we show that the set of active NAC-colorings is closed under conjugation. Lemma 1.13. Let λ be a flexible labeling of a graph G. Let C be an algebraic motion of (G, λ). If δ, δ̄ ∈ NAC_G are conjugated, then δ ∈ NAC_G(C) if and only if δ̄ ∈ NAC_G(C). Proof.
Let δ be an active NAC-coloring of G w.r.t. C given by a valuation ν of F(C) and a threshold α. Since the algebraic motion C is a real algebraic curve, it has a complex conjugation defined on its complex points. This induces another valuation ν̄ of F(C) given by ν̄(f) := ν(f̄) for any f ∈ F(C), where f̄(ρ) is the complex conjugate of f(ρ̄) for every ρ ∈ C. If β = max{ν(Z_e): ν(Z_e) < −α, e ∈ E_G}, then ν̄ and β satisfy (2) for δ̄, since for every edge e ∈ E_G:

δ(e) = red ⇐⇒ α < ν(W_e) ⇐⇒ −α > ν(Z_e) ⇐⇒ β ≥ ν(Z_e) ⇐⇒ β ≥ ν̄(W_e) ⇐⇒ δ̄(e) = blue.

Namely, δ̄ is in NAC_G(C).

Combinatorial tools

From now on, we are interested only in proper flexible labelings, namely, in the question whether a graph is movable. One of our main tools is introduced in this section: an edge uv can be added to a graph without changing its algebraic motion, if the vertices u and v are connected by a path that is unicolor in every active NAC-coloring. This leads to the notion of constant distance closure - augmenting the graph by edges with the property above, taking into account all NAC-colorings of the graph instead of the active ones. Hence, we obtain a necessary combinatorial condition on movability: a graph can be movable only if its constant distance closure has a NAC-coloring. Based on this necessary condition, we show that so called tree-decomposable graphs are not movable. At the end of the section, we list all maximal constant distance closures of graphs up to 8 vertices having a spanning Laman graph that satisfy the necessary condition. The following statement guarantees that adding an edge uv with the mentioned property preserves an algebraic motion, since the distance between u and v is constant during the motion. Lemma 2.1. Let G be a graph, λ a flexible labeling of G and u, v ∈ V_G, where uv ∉ E_G. Let C be an algebraic motion of (G, λ) such that ρ(u) ≠ ρ(v) for all ρ ∈ C.
If there exists a uv-path P in G such that P is unicolor for all δ ∈ NAC_G(C), then λ has a unique extension λ′ to G′ = (V_G, E_G ∪ {uv}) such that C is an algebraic motion of (G′, λ′) and NAC_{G′}(C) = {δ ∈ NAC_{G′}: δ|_{E_G} ∈ NAC_G(C)}. Proof. Let S = {‖ρ(u) − ρ(v)‖ ∈ R_+: ρ ∈ C}. We first show that S is finite. By Lemma 1.12, we can assume that the first edge e_1 of the path P is the fixed one in C. If there is any e_k in P such that W_{e_k} is transcendental, then there is a valuation ν such that ν(W_{e_k}) > 0 by Chevalley's Theorem (see [3]). Hence, an active NAC-coloring can be constructed by Theorem 1.8 with ν(W_{e_1}) = 0, which contradicts that P is unicolor. Therefore, W_{e_k} is algebraic for all e_k in P. Then there are only finitely many values for W_{e_k}. These values correspond to the possible angles of the line given by the realization of the vertices of e_k. Hence, there can only be finitely many elements in S. Indeed, we can show that |S| = 1. Assume S = {s_1, . . . , s_ℓ}; then C = ∪_{i ∈ {1,...,ℓ}} {ρ ∈ C: ‖ρ(u) − ρ(v)‖^2 = s_i^2}. Since C is irreducible, ℓ = 1. We define λ′ by λ′|_{E_G} = λ and λ′(uv) = s_1. The restriction of any active NAC-coloring δ ∈ NAC_{G′}(C) to E_G is clearly in NAC_G(C). On the other hand, every active NAC-coloring of G is extended uniquely to an active NAC-coloring of G′, since the path P is unicolor. Notice that it is sufficient to check the assumption only for non-conjugated active NAC-colorings due to Lemma 1.13. Removal of an edge also preserves movability, since edge lengths are assumed to be positive. Together with the fact that NAC_G(C) ⊂ NAC_G for any algebraic motion C, this gives the following corollary. Corollary 2.2. Let G be a graph and u, v ∈ V_G be such that uv ∉ E_G. If there exists a uv-path P in G such that P is unicolor for all δ ∈ NAC_G, then G is movable if and only if G′ = (V_G, E_G ∪ {uv}) is movable. Proof. Let λ′ be a proper flexible labeling of G′. Clearly, λ = λ′|_{E_G} is a flexible labeling of G.
A realization ρ of G′ compatible with λ′ maps u and v to distinct points, since ‖ρ(u) − ρ(v)‖ = λ′(uv) ≠ 0. Clearly, ρ is also a realization of G and it is compatible with λ. The other direction follows from Lemma 2.1. Let us point out that there is no specific algebraic motion assumed in the previous corollary. Hence, it can be used for proving that a graph is not movable in a purely combinatorial way. This is demonstrated by the following example.
By Theorem 1.7, it is equivalent to say that if G is movable, then CDC(G) has a NAC-coloring. We can reformulate this necessary condition using the next two lemmas. Lemma 2.6. Let G be a graph. If H is a subgraph of G, then the constant distance closure CDC(H) is a subgraph of the constant distance closure CDC(G). Proof. If we show that U(H) ⊂ U(G), then the claim follows by induction. Let a nonedge uv be in U(H), namely, there exists a path P from u to v such that it is unicolor for all NAC-colorings in NAC H . But then uv is also in U(G), since the path P is unicolor also for all δ ∈ NAC G , because P is a subgraph of H and δ| E H ∈ NAC H . Now, we can show that having a NAC-coloring and being non-complete is the same for a constant distance closure. Lemma 2.7. Let G be a graph. The constant distance closure CDC(G) is the complete graph if and only if there exists a spanning subgraph of CDC(G) that has no NACcoloring. Proof. If CDC(G) is the complete graph, then it has clearly no NAC-coloring. For the opposite implication, assume that there is a spanning subgraph H of CDC(G) that has no NAC-coloring. Trivially, U(H) consists of all nonedges of H. Hence, the constant distance closure of H is the complete graph. By Lemma 2.6, CDC(G) is also complete. The previous statement clarifies that the necessary condition obtained from Theorem 2.5 can be expressed as follows by relaxing the requirement of a flexible labeling being proper. Corollary 2.8. Let G be a graph. If the constant distance closure CDC(G) is the complete graph, then G is not movable. Let us use this necessary condition to prove that a certain class of Laman graphs is not movable. We would like to thank Meera Sitharam for pointing us to this class. Definition 2.9. 
A graph G is tree-decomposable if it is a single edge, or there are three tree-decomposable subgraphs H_1, H_2 and H_3 of G such that V_G = V_{H_1} ∪ V_{H_2} ∪ V_{H_3}, E_G = E_{H_1} ∪ E_{H_2} ∪ E_{H_3} and V_{H_1} ∩ V_{H_2} = {u}, V_{H_2} ∩ V_{H_3} = {v} and V_{H_1} ∩ V_{H_3} = {w} for three distinct vertices u, v, w ∈ V_G. One could prove geometrically that tree-decomposable graphs are not movable, but the notion of constant distance closure allows us to do it in a combinatorial way. Theorem 2.10. If a graph is tree-decomposable, then it is not movable. Proof. Let G be a tree-decomposable graph. It is sufficient to show that the constant distance closure CDC(G) is the complete graph and use Corollary 2.8. We proceed by induction on the tree-decomposable construction. Clearly, the constant distance closure of a single edge is the edge itself, which is K_2. Let H_1, H_2 and H_3 be tree-decomposable subgraphs of G as in Definition 2.9, with the pairwise common vertices u, v and w. By Lemma 2.6 and the induction assumption, the subgraphs H_1, H_2 and H_3 of CDC(G) induced by V_{H_1}, V_{H_2} and V_{H_3}, respectively, are complete. Thus, there is no NAC-coloring of H = (V_{H_1} ∪ V_{H_2} ∪ V_{H_3}, E_{H_1} ∪ E_{H_2} ∪ E_{H_3}), since all edges in a complete graph must have the same color and H_1, H_2 and H_3 each contain an edge of the triangle induced by u, v, w. By Lemma 2.7, CDC(G) is complete, since H is its spanning subgraph. We remark that the class of so-called H1 graphs is a subset of the tree-decomposable graphs; hence, they are not movable. A graph is called H1 if it can be constructed from a single edge by a sequence of Henneberg I steps: each step adds a new vertex by linking it to two existing ones. The next statement recalls the known fact that Henneberg I steps do not affect movability.
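A minimal sketch (ours, not from the paper) of the Henneberg I construction just mentioned: starting from a single edge, every step adds a new vertex linked to two existing ones, so every H1 graph on n vertices has exactly 2n − 3 edges, i.e., it satisfies the Laman count.

```python
import random

def henneberg1(n, seed=0):
    """Build an H1 graph on vertices 0..n-1: start from the edge {0, 1} and
    add each new vertex by joining it to two distinct existing vertices."""
    rng = random.Random(seed)
    edges = {(0, 1)}
    for v in range(2, n):
        u, w = rng.sample(range(v), 2)   # two distinct existing vertices
        edges.update({(u, v), (w, v)})
    return edges

# one initial edge plus two edges per added vertex: |E| = 1 + 2(n - 2) = 2n - 3
assert len(henneberg1(8)) == 2 * 8 - 3
```

By Theorem 2.10 none of the graphs produced this way is movable, even though they all span Laman graphs.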
Since the previous lemma shows that the question of movability of a graph with vertices of degree two reduces to a smaller graph with all degrees different from two, we can provide a list of "interesting" graphs regarding movability. By "interesting" we mean, besides all vertices having degree at least three, also that they are spanned by a Laman graph. Recall that graphs that are not spanned by a Laman graph are clearly movable, since a generic labeling is proper flexible. We are interested in graphs that can be movable only due to a non-generic labeling. So we conclude this section with the list of the "interesting" constant distance closures up to 8 vertices. Theorem 2.12. Let G be a graph with at most 8 vertices such that it has a spanning Laman subgraph and CDC(G) has no vertex of degree two. If G satisfies the necessary condition of movability, i.e., the constant distance closure CDC(G) is not complete, then CDC(G) is one of the graphs K_{3,3}, K_{3,4}, K_{3,5}, K_{4,4}, L_1, . . . , L_6, Q_1, . . . , Q_6, S_1, . . . , S_5, or a spanning subgraph thereof, where the non-bipartite graphs are given in Figure 4. Proof. Using the list of Laman graphs [2], one can compute the constant distance closures of all graphs spanned by a Laman graph with at most 8 vertices. The computation shows that each constant distance closure is either a complete graph, or it has a vertex of degree two, or it is a spanning subgraph (or the full graph) of one of K_{3,3}, K_{3,4}, K_{3,5}, K_{4,4} or the graphs in Figure 4.

Construction of proper flexible labelings

The goal of this section is to prove that all graphs listed in Theorem 2.12 are actually movable. By this we also show that the necessary condition of movability (the constant distance closure being non-complete) is sufficient for graphs up to eight vertices. Four general ways of constructing a proper flexible labeling are presented.
The first two are known: the Dixon I construction for bipartite graphs [4] and the construction from a single NAC-coloring presented in [5]. We describe a new construction that produces an algebraic motion with two active NAC-colorings based on a certain injective embedding of vertices in R³. The fourth method assumes two movable subgraphs whose union spans the whole graph and whose motions coincide on the intersection. We provide a proper flexible labeling for S_5 ad hoc, since none of the four methods applies. Animations for the movable graphs can be found in [7]. In the conclusion, we give an example showing that the necessary condition is not sufficient for graphs with an arbitrary number of vertices. In order to be self-contained, we recall Dixon's construction. Lemma 3.1. Every bipartite graph with at least three vertices is movable. Proof. Let (X, Y) be a bipartite partition of a graph G. A realization with the vertices of one partition set on the x-axis and the vertices of the other on the y-axis induces a proper flexible labeling by Dixon's construction [4, 11]:

ρ_t(v) = (√(x_v² − t²), 0) if v ∈ X , and ρ_t(v) = (0, √(y_v² + t²)) if v ∈ Y ,

where x_v, y_v are arbitrary nonzero real numbers. Let λ(uv) := √(x_u² + y_v²) for all u ∈ X and v ∈ Y. By the Pythagorean Theorem, ρ_t is compatible with λ for every sufficiently small t. The following method from [5] was used in the proof of Theorem 1.7, but without the assumption guaranteeing injectivity of realizations. Lemma 3.2. Let δ be a NAC-coloring of a graph G. Let R_1, . . . , R_m, resp. B_1, . . . , B_n, be the sets of vertices of connected components of the graph obtained from G by keeping only red, resp. blue, edges. If |R_i ∩ B_j| ≤ 1 for all i, j, then G is movable. Proof. For α ∈ [0, 2π), we define a realization ρ_α : V_G → R² by

ρ_α(v) = i · (1, 0) + j · (cos α, sin α) ,

where i and j are such that v ∈ R_i ∩ B_j.
Now, the realization ρ_α is compatible with the labeling λ : E_G → R₊ given by λ(uv) = ‖ρ_{π/2}(u) − ρ_{π/2}(v)‖, for every α ∈ [0, 2π). The induced flexible labeling λ is proper, since all realizations ρ_α, α ∉ {0, π}, are injective by the assumption |R_i ∩ B_j| ≤ 1. The construction yields proper flexible labelings for L_1, . . . , L_6, since there are NAC-colorings satisfying the assumption, see Figure 5. The displayed proper flexible labelings can be obtained by the more general "zigzag" construction from [5]. Now, we present a construction assuming a special injective embedding in R³. The lemma also gives a hint how the existence of such an embedding can be checked (and an embedding found), if we know all NAC-colorings of the given graph. Proof. Let ρ_t : {1, 2, 3, 4} → R² be a parametrization of an algebraic motion of the 4-cycle with a labeling λ. We define three functions from R to R² by f_1(t) = ρ_t(2) − ρ_t(1), f_2(t) = ρ_t(3) − ρ_t(2), f_3(t) = ρ_t(4) − ρ_t(3). The norms ‖f_1‖, ‖f_2‖, ‖f_3‖ and ‖−f_1(t) − f_2(t) − f_3(t)‖ are the corresponding values of λ, i.e., they are independent of t. For each t ∈ R, we define

ρ_t : V → R² , u ↦ ω_1(u) f_1(t) + ω_2(u) f_2(t) + ω_3(u) f_3(t) ,

where ω(u) = (ω_1(u), ω_2(u), ω_3(u)). For any edge uv, ρ_t(u) − ρ_t(v) is a multiple of f_1, f_2, f_3 or −f_1(t) − f_2(t) − f_3(t) by assumption. Thus, the distance ‖ρ_t(u) − ρ_t(v)‖ is independent of t and different from zero. Hence, the set of all ρ_t is an algebraic motion; this proves the first statement. In order to construct an algebraic motion with two active NAC-colorings, we take ρ_t to be the parametrization of the deltoid in Example 1.10. For any edge uv, the function W_{u,v} is just a scalar multiple of one of the functions in the example. Hence, there are only two active NAC-colorings modulo conjugation, see Table 1.
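A quick numerical check of the rotation construction of Lemma 3.2 (our own sketch, not from the paper), on a 4-cycle with the NAC-coloring red = {12, 34}, blue = {23, 14}; the component indices below are worked out by hand for this example. Each vertex v ∈ R_i ∩ B_j is sent to i·(1, 0) + j·(cos α, sin α); all edge lengths stay constant along the motion, and the realizations remain injective for α ∉ {0, π}.

```python
import math
from itertools import combinations

# C4 with NAC-coloring red = {12, 34}, blue = {23, 14}
red_comp  = {1: 1, 2: 1, 3: 2, 4: 2}   # index i of the red component R_i
blue_comp = {1: 2, 2: 1, 3: 1, 4: 2}   # index j of the blue component B_j
edges = [(1, 2), (2, 3), (3, 4), (1, 4)]

def rho(alpha):
    # rho_alpha(v) = i*(1, 0) + j*(cos(alpha), sin(alpha)) for v in R_i ∩ B_j
    return {v: (red_comp[v] + blue_comp[v] * math.cos(alpha),
                blue_comp[v] * math.sin(alpha)) for v in red_comp}

# the labeling is read off at alpha = pi/2, as in the lemma
lam = {e: math.dist(rho(math.pi / 2)[e[0]], rho(math.pi / 2)[e[1]])
       for e in edges}

for alpha in (0.3, 1.1, 2.0, 3.0):
    p = rho(alpha)
    # edge lengths are independent of alpha ...
    assert all(abs(math.dist(p[u], p[v]) - L) < 1e-9
               for (u, v), L in lam.items())
    # ... and the realization is injective away from alpha in {0, pi}
    assert all(math.dist(p[u], p[v]) > 1e-9 for u, v in combinations(p, 2))
```

Red edges change only the j-coordinate and blue edges only the i-coordinate, which is why both length families are rigid while the angle α flexes.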
Clearly, we can also construct an algebraic motion with three non-conjugated active NAC-colorings by taking ρ_t from an algebraic motion of the 4-cycle with general edge lengths, which has three non-conjugated active NAC-colorings. This also shows that if δ_1 and δ_2 are the two active NAC-colorings from the second statement of the lemma, then the coloring δ_3, given by δ_3(e) = blue if and only if δ_1(e) = δ_2(e), is also a NAC-coloring of G. This follows from the fact that there are only three non-conjugated NAC-colorings of a 4-cycle with one chosen edge being always blue, and they are related as given above. Lemma 3.4 allows us to compute algebraic motions with exactly two active NAC-colorings: for any pair of NAC-colorings, try to find an embedding ω : V → R³ with edge directions (1, 0, 0), (0, 1, 0), (0, 0, 1) or (−1, −1, −1) depending on the colors in these two colorings. This leads to a system of linear equations. If it has a non-trivial solution, check whether a general solution is injective. Figure 6: The graph Q_1 with a pair of NAC-colorings giving an embedding in R³. Example 3.5. In order to find an embedding ω : V → R³ for the graph Q_1 using the NAC-colorings in Figure 6, such that every edge colored blue/blue is parallel to (1, 0, 0), every edge colored blue/red is parallel to (0, 1, 0), every edge colored red/blue is parallel to (0, 0, 1), and every edge colored red/red is parallel to (−1, −1, −1), we place ω(1) at the origin and introduce variables x_i, y_i, z_i for p_i := ω(i), i = 2, . . . , 7. For each edge, we have two linear equations. We obtain the system

0 = y_7 = z_7 = y_7 − y_2 = z_7 − z_2 = y_2 = z_2 = y_3 − y_5 = z_3 − z_5 = y_4 − y_6 ,
0 = z_4 − z_6 = x_5 = z_5 = x_2 − x_3 = z_2 − z_3 = x_3 − x_4 = y_3 − y_4 = x_5 − x_6 ,
0 = y_5 − y_6 = x_7 − x_6 − y_7 + y_6 = x_7 − x_6 − z_7 + z_6 = x_4 − y_4 = x_4 − z_4 ,

with the general solution parametrized by t ∈ R: (p_1, . . .
, p_7) = ((0, 0, 0), (t, 0, 0), (t, t, 0), (t, t, t), (0, t, 0), (0, t, t), (−t, 0, 0)). The solution is injective for t ≠ 0. If we take t = 1 and the parametrization of the deltoid from Example 1.10, then we obtain an algebraic motion of the graph such that its projection to the 4-cycle induced by {1, 2, 3, 4} is precisely the motion of the deltoid. Any other parametrization of the 4-cycle also yields an algebraic motion of the whole graph. Figure 7 illustrates this using the deltoid and also a general quadrilateral. We remark that the triangle {1, 2, 7} is degenerate independently of the choice of parametrization of the 4-cycle. Moreover, the 4-cycles {1, 2, 3, 5}, {3, 5, 6, 4} and {1, 4, 6, 7} are always parallelograms. By applying the described procedure to all pairs of NAC-colorings for the graphs in the list, we obtain the following: Corollary 3.6. The graphs Q_1, . . . , Q_6, see Figure 4 or 8, are movable. Proof. Figure 8 shows the pairs of NAC-colorings of the graphs Q_1, . . . , Q_6 that can be used to construct an injective embedding in R³ analogously to Example 3.5. Hence, they are movable by Lemma 3.4. Figure 7: Embeddings of the graph Q_1 compatible with proper flexible labelings induced by a deltoid and a general quadrilateral (colors indicate edges with the same lengths). Note that the vertices 1, 2, 7 form a degenerate triangle. For the graphs S_1, . . . , S_4, we take advantage of the fact that they contain other graphs in the list as subgraphs. The next lemma formalizes the general construction based on movable subgraphs. Lemma 3.7. Let G be a graph. Let G_1 and G_2 be two subgraphs of G such that V_G = V_{G_1} ∪ V_{G_2}, E_G = E_{G_1} ∪ E_{G_2} and E_{G_1} ∩ E_{G_2} ≠ ∅. Let W = V_{G_1} ∩ V_{G_2}. Let λ_1 and λ_2 be proper flexible labelings of G_1 and G_2 respectively. If there are algebraic motions C_1 of (G_1, λ_1) and C_2 of (G_2, λ_2) such that: (i) the projections of C_1 and C_2 to W are the same, and (ii) for all v_1 ∈ V_{G_1} \ W and v_2 ∈ V_{G_2} \ W, the projections of C_1 to v_1 and C_2 to v_2 are different, then there exists a proper flexible labeling of G. Proof.
We define a labeling λ of G by λ|_{E_{G_1}} = λ_1 and λ|_{E_{G_2}} = λ_2. This is well-defined, since λ_1|_{E_{G_1} ∩ E_{G_2}} = λ_2|_{E_{G_1} ∩ E_{G_2}} by (i). Now, every realization in the projection of C_1 to W can be extended to a realization of G that is compatible with λ. Hence, λ is flexible. It is also proper, since all extended realizations are injective by the second assumption. Now, we identify suitable subgraphs and motions for S_1, S_2 and S_3. Movability of S_4 does not follow from the previous lemma, but it is straightforward. Proof. Figure 9 shows vertex-labelings of the graphs S_1, S_2 and S_3 that are used in the proof. The labelings given by the displayed edge lengths are actually proper flexible. Edges with the same lengths have the same color. Notice that the subgraphs G_1 and G_2 of S_1 induced by the vertices {1, . . . , 6} and {3, . . . , 8} are isomorphic to L_1 and K_{3,3} respectively. Since G_1 and G_2 satisfy the assumptions of Lemma 3.7, it is sufficient to take proper flexible labelings of G_1 and G_2, given by Lemma 3.2 and 3.1, such that the quadrilateral (3, 4, 5, 6) in both graphs moves as a non-degenerate rhombus, i.e., λ(3, 4) = λ(4, 5) = λ(5, 6) = λ(3, 6). Recall that a proper flexible labeling of K_{4,4} according to Dixon II is induced by placing the nodes of the two partition sets at the vertices of two concentric rectangles in orthogonal position. By removing two vertices, one can easily obtain a motion of K_{3,3}. The graph S_2 has a subgraph H_{Q_1} induced by the vertices {1, . . . , 7}, which is isomorphic to Q_1, and H_{K_{3,3}} induced by {1, . . . , 5, 8}, isomorphic to K_{3,3}. We consider a proper flexible labeling of the subgraph H_{K_{3,3}} with an algebraic motion by Dixon II according to Figure 9. Now, we can use the motion of the 4-cycle {1, 2, 3, 4} to construct a motion of H_{Q_1} following Example 3.5.
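The embedding-based motion of Example 3.5 is easy to check numerically. In the sketch below (our own; the edge list of Q_1 is read off from the linear system of the example, and the deltoid flex is parametrized by a circle intersection rather than by the rational parametrization of Example 1.10), every edge vector of ρ_t is ±f_1, ±f_2, ±f_3, an integer multiple thereof, or −(f_1 + f_2 + f_3), so all edge lengths are constant along the motion.

```python
import math

# omega for Q1 from Example 3.5 with t = 1; edges read off from the system
omega = {1: (0, 0, 0), 2: (1, 0, 0), 3: (1, 1, 0), 4: (1, 1, 1),
         5: (0, 1, 0), 6: (0, 1, 1), 7: (-1, 0, 0)}
edges = [(1, 2), (1, 4), (1, 5), (1, 7), (2, 3), (2, 7),
         (3, 4), (3, 5), (4, 6), (5, 6), (6, 7)]

def four_cycle(phi):
    """A flex of the deltoid lambda(12) = lambda(14) = 1,
    lambda(23) = lambda(34) = 3: vertex 1 pinned at the origin,
    edge 12 pinned, vertex 4 at angle phi on the unit circle."""
    p1, p2 = (0.0, 0.0), (1.0, 0.0)
    p4 = (math.cos(phi), math.sin(phi))
    # p3 = an intersection of the circles of radius 3 around p2 and p4
    mx, my = (p2[0] + p4[0]) / 2, (p2[1] + p4[1]) / 2
    dx, dy = p4[0] - p2[0], p4[1] - p2[1]
    d = math.hypot(dx, dy)
    h = math.sqrt(9 - (d / 2) ** 2)
    return p1, p2, (mx - h * dy / d, my + h * dx / d), p4

def realize(phi):
    p1, p2, p3, p4 = four_cycle(phi)
    f = [(p2[0] - p1[0], p2[1] - p1[1]),
         (p3[0] - p2[0], p3[1] - p2[1]),
         (p4[0] - p3[0], p4[1] - p3[1])]
    # rho_t(u) = omega_1(u) f1 + omega_2(u) f2 + omega_3(u) f3
    return {v: (sum(w * fk[0] for w, fk in zip(omega[v], f)),
                sum(w * fk[1] for w, fk in zip(omega[v], f))) for v in omega}

ref = [math.dist(realize(0.4)[u], realize(0.4)[v]) for u, v in edges]
for phi in (0.9, 1.5, 2.2):
    p = realize(phi)
    assert all(abs(math.dist(p[u], p[v]) - L) < 1e-9
               for (u, v), L in zip(edges, ref))
```

For instance, the edge {1, 4} maps to −(f_1 + f_2 + f_3), whose norm is the constant λ(1, 4) = 1 of the underlying deltoid, and {2, 7} maps to 2f_1, of constant length 2.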
Since the 4-cycle {1, 2, 3, 5} is a parallelogram in the motions of H_{K_{3,3}} and H_{Q_1}, the subgraphs satisfy the assumptions of Lemma 3.7. Hence, S_2 is movable. Similarly, we construct a proper flexible labeling of S_3, since the vertices {1, . . . , 7} and {1, 3, 4, 5, 6, 8} induce subgraphs isomorphic to Q_1 and K_{3,3}, respectively. See Figure 9 for placing the vertices according to Dixon II. Now, the 4-cycle {1, 4, 3, 5} is used to construct the motion according to Example 3.5. A proper flexible labeling of S_4 can clearly be obtained by extending a proper flexible labeling of its K_{3,3} subgraph. Finally, only the graph S_5 remains to be proven movable. Unfortunately, none of the previous constructions applies in this case. Hence, we provide a parametrization of its algebraic motion ad hoc. Lemma 3.9. The graph S_5 is movable. Based on the previous corollary, one might want to conjecture that the statement holds independently of the number of vertices. Nevertheless, the graph G_25 in Figure 11 serves as a counterexample. This graph was proposed by Tibor Jordán as a counterexample to some conjectured characterizations of movable graphs within informal discussions with his students. The constant distance closure of G_25 is the graph itself, since there is no unicolor path of length at least two: for every two incident edges uv and vw, there exists a NAC-coloring δ such that δ(uv) ≠ δ(vw). Namely, we can define δ by δ(e) = blue if and only if w is a vertex of e. Hence, the necessary condition is satisfied. An explanation that G_25 is not movable is the following: it contains five subgraphs isomorphic to the bipartite graph K_{5,5}, each of them induced by the vertices on two neighboring lines in the figure. The only way to construct a proper flexible labeling of K_{5,5}, with partition sets V_1 and V_2, is to place the vertices of V_1 on a line and the vertices of V_2 on another line perpendicular to the first one [9].
Therefore, constructing a proper flexible labeling of G_25 would require the vertices on every two neighboring lines in the figure to lie on perpendicular lines, which is not possible.

Conclusion

The newly introduced notion of active NAC-colorings bridges the combinatorial properties of a movable graph and its motion. Motivated by the invariance of the motion under adding new edges whose endpoints are connected by a path that is unicolor in all active NAC-colorings, the constant distance closure of a graph is defined purely combinatorially. This augmented graph being non-complete serves as a necessary condition for movability of the original graph. We focused on the graphs with up to 8 vertices satisfying the condition and developed tools showing that all of them are movable. Since it was shown that the necessary condition is not always sufficient, the question of a (combinatorial) characterization of movability remains open. Similarly, the characterization of all possible algebraic motions of a given graph is subject to future research. A particularly interesting open problem is to determine the possible subsets of active NAC-colorings and then construct a corresponding proper flexible labeling.

Definition 1.1. Let G be a graph and δ : E_G → {blue, red} be a coloring of edges.
Theorem 1.7. A connected graph G with at least one edge has a flexible labeling iff it has a NAC-coloring.
Example 1.10. Let Q be a 4-cycle with a labeling λ given by λ({1, 2}) = λ({1, 4}) = 1 and λ({2, 3}) = λ({3, 4}) = 3.
Figure 2: Active NAC-colorings of a deltoid.
Example 2.3. The graph G in Figure 3 is not movable: since the vertices 1 and 4 are connected by the path (1, 3, 4), which is unicolor in every NAC-coloring, and similarly for 2 and 5 with the path (2, 3, 5), G is movable if and only if G′ = (V_G, E_G ∪ {{1, 4}, {2, 5}}) is movable. But G′ has no flexible labeling by Theorem 1.7, since it has no NAC-coloring.
Figure 3: All non-conjugated NAC-colorings of a Laman graph with no proper flexible labeling.
The corollary and example motivate the next definition. The name is inspired by the constant distance between the vertices u and v in Lemma 2.1 during the motion. Definition 2.4. Let G be a graph. Let U(G) denote the set of all pairs {u, v} ⊂ V_G such that uv ∉ E_G and there exists a path from u to v which is unicolor for all δ ∈ NAC_G.
Lemma 2.11. Let G be a graph and u ∈ V_G be a vertex of degree two. The graph G is movable if and only if G′ = G \ u is movable. Proof. Let v and w be the neighbours of u. If λ′ is a proper flexible labeling of G′, then λ : E_G → R₊ given by λ|_{E_{G′}} = λ′ and λ(uv) = λ(uw) = L, where L is the maximal distance between v and w in all realizations compatible with λ′, is a proper flexible labeling of G. On the other hand, the restriction of a proper flexible labeling of G to G′ is a proper flexible labeling, since there are only two possible points where u can be placed if v and w are mapped to distinct points, i.e., there must be infinitely many realizations of G′.
Figure 4: Maximal non-bipartite constant distance closures of graphs with a spanning Laman subgraph, at most 8 vertices and no vertex of degree two.
Corollary 3.3. The graphs L_1, . . . , L_6, see Figure 4 or 5, are movable.
Figure 5: The NAC-colorings inducing a proper flexible labeling.
Lemma 3.4. Let G = (V, E) be a graph with an injective embedding ω : V → R³ such that for every edge uv ∈ E, the vector ω(u) − ω(v) is parallel to one of the four vectors (1, 0, 0), (0, 1, 0), (0, 0, 1), (−1, −1, −1), and all four directions are present. Then G is movable. Moreover, there exists an algebraic motion of G with exactly two active NAC-colorings modulo conjugation. Two edges are parallel in the embedding ω if and only if they receive the same pair of colors in the two active NAC-colorings.
Figure 8: Pairs of NAC-colorings used for the construction of injective embeddings in R³ satisfying the assumption of Lemma 3.4.
(ii) the projections of C_1 to v_1 and C_2 to v_2 are different, then there exists a proper flexible labeling of G.
Corollary 3.8. The graphs S_1, S_2, S_3 and S_4, see Figure 4 or 9, are movable.
Figure 9: The graphs S_1, S_2 and S_3 with proper flexible labelings (same colors mean same lengths). Note that the vertices 1, 2, 7 in S_2 and S_3 form a degenerate triangle.
Figure 10: The graph S_5 and its embedding inducing a proper flexible labeling. The same colors mean same edge lengths.
Figure 11: The graph G_25.
Table 1: Valuations giving the active NAC-colorings of a deltoid.
Acknowledgment. We thank Meera Sitharam for the discussion which led to Theorem 2.10 and happened during the workshop on Rigidity and Flexibility of Geometric Structures organized by the Erwin Schrödinger International Institute for Mathematics and Physics in Vienna in September 2018. Furthermore, we thank Tibor Jordán for discussions on the counterexample.
Proof (of Lemma 3.9). In order to construct a proper flexible labeling for the graph S_5, we assume the following: the triangles (1, 2, 3) and (1, 4, 5) are degenerated into lines, the quadrilaterals (1, 4, 6, 2) and (1, 4, 7, 3) are antiparallelograms, the quadrilateral (4, 7, 8, 6) is a rhombus and the quadrilaterals (4, 5, 8, 7) and (4, 5, 8, 6) are deltoids, see Figure 10. We scale the lengths so that λ_{1,4} = 1 and λ_{1,2} =: a > 1. Now, we define an injective realization ρ_θ parametrized by the position of vertex 4.
Let ρ_θ(1) = (0, 0), ρ_θ(2) = (−a, 0), ρ_θ(3) = (a, 0), ρ_θ(4) = (cos θ, sin θ). Since the coordinates of the missing vertex of an antiparallelogram can be obtained by folding the parallelogram with the same edges along a diagonal, we get ρ_θ(6) and ρ_θ(7). The intersection of the line given by ρ_θ(6) and ρ_θ(7) with the line given by ρ_θ(1) and ρ_θ(4) gives ρ_θ(5). The position of vertex 8 can be easily obtained from the fact that (4, 7, 8, 6) is a rhombus. One can verify that the induced labeling λ is independent of θ and hence flexible. The labeling is proper for a generic a. Since all graphs in the list were proven to be movable, we can conclude that the necessary condition is also sufficient up to 8 vertices. Proof. We can assume that G is spanned by a Laman graph; otherwise there exists a generic proper flexible labeling. By Lemma 2.11, we can assume that G has no vertex of degree two. Corollary 2.8 gives the necessary condition for movability. For the opposite implication, Theorem 2.12 lists all constant distance closures that are not complete; Lemma 3.1 and 3.9 and Corollary 3.3, 3.6 and 3.8 show that all these graphs are movable. Hence, also all their subgraphs are movable.

References

[1] L. Burmester. Die Brennpunktmechanismen. Zeitschrift für Mathematik und Physik, 38:193–223, 1893.
[2] J. Capco, M. Gallet, G. Grasegger, C. Koutschan, N. Lubbes, and J. Schicho. The number of realizations of all Laman graphs with at most 12 vertices. Zenodo, May 2018. doi:10.5281/zenodo.1245517.
[3] M. Deuring. Lectures on the theory of algebraic functions of one variable, volume 314 of Lecture Notes in Mathematics. Springer, 1973.
[4] A. C. Dixon. On certain deformable frameworks. Messenger, 29(2):1–21, 1899.
[5] G. Grasegger, J. Legerský, and J. Schicho. Graphs with Flexible Labelings. Discrete & Computational Geometry, 2018. doi:10.1007/s00454-018-0026-9.
[6] G. Laman. On graphs and rigidity of plane skeletal structures. Journal of Engineering Mathematics, 4:331–340, 1970.
[7] J. Legerský. Movable graphs, 2018. http://jan.legersky.cz/project/movablegraphs/.
[8] H. Maehara. Geometry of frameworks. Yokohama Mathematical Journal, 47:41–65, 1999.
[9] H. Maehara and N. Tokushige. When does a planar bipartite framework admit a continuous deformation? Theoretical Computer Science, 263(1–2):345–354, 2001.
[10] H. Pollaczek-Geiringer. Über die Gliederung ebener Fachwerke. Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM), 7:58–72, 1927.
[11] H. Stachel. On the flexibility and symmetry of overconstrained mechanisms. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 372, 2013.
[12] D. Walter and M. L. Husty.
On a nine-bar linkage, its possible configurations and conditions for paradoxical mobility. In 12th World Congress on Mechanism and Machine Science, IFToMM 2007, 2007.
[13] W. Wunderlich. Ein merkwürdiges Zwölfstabgetriebe. Österreichisches Ingenieur-Archiv, 8:224–228, 1954.
[14] W. Wunderlich. On deformable nine-bar linkages with six triple joints. Indagationes Mathematicae (Proceedings), 79(3):257–262, 1976.
[15] W. Wunderlich. Mechanisms related to Poncelet's closure theorem. Mechanisms and Machine Theory, 16:611–620, 1981.
arXiv:2003.04106
The Width Paradox and the Internal Structure of a Black-Hole

Marcelo Schiffer*
Physics Department, Ariel University
(Dated: March 8, 2022)
PACS numbers: 04.70.Bw, 04.70.Dy, 78.20.Ci, 78.45.+h
* schiffer@ariel.ac.il

In the early days of Black Hole Thermodynamics, Bekenstein calculated the mass dispersion of a macroscopic black hole that results from the stochasticity of the thermal radiation it emits; it turned out to be negative for black holes more massive than M ≳ 10^30 g. He named it the "mass width paradox". Here we revisit his early calculation in an axiomatic approach, with a more economical set of assumptions, and reach similar conclusions. We argue that the mass paradox results from considering the black hole as a classical system, without an inner quantum structure. As a matter of fact, when we take into account the discreteness of the area levels and assume identical transition probabilities between contiguous quantum states [2], the paradox disappears. In the process we obtain the probability of finding a black hole in some area eigenstate for a given averaged area. As a byproduct, the quantum scenario also points towards a possible solution of the black-hole information conundrum. Apparently we might have killed two birds with a single stone.

I. THE WIDTH PARADOX REVISITED

We start by defining a set of very general postulates regarding the nature of black-hole mass fluctuations, assuming the black hole is a classical system.

Axiom 1: A black hole of averaged mass M̄ is described by a probability distribution P(M, M̄), normalized as

∫_0^∞ P(M, M̄) dM = 1 , (1a)

with mean

M̄ = ∫_0^∞ M P(M, M̄) dM . (1b)

The dependence of the probability distribution for a Schwarzschild black hole on a single parameter (the averaged mass) is dictated by the "no-hair theorem".

Axiom 2: For a macroscopic black hole the probability is sufficiently peaked around a very large mass such that

lim_{M→∞} ∂^n P(M, M̄)/∂M^n = 0 , (2)

lim_{M→0} ∂^n P(M, M̄)/∂M^n = 0 , (3)

for all n.
This condition allows us to disregard boundary terms when integrating by parts.

Axiom 3: The black-hole mass dispersion is defined by

Σ²(M̄) = ⟨M²⟩ − M̄² , (4)

where

⟨M²⟩ = ∫_0^∞ M² P(M, M̄) dM . (5)

Axiom 4: The total energy (black hole + radiation) is conserved in the radiation process. When a quantum in the energy mode ε_i is emitted, the black-hole mass decreases, M′ = M − ε_i, with ε_i ≪ M.

Axiom 5: The emissions of the various modes are uncorrelated, and the probability of emitting n_i quanta in a given mode is [3]

p(n_i) = (1 − e^(−γ_i)) e^(−γ_i n_i) , (6)

where γ_i is fixed by requiring that the mean number of quanta emitted in a specific mode satisfies [4]

n̄_i = Γ(M̄, ε_i) / (e^(8πGM̄ε_i/ℏ) − 1) . (7)

Here M̄ represents the averaged black-hole mass and Γ(M̄, ε_i) is the black-hole absorptivity for the given mode (we use units with c = 1).

Axiom 6: Lacking any internal degrees of freedom, black-hole mass fluctuations can result exclusively from the stochastic nature of the Hawking radiation.

II. THE MASS DISPERSION

A black hole whose original mass is M* emits quanta in various energy modes. Its mass decreases to M = M* − Σ_i n_i ε_i, where n_i is the number of quanta emitted in the mode ε_i. The other way around, the original mass is M* = M + Σ_i n_i ε_i. Thus, after the emission,

P(M, M̄) = Σ_{n_i} P(M + Σ_i ε_i n_i, M̄*) Π_i p(n_i) , (8)

where Axioms 4, 5 and 6 were called for. We expand the argument of the probability distribution around M:

P(M, M̄) = P(M, M̄*) + P′(M, M̄*) Σ_i ε_i Σ_{n_i} n_i p(n_i) (9)
+ (1/2) P″(M, M̄*) [ Σ_i ε_i² Σ_{n_i} n_i² p(n_i) + Σ_{i≠j} ε_i ε_j Σ_{n_i,n_j} n_i n_j p(n_i) p(n_j) ] + Σ_{N>2} (Δ_N/N!) P^(N)(M, M̄*) . (10)

Primes denote first and second derivatives of P(M, M̄) with respect to the actual mass M, P^(N) represents higher-order derivatives and Δ_N the higher moments.
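As an aside, the single-mode statistics postulated in Axiom 5 can be checked numerically. The sketch below (ours, with an arbitrary illustrative value γ = 0.7) verifies that the geometric distribution (6) is normalized, has mean 1/(e^γ − 1) as required by the matching condition (7), and has variance ⟨n²⟩ − n̄² = e^γ/(e^γ − 1)², the quantity that enters the radiation dispersion σ² below.

```python
import math

gamma = 0.7                                  # illustrative value of gamma_i
q = math.exp(-gamma)
# p(n) = (1 - e^-gamma) e^(-gamma n); the tail beyond n = 400 is negligible
p = [(1 - q) * q ** n for n in range(400)]

norm = sum(p)
mean = sum(n * pn for n, pn in enumerate(p))
second = sum(n * n * pn for n, pn in enumerate(p))
var = second - mean ** 2

assert abs(norm - 1) < 1e-12
assert abs(mean - 1 / math.expm1(gamma)) < 1e-9              # matches eq. (7)
assert abs(var - math.exp(gamma) / math.expm1(gamma) ** 2) < 1e-9
```

The same closed forms for the mean and the variance are the ones used in eqs. (27)–(29) of the dispersion calculation.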
Since

Σ_i ε_i Σ_{n_i} n_i p(n_i) = Σ_i ε_i n̄_i , (11)

Σ_i ε_i² Σ_{n_i} n_i² p(n_i) = Σ_i ε_i² ⟨n_i²⟩ , (12)

and furthermore

Σ_{i≠j} ε_i ε_j Σ_{n_i,n_j} n_i n_j p(n_i) p(n_j) = Σ_{i≠j} ε_i ε_j n̄_i n̄_j = Σ_i ε_i n̄_i ( Σ_j ε_j n̄_j − ε_i n̄_i ) , (13)

it follows that

P(M, M̄) − P(M, M̄*) = P′(M, M̄*) ε̄ + (1/2) (σ² + ε̄²) P″(M, M̄*) + Σ_{N>2} (Δ_N/N!) P^(N)(M, M̄*) , (14)

where we defined the mean energy of the radiation emitted in all the modes and the corresponding dispersion,

ε̄ ≡ Σ_i ε_i n̄_i , (15)

σ² ≡ Σ_i ε_i² ( ⟨n_i²⟩ − n̄_i² ) . (16)

All the results of this section stem from the above equation. Indeed, integrating both sides of eq. (14) and recalling eq. (1a),

∫_0^∞ P′(M, M̄*) ε̄ dM + (1/2) ∫_0^∞ P″(M, M̄*) (σ² + ε̄²) dM + Σ_{N>2} (1/N!) ∫_0^∞ P^(N)(M, M̄*) Δ_N dM = 0 . (17)

This expression must vanish identically. This happens iff ε̄, σ² and Δ_N can be moved outside the integrals, in which case each term vanishes because all the derivatives vanish at the boundaries. Consequently, all the moments must depend upon the (black-hole) averaged mass M̄, but not upon the instantaneous mass M. Let us now multiply eq. (14) by the mass M and integrate it:

ΔM̄ ≡ M̄* − M̄ = −ε̄ ∫_0^∞ M P′(M, M̄*) dM − (1/2) (σ² + ε̄²) ∫_0^∞ M P″(M, M̄*) dM − Σ_{N>2} (Δ_N/N!) ∫_0^∞ M P^(N)(M, M̄*) dM . (18)

Integrating these integrals by parts once, only the first one survives. The change of the black-hole average mass is identical to the average energy of the emitted radiation, as expected on energy-conservation grounds:

ΔM̄ = ε̄ . (19)

Last, we calculate the black-hole mass dispersion,

Σ²(M̄) = ⟨M²⟩ − M̄² . (20)

To this end, we multiply eq. (14) by M² and integrate over the mass,

⟨M²⟩* − ⟨M²⟩ = −ε̄ ∫_0^∞ M² P′(M, M̄*) dM − (1/2) (σ² + ε̄²) ∫_0^∞ M² P″(M, M̄*) dM − Σ_{N>2} (Δ_N/N!)
∞ 0 M 2 P (N ) (M, M * )dM = 0 .(21) Integrating this expression by parts twice, the last integral drops out and we are left with M * 2 − M 2 = 2M ǫ − σ 2 − ǫ 2 .(22) Together with the identity M * 2 − M 2 = Σ * 2 − Σ 2 + 2 M ǫ + ǫ 2 ,(23) where Σ * 2 = Σ 2 (M * ), this gives: Σ * 2 − Σ 2 = −σ 2 − 2ǫ 2 .(24) Clearly, for a macroscopic black hole ǫ << M , Σ * 2 − Σ 2 ≈ ∂Σ 2 ∂ M ǫ(25) So, ∂Σ 2 ∂ M = − σ 2 ǫ + 2ǫ .(26) This equation relates the black hole mass dispersion to the statistical properties of the emitted radiation (average energy and dispersion). We proceed by calculating the rhs of the previous equation for the emitted radiation. As we mentioned earlier, the parameter γ i in the distribution eq. (6) is obtained by matching the mean number of emitted quanta n i = 1 e γi − 1 = Γ i e xi − 1 ,(27) where x i = 8πM ǫ i /m 2 p . Clearly, ǫ = j ǫ i Γ i e xi − 1 = m 2 p 8πM j x i Γ i e xi − 1(28) and σ 2 = ǫ 2 i ( n 2 i − n i 2 ) = ǫ 2 i e γi (e γi − 1) 2 = m 2 p 8πM 2 x 2 i (e γi − 1) 2 + x 2 i e γi − 1 .(29) Equivalently, σ 2 = m 2 p 8πM 2 x 2 i Γ 2 i (e xi − 1) 2 + x 2 i Γ i e xi − 1 .(30) For macroscopic black holes whose mass M >> m 2 p /m e ∼ 10 17 g (m e stands for the electron mass) only massless quanta are emitted, as the emission of massive particles is exponentially suppressed. Lacking any additional dimensional parameter, the black hole absorption coefficient Γ i = Γ(M ǫ i /m 2 p ) = Γ(x i ). Furthermore, it is safe to take the continuous approximation and translate sums into integrals: ǫ = A m 2 p 8πM , σ 2 = B m 2 p 8πM 2 ,(31) where A and B are numerical coefficients: A = ∞ 0 xΓ(x) e x − 1 dx ; B = ∞ 0 x 2 Γ(x) Γ(x) + e x − 1 (e x − 1) 2 dx ,(32) Σ 2 = Σ 2 0 − m 2 p 8π B A + 2A ln M M 0 (33) Our result reproduces Bekenstein's original calculation, with different numerical coefficients.
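As a quick numerical sanity check on the coefficients A and B of eq. (32), one can evaluate the integrals directly. The sketch below is illustrative only and takes the simplifying assumption Γ(x) ≡ 1 (perfect absorptivity, not a choice made in the text); in that case A = ∫ x/(e^x − 1) dx = π²/6 and B reduces to ∫ x² e^x/(e^x − 1)² dx = π²/3.

```python
import math
import numpy as np

def coefficient_A(gamma=lambda x: 1.0, xmax=60.0, n=600_000):
    # Midpoint rule for A = int_0^inf x * Gamma(x) / (e^x - 1) dx
    dx = xmax / n
    x = (np.arange(n) + 0.5) * dx
    g = gamma(x)
    return float(np.sum(x * g / np.expm1(x)) * dx)

def coefficient_B(gamma=lambda x: 1.0, xmax=60.0, n=600_000):
    # B = int_0^inf x^2 Gamma(x) (Gamma(x) + e^x - 1) / (e^x - 1)^2 dx
    dx = xmax / n
    x = (np.arange(n) + 0.5) * dx
    g = gamma(x)
    em = np.expm1(x)
    return float(np.sum(x**2 * g * (g + em) / em**2) * dx)

A = coefficient_A()
B = coefficient_B()
print(A, math.pi**2 / 6)   # both ~1.6449 for Gamma = 1
print(B, math.pi**2 / 3)   # both ~3.2899 for Gamma = 1
```

For a realistic grey-body factor Γ(x) the same routines apply with a different `gamma`; the values of A and B then change but remain of order one, as stated in the text.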
In the above expression Σ 2 0 and M 0 are the mass dispersion and average mass at some reference point where the black hole is emitting massless quanta alone (M 0 > 10 17 g) and A and B are dimensionless parameters of order one. At this reference point only massless quanta are being emitted; in the absence of additional parameters with dimensions of mass besides the Planck mass, necessarily Σ 2 0 ∼ m 2 p : for sufficiently massive black holes the mass dispersion eventually becomes negative. This is the width paradox. In the original paper it was argued that the paradox can be circumvented if the average number of emitted quanta is a function of the actual mass M instead of the averaged mass M . But, as we have seen, this is at odds with the normalization condition and energy conservation (∆M = ǫ ). III. THE BLACK HOLE HAS AN INNER STRUCTURE The above paradoxical result arises from the assumption that the black hole is a structureless object that emits grey-body radiation at the Bekenstein-Hawking temperature. The situation is reminiscent of the early days of quantum theory, when Planck derived the black body radiation from the absorption and emission of radiation by harmonic oscillators that undergo transitions between discrete energy levels. Instead of harmonic oscillators, following Bekenstein-Mukhanov's proposal [5] we assume that the black hole area is quantized in units of the Planck length. Evidence of area quantization arises from various theoretical approaches like loop quantum gravity [16], from more qualitative arguments like the gedanken experiment of absorption of quanta [13], and also from the excitation of the black hole's ringing modes [9]. Recently it has been claimed that the late echo detected in the merging of black holes might be an imprint of the area quantization [10]. In any case: A(n) = 4κl 2 p n .(34) Here κ is a numerical parameter of order one. In the absence of a comprehensive theory of quantum gravity, its value depends upon the quantization argument.
Very much like in atomic physics, emission of quanta results from transitions among energy levels. The mass, angular momentum and entropy changes due to the emission of a quantum in the continuous approximation satisfy the first law: δM = T bh δS + Ω bh δJ ,(35) where δM = −ℏω and δJ = −ℏm (m = 0, ±1, ±2...) are the energy and angular momentum of the emitted quantum. Recalling that A bh = 4l 2 p S bh , in a transition between contiguous area levels the quantum numbers of the emitted quantum satisfy the constraint x ≡ ℏ(ω − mΩ bh )/T bh = κ .(36) It is assumed [2] that further transitions occur in a cascade chain n − 1 → n − 2; n − 2 → n − 3, . . . . For a macroscopic black hole T bh remains practically constant along this chain, meaning that the radiation is very nearly monochromatic. Accordingly, as the black hole cascades into lower levels, say from n to m, it emits radiation consisting of n − m effectively monochromatic quanta. The very nearly monochromatic character of the radiation means that the timescale for the emission is very large, and adiabaticity ensures that the transition probability remains very nearly unchanged in the subsequent transitions. Call e −α the probability that the black hole decays to the contiguous level. Accordingly, the transition probability from level n to level m is W n→m = Ce −α(n−m) n ≥ m .(37) The normalization of the transition probability n m=0 W n→m = 1 requires C = 1−e −α 1−e −(n+1)α ; nevertheless, in the limit n >> 1 the normalization constant C ≈ 1 − e −α and the system is effectively translationally invariant. In other words, W n→m represents the probability of the black hole decaying from level n to level m and 1 − e −α represents the probability of not making any additional transition and remaining in that state. Let P n (t) represent the probability of finding the black hole in a specific area level at time t, measured in some unit of time.
Then, at time t + 1, after undergoing a transition: P m (t + 1) = n≥m W n→m P n (t) .(39) From this we can calculate the area change after each emission m(t + 1) = m mP m (t + 1) = n≥m mW n→m P n (t) = (1 − e −α ) n P n (t)e −αn m≤n me αm (40) = (1 − e −α ) n P n (t)e −αn d dα 1 − e α(n+1) 1 − e α = n P n (t) n − 1 e α − 1 + e −αn e α − 1 . Now, the probability distribution for a macroscopic black hole peaks at some very large value of n, in which case the exponential term is strongly suppressed. Thus, the average area change in each step is constant A(t + 1) = A(t) − 4κl 2 p e α − 1 .(42) Starting from a given initial reference time t = 0, the area decreases linearly: A(t + 1) = A(0) − t 4κl 2 p e α − 1 .(43) From similar considerations regarding m 2 (t + 1) = (1 − e −α ) n P n (t)e −αn d 2 dα 2 1 − e α(n+1) 1 − e α (44) = n P n (t) n 2 − 2n e α − 1 − e −nα e α + 1 (e α − 1) 2 + e α + 1 (e α − 1) 2 ,(45) or m 2 (t + 1) = m 2 (t) − 2 m(t) e α − 1 + e α + 1 (e α − 1) 2 ,(46) where we again dropped the exponential term. The area dispersion grows linearly with time ∆A 2 (t + 1) = ∆A 2 (0) + 16κ 2 l 4 p e α /(e α − 1) 2 t ,(47) very much like in the one-dimensional random walk. Can we obtain the probability distribution of the various black hole area levels? Let us assume that at the initial time t = 0 (say, at horizon formation) the black hole is in an area eigenstate n 0 , that is to say, P n (0) = δ n,n0 . The first iteration of eq.(39) gives P m (1) = (1 − e −α )e −α(n0−m) .(48) It can be easily checked that further iterations preserve this general form P m (t) = A m (t)(1 − e −α ) t e −α(n0−m) ,(49) where A m (t + 1) = n≥m A n (t) .(50) The easiest way to find A m is through the normalization condition m P m (t) = 1. Defining z = e −α and reindexing s = n 0 − m it follows that s≥0 A n0−s (t)z s = 1 (1 − z) t .(51) Consequently A n0−s (t) = 1 s!
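The cascade just described is a random walk down the area ladder: at each step the drop s = n − m is geometrically distributed, P(s) = (1 − e^−α) e^−αs with s ≥ 0, so the mean drop per step is 1/(e^α − 1), as in eq. (42). A small Monte Carlo sketch (illustrative only; the parameter values below are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0
n0 = 10_000        # initial area eigenvalue (illustrative)
t = 100            # number of emission steps
samples = 20_000

# Generator.geometric has support {1, 2, ...} with success probability p;
# shifting by 1 gives P(s) = (1 - e^-alpha) e^{-alpha s}, s = 0, 1, 2, ...
p = 1.0 - np.exp(-alpha)
drops = rng.geometric(p, size=(samples, t)) - 1
n_final = n0 - drops.sum(axis=1)

mean_drop_per_step = drops.mean()
print(mean_drop_per_step, 1.0 / (np.exp(alpha) - 1.0))  # eq. (42): both ~0.582
print(n_final.mean(), n0 - t / (np.exp(alpha) - 1.0))   # eq. (43): linear decrease
```

The empirical distribution of the total drop after t steps is the negative binomial of eq. (53), since a sum of t independent geometric variables is negative binomial.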
d dz s (1 − z) −t z=0 = t + s − 1 t − 1 ,(52) and the probability of finding the black hole in a given area eigenstate m is negative binomial: P m (t) = t + n 0 − m − 1 n 0 − m (1 − e −α ) t e −α(n0−m) .(53) Here it is tacitly assumed that t << n 0 , so as to avoid approaching the Planckian regime. The black hole mass deviation entails the calculation of m − √ m 2 , that is, the difference between the mean of m and the square of the mean of √ m. The inverse Laplace transform provides an integral representation of √ m : √ m = 1 2πi γ+i∞ γ−i∞ √ π 2 dz z 3/2 e mz ,(54) where the integration runs along a vertical line in the complex plane at distance γ from the origin, such that all poles remain to the left of this line. Thus, m(t) = m≤n0 √ mP m (t) = − i 4 √ π (1 − e −α ) t γ+i∞ γ−i∞ dz z 3/2 ∞ s=0 e (n0−s)z t + s − 1 s e −αs ,(55) which can be rewritten as m(t) = − i 4 √ π (1 − e −α ) t γ+i∞ γ−i∞ dz z 3/2 e n0z 1 1 − e −(α+z) t ∞ s=0 t + s − 1 s 1 − e −(α+z) t e −(α+z)s .(56) The sum is formally the normalization of the negative binomial probability distribution, therefore m(t) = − i 4 √ π (1 − e −α ) t I (57) with I = γ+i∞ γ−i∞ e n0z z 3/2 (1 − e −(z+α) ) t dz .(58) The poles of this expression are located at z = −α + 2πni, α > 0, and accordingly the integration is to be performed along any vertical line with ℜ(z) = γ > 0. Notice that there is a branch cut running from z → −∞ to z = 0, R − . Γ e n0z z 3/2 (1 − e −(z+α) ) t dz = 2πi Residues . The contour is Γ = γ + C R + L 1 + L 2 + C r with R → ∞; L 1 runs from −∞ to the origin above the branch cut, L 2 runs back to −∞ beneath the branch cut, and C r is the semicircle connecting these lines with r → 0. Finally, C R is the semicircle with R → ∞; according to Jordan's lemma its contribution vanishes. In the principal branch z = |z|e iθ , −π < θ < π. For convenience we define h(z) = e n0z (1 − e −(z+α) ) t .
(60) After integration by parts: L1+L2+Cr dz z 3/2 h(z) = −2 h(z) √ z −∞−iπ −∞+iπ + 2 0 −∞ dx (−x) 1/2 (e iπ/2 − e −iπ/2 ) h ′ (x) + 2 Cr dz (z) 1/2 h ′ (z) .(61) The first term vanishes, and so does the last one for an infinitesimal semicircle around the origin. Changing x → −y 2 , L1+L2+Cr dz z 3/2 h(z) = −i ∞ −∞ dyh ′ (−y 2 ) .(62) Explicitly, h ′ (−y 2 ) = √ n 0 π (1 − e y 2 −α ) t − t π n 0 e y 2 −α (1 − e y 2 −α ) t+1 δ(y) ,(63) where we defined the function δ(y) = n 0 π e −n0y 2 .(64) Now, n 0 ∼ A/l 2 p is a huge number, so for all practical purposes the above function is the Gaussian representation of the delta function. Thus: L1+L2+Cr dz z 3/2 h(z) = i √ n 0 π (1 − e −α ) t − t π n 0 e −α (1 − e −α ) t+1 .(65) The poles of h(z) at z = −α + 2nπi are of order t and accordingly the residues are: 1 (t − 1)! d t−1 dz t−1 e n0z z 3/2 −α+2nπi .(66) This expression is proportional to e −n0α , so the contribution of the residues to the integral is exponentially small. Putting all these pieces together m(t) = √ n 0 − t √ n 0 1 e α − 1 .(67) From eq.(42) we know that m = n 0 − t e α − 1 ,(68) therefore, ∆M = κ 4π m p t e α − 1 − t 2 A 0 4κl 2 p (e α − 1) 2 .(69) This expression never becomes negative, as we are assuming that t << n 0 . What is the time scale for each one of these transitions? The black hole emissivity ∼ M −4 multiplied by the horizon area gives a mass loss rate Ṁ ∼ −M −2 or, equivalently, an area loss rate Ȧ ∼ −M −1 . Calling the time scale of each transition τ , comparison to eq.(42) gives the estimate τ ∼ M G(e α − 1) .(70) As a last remark, the value of the one-step decay probability e −α can be obtained by relying on the correspondence principle. The average number of transitions must match the mean number of quanta emitted in the corresponding mode 1 e α − 1 = Γ(x) e x − 1 .(71) IV.
ETERNAL BLACK HOLES The probability P m (t) obtained in eq. (53) for an initial eigenstate n 0 is the conditional probability of finding the black hole in an area eigenstate m at time t, given that it started from some initial state n 0 . Replacing n 0 → n it represents the conditional probability P t (m|n) of finding at time t the black hole at state m given that it was initially at the state n. For an eternal black hole there is no initial eigenstate; any moment can be regarded as the initial time. Put another way, the probability distribution of finding the black hole in a given eigenstate must be a universal function when calculated at different times. Let us call the probability distribution at time t q m (t). Clearly: q m (t + τ ) = n≥m P τ (m|n)q n (t) .(72) More explicitly q m (t + τ ) = n≥m τ + n − m − 1 n − m (1 − e −α ) τ e −α(n−m) q n (t) .(73) Multiplying this equation by e imϕ and summing both sides over m, while defining the generating function Q(t, ϕ) = m q m (t)e imϕ , gives: Q(t + τ, ϕ) = m n≥m τ + n − m − 1 n − m (1 − e −α ) τ e −α(n−m) e imϕ q n (t) .(74) Defining s = n − m, after reshuffling indices: Q(t + τ, ϕ) = (1 − e −α ) τ s≥0 τ + s − 1 s e −α e −iϕ s Q(t, ϕ) .(75) With the aid of the identity s≥0 τ + s − 1 s x s = 1 (1 − x) τ ,(76) we can write Q(t + τ, ϕ) = 1 − e −α 1 − e −iϕ e −α τ Q(t, ϕ) .(77) Normalization of the probabilities calls for Q(t, 0) = 1: Q(t, ϕ) = 1 − e −α 1 − e −α e −iϕ t .(78) The probability distribution is given by the Fourier coefficients of the generating function q m (t) = 1 − e −α t 1 2π 2π 0 e imϕ (1 − e −α e −iϕ ) t dϕ .(79) Calling z = e iϕ , q m (t) = − 1 − e −α t i 2πi z t+m−1 (z − e −α ) t dz .(80) From the residue theorem it follows that the distribution is also negative binomial q m (t) = t + m − 1 t − 1 1 − e −α t e −αm .(81) Clearly the time t is not a physically sound variable, but we can parametrize the evolution as a function of the average area. V.
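One can check numerically that the negative binomial distribution of eq. (81) is normalized and that its Fourier pairing reproduces the closed-form generating function of eq. (78). A short sketch (an illustrative check, with arbitrary parameter values):

```python
import cmath
from math import comb, exp

alpha, t = 0.7, 5
z = exp(-alpha)

def q(m, t):
    # eq. (81): q_m(t) = C(t + m - 1, t - 1) (1 - e^-alpha)^t e^{-alpha m}
    return comb(t + m - 1, t - 1) * (1 - z) ** t * z ** m

# Normalization: sum_m q_m(t) = 1 (the tail decays geometrically, so truncate)
total = sum(q(m, t) for m in range(200))
print(total)  # ~1.0

# Generating function: sum_m q_m(t) e^{-i m phi} equals eq. (78)
phi = 1.3
Q_sum = sum(q(m, t) * cmath.exp(-1j * m * phi) for m in range(200))
Q_closed = ((1 - z) / (1 - z * cmath.exp(-1j * phi))) ** t
print(abs(Q_sum - Q_closed))  # ~0
```

The sum telescopes through the identity Σ_s C(t+s−1, s) x^s = (1 − x)^{−t} used in eq. (76), here with x = e^{−α} e^{−iϕ}.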
THE INFORMATION PARADOX AND CONCLUDING REMARKS Assuming (i) that the black hole radiation is grey body; (ii) that its mass fluctuations originate from the randomness of the radiation emitted; and (iii) that the black hole has no inner structure, leads to the paradoxical situation where mass fluctuations become negative for sufficiently large black holes. The discrepancy is solved by assuming that the black hole has internal degrees of freedom (the various discrete quantum area eigenstates) and a constant transition probability among contiguous area levels. From a different perspective, should a black hole reach thermal equilibrium with its own radiation, the detailed balance condition must hold, and this also requires discrete energy levels. Either way, consistency with Quantum Statistics demands discretization of black hole area states. The probability distribution we obtained for an eternal black hole was based on very solid premises and a robust calculation. This is a true quantum gravity result that should be substantiated someday by a full quantum theory of gravity. For an eternal black hole the probability distribution should not be calculated at time t, which clearly is not a useful parameter as the origin of time can always be shifted; the averaged area is the only meaningful parameter to express the probability distribution. Shannon's information of an eternal black hole should be regarded as pure noise; it has no true informational content. Naturally, the information content of a black hole formed at some time must be represented by subtracting from its Shannon information the noise, that is to say, the Shannon information of the eternal black hole (both compared at the same value of the averaged horizon area). Specifically I(t) = I BH (t) − I EBH ,(82) where I BH (t) = − n Π n (t) ln Π n (t) ,(83) with Π n (t) = m P t (n|m)Π m (0) ,(84) where Π m (0) represents the quantum state (the probability distribution) at the time the horizon is formed.
Furthermore I EBH = − m q m (t ′ ) ln q m (t ′ ) .(85) We recall that the time parameter t ′ is to be chosen such that the averaged horizon areas agree. Clearly, for very long times I BH → I EBH and all the original information is washed out. The time derivative of I(t) gives the rate at which information is degraded into noise. On the other hand, the entropy radiated by a black hole is Ṡ = 1 125 3 T 3 BH A ,(86) where T BH is the Bekenstein-Hawking temperature and A the event horizon area. We can express the entropy flow in terms of the area eigenvalue [eq.(34)]: Ṡ n = 1 16 √ κ e −λn λ −1/2 dλ . As we discussed, the black hole is not in an area eigenstate; it is in a superposition of quantum states with probabilities given by eq.(81). Thus the information carried by the radiation is actually: İ BH (t) = 1 16 √ κ λ −1/2 dλ n e −λn Π n (t) .(88) With the help of the negative binomial probability distribution eq.(53) and the convolution eq.(84) we can express the information flow as İ BH (t) = (1 − e −α ) t 16 √ κ ∞ 0 λ −1/2 dλ (1 − e (λ−α) ) t n e −αn Π n (0) . This information embodies both the signal and the noise; the latter is represented by the "information" flow of an eternal black hole. A similar calculation yields İ EBH (t ′ ) = (1 − e −α ) t ′ 16 √ κ ∞ 0 λ −1/2 dλ (1 − e (λ−α) ) t ′(90) where the time parameter t ′ has to be chosen such that the eternal black hole average mass is identical to that of the black hole itself. The net information (the signal) flow at time t is the difference İ BH (t) − İ EBH (t) . The important question to be asked is whether the rate at which useful information is degraded is compensated by the rate at which net information flows out. Should this be the case, the black hole conundrum might have found a solution. This is under investigation. Axiom 1 For a macroscopic Schwarzschild black hole there is a smooth probability distribution function P (M, M ).
The first two moments are defined in the usual way, M = ∞ 0 M P (M, M )dM , ACKNOWLEDGMENT I am thankful to Nadav Sherb and Mikhail Zubkov for enlightening discussions.
J. D. Bekenstein, Gravitation, the Quantum and Statistical Physics. In: Y. Ne'eman (ed.), To Fulfill a Vision: Jerusalem Einstein Centennial Symposium on Gauge Theories and Unification. Addison Wesley, Cambridge, Mass. (1981), pp. 42-62.
J. D. Bekenstein, Statistics of Black Hole Radiance and the Horizon Area Spectrum, Phys. Rev. D 91, 124052 (2015).
J. D. Bekenstein and A. Meisels, Phys. Rev. D 15, 2775 (1977).
D. N. Page, Phys. Rev. D 13, 198 (1976).
J. D. Bekenstein and V. F. Mukhanov, Phys. Lett. B 360 (1995).
A. Perez, Black Holes in Loop Quantum Gravity, Rept. Prog. Phys. 80 (12) (2017); Mod. Phys. Lett. A 31, No. 31 (2016); C. Rovelli, L. Smolin, Nucl. Phys. B 442 (1995); R. De Pietri, C. Rovelli, Phys. Rev. D 54 (1996); A. Ashtekar, J. Lewandowski, Class. Quantum Grav. 14 (1997). A. Davidson ?
S. Hod, Phys. Rev. Lett. 81, 4293 (1998).
S. Das, P. Ramadevi, U. A. Yajnik, Modern Physics Letters A 17 (2002).
O. Dreyer, Quasinormal modes, the area spectrum, and black hole entropy, Phys. Rev. Lett. 90 (2003); E. Berti, V. Cardoso and A. O. Starinets, Quasinormal modes of black holes and black branes, Class. Quant. Grav. 26 (2009).
V. Cardoso, V. F. Foit, M. Kleban, arXiv:1902.10164v1 (2019).
S. W. Hawking, Commun. Math. Phys. 43, 199 (1975).
J. D. Bekenstein, Lett. Nuovo Cimento 11, 467 (1974).
J. D. Bekenstein, in Cosmology and Gravitation, M. Novello (ed.), pp. 1-85 (Atlantisciences, France, 2000).
S. Hod, Phys. Rev. Lett. 81, 4293 (1998); Y. Kwon and S. Nam, Journal of Physics: Conference Series 462, 012040 (2013); E. Abdalla, K. H. C. Castello-Branco, and A. Lima-Santos, Mod. Phys. Lett. A 18, 1435 (2003).
G. Kunstatter, Phys. Rev. Lett. 90, 161301 (2003).
O. Dreyer, Phys. Rev. Lett. 90, 081301 (2003).
[]
[ "SMOOTHING EFFECTS AND INFINITE TIME BLOWUP FOR REACTION-DIFFUSION EQUATIONS: AN APPROACH VIA SOBOLEV AND POINCARÉ INEQUALITIES", "SMOOTHING EFFECTS AND INFINITE TIME BLOWUP FOR REACTION-DIFFUSION EQUATIONS: AN APPROACH VIA SOBOLEV AND POINCARÉ INEQUALITIES" ]
[ "Gabriele Grillo", "Giulia Meglioli", "Fabio Punzo" ]
[]
[]
We consider reaction-diffusion equations either posed on Riemannian manifolds or in the Euclidean weighted setting, with power-type nonlinearity and slow diffusion of porous medium type. We consider the particularly delicate case p < m in problem (1.1), a case largely left open in [21] even when the initial datum is smooth and compactly supported. We prove global existence for L m data, and a smoothing effect for the evolution, i.e. that solutions corresponding to such data are bounded at all positive times with a quantitative bound on their L ∞ norm. As a consequence of this fact and of a result of [21], it follows that on Cartan-Hadamard manifolds with curvature pinched between two strictly negative constants, solutions corresponding to sufficiently large L m data give rise to solutions that blow up pointwise everywhere in infinite time, a fact that has no Euclidean analogue. The methods of proof of the smoothing effect are functional analytic in character, as they depend solely on the validity of the Sobolev inequality and on the fact that the L 2 spectrum of ∆ on M is bounded away from zero (namely on the validity of a Poincaré inequality on M ). As such, they are applicable to different situations, among which we single out the case of the (mass) weighted reaction-diffusion equation in the Euclidean setting. In this latter setting, a modification of the methods of [37] allows us to deal also, with stronger results for large times, with the case of globally integrable weights. 2010 Mathematics Subject Classification. Primary: 35K57. Secondary: 35B44, 58J35, 35K65, 35R01.
10.1016/j.matpur.2021.04.011
[ "https://arxiv.org/pdf/2006.10354v1.pdf" ]
219792727
2006.10354
0833d65965e743aa54d0bd93c7768edcd38c3890
SMOOTHING EFFECTS AND INFINITE TIME BLOWUP FOR REACTION-DIFFUSION EQUATIONS: AN APPROACH VIA SOBOLEV AND POINCARÉ INEQUALITIES 18 Jun 2020 Gabriele Grillo, Giulia Meglioli, Fabio Punzo. We consider reaction-diffusion equations either posed on Riemannian manifolds or in the Euclidean weighted setting, with power-type nonlinearity and slow diffusion of porous medium type. We consider the particularly delicate case p < m in problem (1.1), a case largely left open in [21] even when the initial datum is smooth and compactly supported. We prove global existence for L m data, and a smoothing effect for the evolution, i.e. that solutions corresponding to such data are bounded at all positive times with a quantitative bound on their L ∞ norm. As a consequence of this fact and of a result of [21], it follows that on Cartan-Hadamard manifolds with curvature pinched between two strictly negative constants, solutions corresponding to sufficiently large L m data give rise to solutions that blow up pointwise everywhere in infinite time, a fact that has no Euclidean analogue. The methods of proof of the smoothing effect are functional analytic in character, as they depend solely on the validity of the Sobolev inequality and on the fact that the L 2 spectrum of ∆ on M is bounded away from zero (namely on the validity of a Poincaré inequality on M ). As such, they are applicable to different situations, among which we single out the case of the (mass) weighted reaction-diffusion equation in the Euclidean setting. In this latter setting, a modification of the methods of [37] allows us to deal also, with stronger results for large times, with the case of globally integrable weights. 2010 Mathematics Subject Classification. Primary: 35K57. Secondary: 35B44, 58J35, 35K65, 35R01. Introduction Let M be a complete noncompact Riemannian manifold of infinite volume.
Let us consider the following Cauchy problem, for any T > 0 u t = ∆u m + u p in M × (0, T ) u = u 0 in M × {0} (1.1) where ∆ is the Laplace-Beltrami operator. We shall assume throughout this paper that 1 < p < m and that the initial datum u 0 is nonnegative. We let L q (M ) be as usual the space of those measurable functions f such that |f | q is integrable w.r.t. the Riemannian measure µ and make the following basic assumptions on M , which amount to assuming the validity of both the Poincaré and the Sobolev inequalities on M : (Poincaré inequality) v L 2 (M ) ≤ 1 C p ∇v L 2 (M ) for any v ∈ C ∞ c (M ); (1.2) (Sobolev inequality) v L 2 * (M ) ≤ 1 C s ∇v L 2 (M ) for any v ∈ C ∞ c (M ),(1.3) where C p and C s are numerical constants and 2 * := 2N N −2 . The validity of (1.2), (1.3) puts constraints on M , and we comment that it is e.g. well known that, on Cartan-Hadamard manifolds, namely complete and simply connected manifolds that have everywhere non-positive sectional curvature, (1.3) always holds. Furthermore, when M is Cartan-Hadamard and, besides, sec ≤ −c < 0 everywhere, sec indicating sectional curvature, it is known that (1.2) holds as well, see e.g. [13,14]. Thus, both (1.2), (1.3) hold when M is Cartan-Hadamard and sec ≤ −c < 0 everywhere, a case that strongly departs from the Euclidean situation but covers a wide class of manifolds, including e.g. the fundamental example of the hyperbolic space H n , namely that Cartan-Hadamard manifold whose sectional curvatures equal -1 everywhere (or the similar case in which sec = −k everywhere, for a given k > 0). The behaviour of solutions to (1.1) is influenced by competing phenomena. First of all there is a diffusive pattern associated with the so-called porous medium equation, namely the equation u t = ∆u m in M × (0, T ) ,(1.4) where the fact that we keep on assuming m > 1 puts us in the slow diffusion case. It is known that when M = R n and, more generally, e.g. 
when M is a Cartan-Hadamard manifold, solutions corresponding to compactly supported data have compact support for all time, in contrast with the properties valid for solutions to the heat equation, see [41]. But it is also well-known that, qualitatively speaking, negative curvature accelerates diffusions, a fact that is apparent first of all from the behaviour of solutions of the classical heat equation. In fact, it can be shown that the standard deviation of a Brownian particle on the hyperbolic space H n behaves linearly in time, whereas in the Euclidean situation it is proportional to √ t. Similarly, the heat kernel decays exponentially as t → +∞ whereas one has a power-type decay in the Euclidean situation. In the Riemannian setting the study of (1.4) has started recently, see e.g. [15], [16], [17], [19], [20], [22], [33], [42], noting that in some of those papers also the case m < 1 in (1.4), usually referred to as the fast diffusion case, is studied. Nonlinear diffusion gives rise to speedup phenomena as well. In fact, considering again the particularly important example of the hyperbolic space H n (cf. [42], [17]), the L ∞ norm of a solution to (1.4) satisfies u(t) ∞ ≍ (log t/t) 1/(m−1) as t → +∞, a time decay which is faster than the corresponding Euclidean bound. Besides, if the initial datum is compactly supported, the volume V(t) of the support of the solution u(t) satisfies V(t) ≍ t 1/(m−1) as t → +∞, while in the Euclidean situation one has V(t) ≍ t β(N,m) with β(N, m) < 1/(m − 1). The second driving factor influencing the behaviour of solutions to (1.1) is the reaction term u p , which has the positive sign and, thus, might drive solutions towards blow-up. This kind of problem has been widely studied in the Euclidean case M = R N , especially in the case m = 1 (linear diffusion).
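The compact-support and scaling features of slow diffusion recalled above can be illustrated with the explicit Euclidean Barenblatt solution of (1.4) in one dimension, u(x,t) = t^{−α} (C − k x² t^{−2α})_+^{1/(m−1)} with α = 1/(m+1) and k = (m−1)/(2m(m+1)). The sketch below is illustrative, standard textbook material rather than anything from this paper: it checks numerically that the total mass ∫ u dx is conserved while the support edge spreads like t^α.

```python
import numpy as np

m_exp = 2.0
alpha = 1.0 / (m_exp + 1.0)                    # Barenblatt exponent in 1D
k = (m_exp - 1.0) / (2.0 * m_exp * (m_exp + 1.0))
C = 1.0

def barenblatt(x, t):
    # compactly supported self-similar profile of u_t = (u^m)_xx
    s = C - k * x**2 * t**(-2.0 * alpha)
    return t**(-alpha) * np.maximum(s, 0.0) ** (1.0 / (m_exp - 1.0))

def mass(t, n=2_000_001):
    R = np.sqrt(C / k) * t**alpha              # edge of the support
    x = np.linspace(-1.5 * R, 1.5 * R, n)
    return float(np.sum(barenblatt(x, t)) * (x[1] - x[0]))

m1, m100 = mass(1.0), mass(100.0)
print(m1, m100)          # the two masses agree: mass is conserved
print(100**alpha)        # the support radius grew by this factor, 100^(1/3)
```

Here the support at time t is |x| ≤ (C/k)^{1/2} t^α, so its length grows like t^{1/(m+1)} in the Euclidean 1D case, in contrast with the faster t^{1/(m−1)} hyperbolic growth quoted above.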
The literature for this problem is huge and there is no hope of giving a comprehensive review here, hence we just mention that blow-up occurs for all nontrivial nonnegative data when p ≤ 1 + 2/N , while global existence prevails for p > 1 + 2/N (for specific results see e.g. [7], [9], [10], [11], [23], [26], [35], [36], [45], [46]). On the other hand, it is known that when M = H N and m = 1, for all p > 1 and sufficiently small nonnegative data there exists a global in time solution, see [4], [43], [44], [34]. As concerns the slow diffusion case m > 1, in the Euclidean setting it is shown in [38] that, when the initial datum is nonnegative, nontrivial and compactly supported, for any p > 1, all sufficiently large data give rise to solutions blowing up in finite time. Besides, if p ∈ (1, m + 2/N ), all such solutions blow up in finite time. Finally, if p > m + 2/N , all sufficiently small data give rise to global solutions. For subsequent, very detailed results e.g. about the type of possible blow-up and, in some cases, on continuation after blow-up, see [12], [32], [39] and references quoted therein. In the Riemannian setting, existence of global solutions and blow-up in finite time for problem (1.1) have been first studied in [47], under the assumption that the volume of geodesic balls of radius R grows as R α with α ≥ 2; this kind of assumption is typically associated to nonnegative curvature, thus the opposite situation w.r.t. the one we are studying here, in which the volume of geodesic balls grows at least exponentially as a function of the radius R. The results in the setting studied in [47] are qualitatively similar to the Euclidean ones. The situation on negatively curved manifolds is significantly different, and the first results in this connection have been shown in [21], where only the case of nonnegative, compactly supported data is considered.
Among the results of that paper, we mention that a dichotomy phenomenon holds when p > m, in the sense that, under appropriate curvature conditions compatible with the assumptions made in the present paper, all sufficiently small data give rise to solutions existing globally in time, whereas sufficiently large data give rise to solutions blowing up in finite time. Results were only partial when p < m, since it has been shown that when p ∈ (1, (1 + m)/2) and, again, under suitable curvature conditions, all solutions corresponding to compactly supported initial data exist globally in time and blow up everywhere pointwise in infinite time. When p ∈ ((1 + m)/2, m), precise information on the asymptotic behaviour is not known, since blowup is shown to occur at worst in infinite time, but could in principle occur before. We extend here the results of [21] in two substantial aspects. In fact, we summarize our main results as follows. • The methods of [21] rely heavily on explicit barrier arguments, which by their very nature are applicable to compactly supported data only and, in addition, require explicit curvature bounds in order to be applicable. We prove here global existence for L m data and prove smoothing effects for solutions to (1.1), where by smoothing effect we mean the fact that L m data give rise to global solutions u(t) such that u(t) ∈ L ∞ for all t > 0, with quantitative bounds on their L ∞ norm. This will be a consequence only of the validity of the Sobolev and Poincaré inequalities (1.3), (1.2), see Theorem 2.2. • As a consequence, combining this fact with some results proved in [21], we can prove that, on manifolds satisfying e.g.
−c 1 ≤ sec ≤ −c 2 with c 1 ≥ c 2 > 0, thus encompassing the particularly important case of the hyperbolic space H n (somewhat weaker lower curvature bounds can be assumed), any solution u(t) to (1.1) corresponding to an initial datum u 0 ∈ L m exists globally and, provided u 0 is sufficiently large, it satisfies the property lim t→+∞ u(x, t) = +∞ ∀x ∈ M, namely complete blowup in infinite time occurs for such solutions to (1.1) in the whole range p ∈ (1, m), see Theorem 2.3. Our results can also be seen as an extension of some of the results proved in [37]. However, the proof of the smoothing estimate given in [37, Theorem 1.3] is crucially based on the assumption that the measure of the domain where the problem is posed is finite. This is not true in our setting. So, even if we use some general ideas introduced in [37], our proofs and results are in general quite different from those in [37]. For detailed reference to smoothing effects for linear evolution equations see [8], whereas we refer to [40] for a general treatment of smoothing effects for nonlinear diffusions, and to [5,18,17] for connections with functional inequalities in the nonlinear setting. The main result given in Theorem 2.2 depends essentially only on the validity of inequalities (1.2) and (1.3), and as such is almost immediately generalizable to different contexts. As a particularly significant situation, we single out the case of Euclidean, mass-weighted reaction-diffusion equations. In fact we consider the problem ρ u t = ∆u m + ρ u p in R N × (0, T ), u = u 0 in R N × {0}, (1.5) in the Euclidean setting, where ρ : R N → R is strictly positive, continuous and bounded, and represents a mass density. The problem is naturally posed in the weighted spaces L q ρ (R N ) = { v : R N → R measurable , v L q ρ := ( R N v q ρ(x) dx ) 1/q < +∞ }. This kind of model originates in a physical model introduced in [24].
There are choices of ρ ensuring that the following analogues of (1.2) and (1.3) hold:

‖v‖_{L^2_ρ(R^N)} ≤ (1/C_p) ‖∇v‖_{L^2(R^N)}  for any v ∈ C^∞_c(R^N)   (1.6)

and

‖v‖_{L^{2*}_ρ(R^N)} ≤ (1/C_s) ‖∇v‖_{L^2(R^N)}  for any v ∈ C^∞_c(R^N)   (1.7)

for suitable positive constants. In fact, in order to make a relevant example, if ρ(x) ≍ |x|^{−a} for a suitable a > 0, it can be shown that (1.6) holds if a ≥ 2 (see e.g. [18] and references therein), whereas (1.7) is obviously true for any a > 0 because of the validity of the usual, unweighted Sobolev inequality and of the assumptions on ρ. Of course more general cases having a similar nature, but where the analogue of (1.7) is not a priori trivial, could be considered; we focus on this example since it is widely studied in the literature and because of its physical significance. In [27, 28] a large class of nonlinear reaction-diffusion equations, including in particular problem (1.5) under certain conditions on ρ, is investigated. It is proved that a global solution exists (see [27, Theorem 1]) provided that ρ(x) = |x|^{−a} with a ∈ (0, 2), p > m + (2 − a)/(N − a), and u_0 ≥ 0 is small enough. In addition, a smoothing estimate holds. On the other hand, if ρ(x) = |x|^{−a} or ρ(x) = (1 + |x|)^{−a} with a ∈ [0, 2), u_0 ≢ 0 and 1 < p < m + (2 − a)/(N − a), then any nonnegative solution blows up in a suitable sense. Such results have also been generalized to more general initial data, decaying at infinity with a certain rate (see [28]). Finally, in [27, Theorem 2], it is shown that if p > m, ρ(x) = (1 + |x|)^{−a} with a > 2, and u_0 is small enough, a global solution exists. Problem (1.5) has also been studied in [30], [31], by constructing and using suitable barriers, the initial data being continuous and compactly supported. In particular, in [30] the case that ρ(x) ≍ |x|^{−a} for |x| → +∞ with a ∈ (0, 2) is addressed. It is proved that for any p > 1, if u_0 is large enough, then blowup occurs.
On the other hand, if p > p̄, for a certain p̄ > m depending on m, p and ρ, and u_0 is small enough, then global existence of bounded solutions prevails. Moreover, in [31] the case a ≥ 2 is investigated. For a = 2, blowup is shown to occur when u_0 is big enough, whereas global existence holds when u_0 is small enough. For a > 2 it is proved that if p > m, u_0 ∈ L^∞_loc(R^N) and u_0 goes to 0 at infinity with a suitable rate, then there exists a global bounded solution. Furthermore, for the same initial datum u_0, if 1 < p < m, then there exists a global solution, which could blow up as t → +∞. Our main results in this setting can be summarized as follows. • We prove in Theorem 2.5 global existence and smoothing effects for solutions to (1.5), assuming that the weight ρ : R^N → R is strictly positive, smooth and bounded, so that (1.7) necessarily holds, and assuming the validity of (1.6). In particular, L^m data give rise to global solutions u(t) such that u(t) ∈ L^∞ for all t > 0, with quantitative bounds on their L^∞ norm. By constructing a specific, delicate example, we show in Proposition 6.6 that the bound on the L^∞ norm (which involves a quantity diverging as t → +∞) is qualitatively sharp, in the sense that there are examples of weights for which our running assumptions hold and for which blow-up of solutions in infinite time holds pointwise everywhere (we refer to this property by saying that complete blowup in infinite time occurs). We also prove, by similar methods which follow the lines of [37], different smoothing effects which are stronger for large times, when ρ is in addition assumed to be integrable, see Theorem 2.6. Let us mention that the results in [31] for 1 < p < m are improved here in various directions. In fact, we now consider a larger class of initial data u_0, since we do not require that they are locally bounded; moreover, in [31] no smoothing estimates are addressed.
Furthermore, the fact that for integrable weights ρ we have global existence of bounded solutions does not have a counterpart in [31], nor do the blowup results in infinite time. The paper is organized as follows. In Section 2 we collect the relevant definitions and state our main results, both in the setting of Riemannian manifolds and in the Euclidean, weighted case. In Section 3 we prove some crucial results for an auxiliary elliptic problem, that will then be used in Section 4 to show bounds on the L^p norms of solutions to certain evolution problems posed on geodesic balls. In Section 5 we conclude the proof of our main results for the case of reaction-diffusion problems on manifolds. In Section 6 we briefly comment on the adaptations needed to deal with the weighted Euclidean case, and prove the additional results valid in the case of an integrable weight. We also discuss there a delicate example showing that complete blowup in infinite time may occur under the running assumptions.

Preliminaries and statement of main results

We first define the concept of solution to (1.1) that we shall use hereafter. It will be meant in the very weak, or distributional, sense.

Definition 2.1. Let M be a complete noncompact Riemannian manifold of infinite volume. Let 1 < p < m and u_0 ∈ L^m(M), u_0 ≥ 0. We say that the function u is a solution to problem (1.1) in the time interval [0, T) if u ∈ L^m(M × (0, T)), and for any φ ∈ C^∞_c(M × [0, T]) such that φ(x, T) = 0 for any x ∈ M, u satisfies the equality

−∫_0^T ∫_M u φ_t dµ dt = ∫_0^T ∫_M u^m Δφ dµ dt + ∫_0^T ∫_M u^p φ dµ dt + ∫_M u_0(x) φ(x, 0) dµ.

Moreover, for any T > τ > 0 one has u ∈ L^∞(M × (τ, T)) and there exist numerical constants c_1, c_2 > 0, independent of T, such that, for all t > 0, one has

‖u(t)‖_{L^∞(M)} ≤ c_1 e^{c_2 t} [ ‖u_0‖_{L^m(M)}^{2m/(Np+(m−p)(N+2))} + ‖u_0‖_{L^m(M)}^{2m/(N+(m−1)(N+2))} / t^{2/(N+(m−1)(N+2))} ].
(2.1)

Besides, if q > 1 and u_0 ∈ L^q(M) ∩ L^m(M), then there exists C(q) > 0 such that

‖u(t)‖_{L^q(M)} ≤ e^{C(q)t} ‖u_0‖_{L^q(M)}  for all t > 0.   (2.2)

One may wonder whether the upper bound in (2.1) is qualitatively sharp, since its r.h.s. involves a function of time that tends to +∞ as t → +∞. This is indeed the case, since there is a wide class of situations covered by Theorem 2.2 in which classes of solutions do indeed satisfy ‖u(t)‖_∞ → +∞ as t → +∞ and show even the much stronger property of blowing up pointwise everywhere in infinite time. In fact, as a direct consequence of Theorem 2.2, of known geometrical conditions for the validity of (1.2) and (1.3), and of some results given in [21], we can prove the following result. We stress that this property has no Euclidean analogue for the corresponding reaction-diffusion problem.

Theorem 2.3. Let M be a Cartan-Hadamard manifold and let sec denote sectional curvature, Ric_o denote the Ricci tensor in the radial direction with respect to a given pole o ∈ M. Assume that the following curvature bounds hold everywhere on M, for suitable k_1 ≥ k_2 > 0:

Ric_o(x) ≥ −k_1;  sec ≤ −k_2.

Then the results of Theorem 2.2 hold. Besides, consider any nonnegative solution u to (1.1) corresponding to an initial datum u_0 ∈ L^m(M) which is sufficiently large in the sense that u_0 ≥ v_0 for a suitable function v_0 ∈ C^0_c(M), v_0 > 0 in a geodesic ball B_R with R >

2.1. Weighted reaction-diffusion equations in the Euclidean space. As mentioned in the Introduction, the methods used in proving Theorem 2.2 are general enough, being based on functional inequalities only, to be easily generalized to different contexts. We single out here the one in which reaction-diffusion equations are considered in the Euclidean setting, but in which diffusion takes place in a medium having a nonhomogeneous density, see e.g. [24], [27], [28], [29] and references quoted therein. We consider a weight ρ : R^N → R such that

ρ ∈ C(R^N) ∩ L^∞(R^N),  ρ(x) > 0 for any x ∈ R^N,   (2.3)

and the associated weighted Lebesgue spaces

L^q_ρ(R^N) = { v : R^N → R measurable | ‖v‖_{L^q_ρ} < +∞ },  where ‖v‖_{L^q_ρ}^q := ∫_{R^N} ρ(x) |v(x)|^q dx.

Moreover, we assume that ρ is such that the weighted Poincaré inequality (1.6) holds. By construction and by the assumptions in (2.3) it follows that the weighted Sobolev inequality (1.7) also holds, as a consequence of the usual Sobolev inequality in R^N and of (2.3). Moreover, we let u_0 : R^N → R be such that

u_0 ∈ L^m_ρ(R^N),  u_0(x) ≥ 0 for a.e. x ∈ R^N,

and consider, for any T > 0 and for any 1 < p < m, problem (1.5). The definition of solution we use will be again the very weak one, adapted to the present case.

Definition 2.4. Let 1 < p < m and u_0 ∈ L^m_ρ(R^N), u_0 ≥ 0. Let the weight ρ satisfy (2.3). We say that the function u is a solution to problem (1.5) in the interval [0, T) if u ∈ L^m_ρ(R^N × (0, T)) and for any φ ∈ C^∞_c(R^N × [0, T]) such that φ(x, T) = 0 for any x ∈ R^N, u satisfies the equality

−∫_0^T ∫_{R^N} u φ_t ρ(x) dx dt = ∫_0^T ∫_{R^N} u^m Δφ dx dt + ∫_0^T ∫_{R^N} u^p φ ρ(x) dx dt + ∫_{R^N} u_0(x) φ(x, 0) ρ(x) dx.   (2.4)

Theorem 2.5. Let ρ satisfy (2.3) and assume that the weighted Poincaré inequality (1.6) holds. Let 1 < p < m and u_0 ∈ L^m_ρ(R^N), u_0 ≥ 0. Then problem (1.5) admits a solution for any T > 0, in the sense of Definition 2.4. Moreover, for any T > τ > 0 one has u ∈ L^∞(R^N × (τ, T)) and there exist numerical constants c_1, c_2 > 0, independent of T, such that, for all t > 0, one has

‖u(t)‖_{L^∞(R^N)} ≤ c_1 e^{c_2 t} [ ‖u_0‖_{L^m_ρ(R^N)}^{2m/(Np+(m−p)(N+2))} + ‖u_0‖_{L^m_ρ(R^N)}^{2m/(N+(m−1)(N+2))} / t^{2/(N+(m−1)(N+2))} ].   (2.5)

Besides, if q > 1 and u_0 ∈ L^q_ρ(R^N) ∩ L^m_ρ(R^N), then there exists C(q) > 0 such that

‖u(t)‖_{L^q_ρ(R^N)} ≤ e^{C(q)t} ‖u_0‖_{L^q_ρ(R^N)}  for all t > 0.
Finally, there are examples of weights satisfying the assumptions of the present Theorem and such that sufficiently large initial data u_0 give rise to solutions u(x, t) blowing up pointwise everywhere in infinite time, i.e. such that lim_{t→+∞} u(x, t) = +∞ for all x ∈ R^N, so that in particular ‖u(t)‖_∞ → +∞ as t → +∞ and hence the upper bound in (2.5) is qualitatively sharp. One can take e.g. ρ ≍ |x|^{−2} as |x| → +∞ for this to hold. In the case of integrable weights one can adapt the methods of [37] to prove a stronger result.

Theorem 2.6. Let ρ satisfy (2.3) and ρ ∈ L^1(R^N). Let 1 < p < m and u_0 ∈ L^1_ρ(R^N), u_0 ≥ 0. Then problem (1.5) admits a solution for any T > 0, in the sense of Definition 2.4. Moreover, for any T > τ > 0 one has u ∈ L^∞(R^N × (τ, T)) and there exists C = C(m, p, N, ‖ρ‖_{L^1(R^N)}) > 0, independent of the initial datum u_0, such that, for all t > 0, one has

‖u(t)‖_{L^∞(R^N)} ≤ C ( 1 + 1/((m − 1)t) )^{1/(m−1)}.   (2.6)

Remark 2.7. • The bound (2.6) cannot be replaced by a similar one in which the r.h.s. is replaced by C/((m − 1)t), which would entail ‖u(t)‖_∞ → 0 as t → +∞, as is customary e.g. in the case of solutions to the Porous Medium Equation posed in bounded Euclidean domains (see [41]). In fact, it is possible that stationary, bounded solutions to (1.5) exist, provided a positive bounded solution U to the equation

−ΔU = ρU^a   (2.7)

exists, where a = p/m < 1. If this fact holds, V := U^{1/m} is a stationary, bounded, positive solution to the differential equation in (1.5), whose L^∞ norm is of course constant in time. In turn, a celebrated result of [6] entails that positive, bounded solutions to (2.7) exist if e.g. ρ ≍ |x|^{−2−ε} for some ε > 0 as |x| → +∞ (in fact, a full characterization of the weights for which this holds is given in [6]), a condition which is of course compatible with the assumptions of Theorem 2.6. • Of course, the bound (2.5), which gives stronger information when t → 0, continues to hold under the assumptions of Theorem 2.6.
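The stationarity claim in Remark 2.7 is a one-line substitution; spelling it out (only equation (2.7) and a = p/m are used):

```latex
\begin{aligned}
V &:= U^{1/m} \quad\Longrightarrow\quad V^m = U,\qquad V^p = U^{p/m} = U^{a},\\
\rho\,\partial_t V - \Delta V^m - \rho\,V^p &= 0 - \Delta U - \rho\,U^{a} = 0
\qquad\text{by } (2.7).
\end{aligned}
```

Hence V is a stationary, bounded, positive solution of the differential equation in (1.5), with ‖V‖_{L^∞} constant in time.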
Auxiliary results for elliptic problems Let x 0 , x ∈ M be given. We denote by r(x) = dist (x 0 , x) the Riemannian distance between x 0 and x. Moreover, we let B R (x 0 ) := {x ∈ M, dist (x 0 , x) < R} be the geodesics ball with center x 0 ∈ M and radius R > 0. Let x 0 ∈ M any fixed reference point. We set B R ≡ B R (x 0 ) . As mentioned above, we denote by µ the Riemannian measure on M . For any given function v, we define for any k ∈ R + T k (v) :=      k if v ≥ k v if |v| < k −k if v ≤ −k . For every R > 0, k > 0, consider the problem      u t = ∆u m + T k (u p ) in B R × (0, +∞) u = 0 in ∂B R × (0, +∞) u = u 0 in B R × {0},(3.1) where u 0 ∈ L ∞ (B R ), u 0 ≥ 0. Solutions to problem (3.1) are meant in the weak sense as follows. Definition 3.1. Let p < m. Let u 0 ∈ L ∞ (B R ), u 0 ≥ 0. We say that a nonnegative function u is a solution to problem (3.1) if u ∈ L ∞ (B R × (0, +∞)), u m ∈ L 2 (0, T ); H 1 0 (B R ) for any T > 0, and for any T > 0, ϕ ∈ C ∞ c (B R × [0, T ]) such that ϕ(x, T ) = 0 for every x ∈ B R , u satisfies the equality: − T 0 B R u ϕ t dµ dt = − T 0 B R ∇u m , ∇ϕ dµ dt + T 0 B R T k (u p ) ϕ dµ dt + B R u 0 (x) ϕ(x, 0) dµ. We also consider elliptic problems of the type −∆u = f in B R u = 0 in ∂B R , (3.2) with f ∈ L q (B R ) for some q > 1. Definition 3.2. We say that u ∈ H 1 0 (B R ), u ≥ 0 is a weak subsolution to problem (3.2) if B R ∇u, ∇ϕ dµ ≤ B R f ϕ dµ, for any ϕ ∈ H 1 0 (B R ), ϕ ≥ 0 . The following proposition contains an estimate in the spirit of the celebrated L ∞ estimate of Stampacchia (see, e.g., [25], [3] and references therein). However, the obtained bound and the proof are different. This is due to the fact that we need an estimate independent of the measure of B R , in order to let R → +∞ when we apply such estimate in the proof of global existence for problem (1.1) (see Remark 3.4 below). Indeed recall that, obviously, since M has infinite measure, µ(B R ) → +∞ as R → +∞. Proposition 3.3. 
Let f_1 ∈ L^{m_1}(B_R) and f_2 ∈ L^{m_2}(B_R) where m_1 > N/2, m_2 > N/2. Assume that v ∈ H^1_0(B_R), v ≥ 0 is a subsolution to problem

−Δv = f_1 + f_2  in B_R,
v = 0            on ∂B_R,   (3.3)

in the sense of Definition 3.2. Let k̄ > 0. Then

‖v‖_{L^∞(B_R)} ≤ [ C_1 ‖f_1‖_{L^{m_1}(B_R)} + C_2 ‖f_2‖_{L^{m_2}(B_R)} ]^{(s−1)/s} ‖v‖_{L^1(B_R)}^{(s−1)/s} + k̄,   (3.4)

where

s = 1 + 2/N − 1/l,   (3.5)

N/2 < l < min{m_1, m_2},   (3.6)

C̄_1 = (s/(s−1))^{s/(s−1)} (1/C_s^2) (2/k̄)^{1/l − 1/m_1},  C̄_2 = (s/(s−1))^{s/(s−1)} (1/C_s^2) (2/k̄)^{1/l − 1/m_2},   (3.7)

and

C_1 = C̄_1 ‖v‖_{L^1(B_R)}^{1/l − 1/m_1},  C_2 = C̄_2 ‖v‖_{L^1(B_R)}^{1/l − 1/m_2}.   (3.8)

Remark 3.4. If in Proposition 3.3 we further assume that there exists a constant k_0 > 0 such that max( ‖v‖_{L^1(B_R)}, ‖f_1‖_{L^{m_1}(B_R)}, ‖f_2‖_{L^{m_2}(B_R)} ) ≤ k_0 for all R > 0, then from (3.4) we infer that the bound from above on ‖v‖_{L^∞(B_R)} is independent of R. This fact will have a key role in the proof of global existence for problem (1.1).

Proof of Proposition 3.3. Let us first define

G_k(v) := v − T_k(v),   (3.9)

g(k) := ∫_{B_R} |G_k(v)| dµ.

For any R > 0, for v ∈ L^1(B_R), we set

A_k := {x ∈ B_R : |v(x)| > k}.   (3.10)

We first state two technical Lemmas.

Lemma 3.5. Let v ∈ L^1(B_R). Then g(k) is differentiable almost everywhere in (0, +∞) and

g′(k) = −µ(A_k).

We omit the proof since it is identical to the one given in [3].

Lemma 3.6. Let v ∈ L^1(B_R) and k̄ > 0. Suppose that there exist C > 0 and s > 1 such that

g(k) ≤ C µ(A_k)^s  for any k ≥ k̄.   (3.11)

Then v ∈ L^∞(B_R) and

‖v‖_{L^∞(B_R)} ≤ C^{1/s} (s/(s−1)) ‖v‖_{L^1(B_R)}^{1−1/s} + k̄.   (3.12)

Remark 3.7. Observe that if C in (3.11) does not depend on R and, for some k_0 > 0, ‖v‖_{L^1(B_R)} ≤ k_0 for all R > 0, then, in view of the estimate (3.12), the bound on ‖v‖_{L^∞(B_R)} is independent of R.

Proof of Lemma 3.6. Thanks to Lemma 3.5 together with hypothesis (3.11) we have that

g′(k) = −µ(A_k) ≤ −[C^{−1} g(k)]^{1/s},

hence g(k) ≤ C [−g′(k)]^s. Integrating between k̄ and k we get

∫_{k̄}^{k} −(1/C)^{1/s} dτ ≥ ∫_{k̄}^{k} g′(τ) g(τ)^{−1/s} dτ,   (3.13)

that is:

−C^{−1/s}(k − k̄) ≥ (s/(s−1)) [ g(k)^{1−1/s} − g(k̄)^{1−1/s} ].
Using the definition of g, this can be rewritten as g(k) 1− 1 s ≤ g k 1− 1 s − s − 1 s C − 1 s (k −k) ≤ v 1− 1 s L 1 (B R ) − s − 1 s C − 1 s (k −k) for any k >k. Choose k = k 0 = C 1 s v 1− 1 s L 1 (B R ) s s − 1 +k, and substitute it in the last inequality. Then g(k 0 ) ≤ 0. Due to the definition of g this is equivalent to B R |G k 0 (v)| dµ = 0 ⇐⇒ |G k 0 (v)| = 0 ⇐⇒ |v| ≤ k 0 . As a consequence we have v L ∞ (B R ) ≤ k 0 = s s − 1 C 1 s v 1− 1 s L 1 (B R ) +k. Proof of Proposition 3.3. Take G k (v) as in (3.9) and A k as in (3.10). From now one we write, with a slight abuse of notation, f L q (B R ) = f L q . Since G k (v) ∈ H 1 0 (B R ) and G k (v) ≥ 0, we can take G k (v) as test function in problem (3.3). Then, by means of (1.3), we get B R ∇u · ∇G k (v) dµ ≥ A k |∇v| 2 dµ ≥ B R |∇G k (v)| 2 dµ ≥ C 2 s B R |G k (v)| 2 * dµ 2 2 * . (3.14) If we now integrate on the right hand side of (3.3), thanks to Hölder inequality, we get B R (f 1 + f 2 ) G k (v) dµ = A k f 1 G k (v) dµ + A k f 2 G k (v) dµ ≤ A k |G k (v)| 2 * dµ 1 2 * A k |f 1 | 2N N+2 dµ N+2 2N + A k |f 2 | 2N N+2 dµ N+2 2N ≤ B R |G k (v)| 2 * dµ 1 2 * f 1 L m 1 µ(A k ) N+2 2N 1− 2N m 1 (N+2) + f 2 L m 2 µ(A k ) N+2 2N 1− 2N m 2 (N+2) . (3.15) Combining (3.14) and (3.15) we have C 2 s B R |G k (v)| 2 * dµ 1 2 * ≤ f 1 L m 1 µ(A k ) N+2 2N 1− 2N m 1 (N+2) + f 2 L m 2 µ(A k ) N+2 2N 1− 2N m 2 (N+2) . (3.16) Observe that B R |G k (v)| dµ ≤ B R |G k (v)| 2 * dµ 1 2 * µ(A k ) N+2 2N . (3.17) We substitute (3.17) in (3.16) and we obtain B R |G k (v)| dµ ≤ 1 C 2 s f 1 L m 1 µ(A k ) 1+ 2 N − 1 m 1 + f 2 L m 2 µ(A k ) 1+ 2 N − 1 m 2 . Using the definition of l in (3.6), for any k ≥ k, we can write B R |G k (v)| dµ ≤ 1 C 2 s µ(A k ) 1+ 2 N − 1 l f 1 L m 1 µ(A k ) 1 l − 1 m 1 + f 2 L m 2 µ(A k ) 1 l − 1 m 2 (3.18) Set C = 1 C 2 s f 1 L m 1 2 k v L 1 (B R ) 1 l − 1 m 1 + f 2 L m 2 2 k v L 1 (B R ) 1 l − 1 m 2 . 
Hence, by means of Chebychev inequality, (3.18) reads, for any k ≥k, B R |G k (v)| dµ ≤ C µ(A k ) s ,(3.19) where s has been defined in (3.5). Now, (3.19) corresponds to the hypotheses of Lemma 3.6, hence the thesis of such lemma follows and we have v L ∞ ≤ C 1− 1 s s s − 1 v 1− 1 s L 1 +k . Then the thesis follows thanks to (3.8). 4. L q and smoothing estimates Lemma 4.1. Let 1 < p < m. Let M be such that inequality (1.2) holds. Suppose that u 0 ∈ L ∞ (B R ), u 0 ≥ 0. Let u be the solution of problem (3.1). Then, for any 1 < q < +∞, for some constant C = C(q) > 0, one has u(t) L q (B R ) ≤ e C(q)t u 0 L q (B R ) for all t > 0 . (4.1) Proof. Let x ∈ R, x ≥ 0, 1 < p < m, ε > 0. Then, for any 1 < q < +∞, due to Young's inequality, it follows that x p+q−1 = x (m+q−1)( p−1 m−1 ) x q( m−p m−1 ) ≤ εx (m+q−1)( p−1 m−1 )( m−1 p−1 ) + 1 ε p − 1 m − 1 p−1 m−p x q( m−p m−1 )( m−1 m−p ) = εx m+q−1 + 1 ε p − 1 m − 1 p−1 m−p x q . (4.2) Since u 0 is bounded and T k (u p ) is a bounded and Lipschitz function, by standard results, there exists a unique solution of problem (3.1) in the sense of Definition 3.1; moreover, u ∈ C [0, T ]; L q (B R ) . We now multiply both sides of the differential equation in problem (3.1) by u q−1 and integrate by parts. This can be justified by standard tools, by an approximation procedure. Using the fact that T k (u p ) ≤ u p , thanks to the Poincaré inequality, we obtain for all t > 0 1 q d dt u(t) q L q (B R ) ≤ − 4m(q − 1) (m + q − 1) 2 C 2 p u(t) m+q−1 L m+q−1 (B R ) + u(t) p+q−1 L p+q−1 (B R ) . Now, using inequality (4.2), we obtain 1 q d dt u(t) q L q (B R ) ≤ − 4m(q − 1) (m + q − 1) 2 C 2 p u(t) m+q−1 L m+q−1 (B R ) + ε u(t) m+q−1 L m+q−1 (B R ) + C(ε) u(t) q L q (B R ) , where C(ε) = 1 ε p−1 m−1 p−1 m−p . Thus, for every ε > 0 so small that 0 < ε < 4m(q − 1) (m + q − 1) 2 C 2 p , we have 1 q d dt u(t) q L q (B R ) ≤ C(ε) u(t) q L q (B R ) . 
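The elementary inequality (4.2) used above admits a quick numerical sanity check (illustrative admissible values of m, p, q, ε, not part of the proof):

```python
import numpy as np

def young_split(x, m, p, q, eps):
    # (4.2): x^{p+q-1} <= eps * x^{m+q-1} + C(eps) * x^q for x >= 0, with
    # C(eps) = ( (1/eps) * (p-1)/(m-1) )^{(p-1)/(m-p)}, as in the proof of Lemma 4.1
    C = (1.0 / eps * (p - 1.0) / (m - 1.0)) ** ((p - 1.0) / (m - p))
    return eps * x ** (m + q - 1.0) + C * x ** q - x ** (p + q - 1.0)

m, p, q = 3.0, 2.0, 2.5  # sample values with 1 < p < m, q > 1

# exponent bookkeeping behind (4.2): the two factors of x multiply back correctly
assert abs((m + q - 1) * (p - 1) / (m - 1) + q * (m - p) / (m - 1) - (p + q - 1)) < 1e-12

# the inequality itself, over a range of x and eps
for eps in (0.1, 1.0, 10.0):
    xs = np.linspace(0.0, 50.0, 100001)
    assert np.all(young_split(xs, m, p, q, eps) >= -1e-9)
```

The nonnegativity holds with strict slack: dividing by x^q, one minimizes x^{p−1} − ε x^{m−1} explicitly and finds its maximum strictly below C(ε).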
Hence, we can find C = C(q) > 0 such that d dt u(t) q L q (B R ) ≤ C(q) u(t) q L q (B R ) for all t > 0 . If we set y(t) := u(t) q L q (B R ) , the previous inequality reads y ′ (t) ≤ C(q)y(t) for all t ∈ (0, T ) . Thus the thesis follows. Note that for the constant C(q) in Lemma 4.1 does not depend on R and k > 0; moreover, we have that C(q) → +∞ as q → +∞ . We shall use the following Aronson-Benilan type estimate (see [1]; see also [37, Proposition 2.3]). Proposition 4.2. Let 1 < p < m, u 0 ∈ H 1 0 (B R ) ∩ L ∞ (B R ), u 0 ≥ 0. Let u be the solution to problem (3.1). Then, for a.e. t ∈ (0, T ), Proof of Proposition 4.3. Let us set w = u(·, t). Observe that w m ∈ H 1 0 (B R ) and w ≥ 0. Due to Proposition 4.2 we know that −∆u m (·, t) ≤ u p (·, t) + 1 (m − 1)t u(·, t) in D ′ (B R ). Proof. By arguing as in [1], [37, Proposition 2.3] we get −∆u m (·, t) ≤ T k [u p (·, t)] + 1 (m − 1)t u(·, t) ≤ u p (·, t) + 1 (m − 1)t u(·, t) in D ′ (B R ), since T k (u p ) ≤ u p .− ∆(w m ) ≤ w p + w (m − 1)t . (4.4) Observe that, since u 0 ∈ L ∞ (B R ) also w ∈ L ∞ (B R ). Let q ≥ 1 and r 1 > max q p , N 2 , r 2 > max q, N 2 . We can apply to Proposition 3.3 with r 1 = m 1 , r 2 = m 2 , N 2 < l < min{m 1 , m 2 } . So, we have that w m L ∞ (B R ) ≤ C 1 (r 1 ) w p L r 1 (B R ) + γC 2 (r 2 ) w L r 2 (B R ) s−1 s w m s−1 s L m (B R ) +k ,(4.5) where s = 1 + 2 N . Thanks to Hölder inequality and Young's inequality with exponents α 1 = sm p − q r 1 (s − 1) > 1, β 1 = sm sm − (s − 1) p − q r 1 > 1. we obtain, for any ε 1 > 0 w p L r 1 (B R ) = w p−q/r 1 +q/r 1 L r 1 (B R ) = B R w r 1 (p−q/r 1 ) w q dµ 1 r 1 ≤ w p−q/r 1 L ∞ (B R ) w q L 1 (B R ) 1 r 1 = w p−q/r 1 L ∞ (B R ) B R w q dµ 1 r 1 = w p−q/r 1 L ∞ (B R ) w q/r 1 L q (B R ) ≤ ε α 1 1 α 1 w m p−q/r 1 (p−q/r 1 ) s s−1 L ∞ (B R ) + α 1 − 1 α 1 ε − α 1 α 1 −1 1 w β 1 q r 1 L q (B R ) . (4.6) We set δ 1 := ε α 1 1 α 1 , η(x) = x − 1 x x x−1 . 
Thus from (4.6) we obtain Similarly, again thanks to Hölder inequality and Young's inequality with exponents α 2 = sm 1 − q r 2 (s − 1) > 1, β 2 = sm sm − (s − 1) 1 − q r 2 > 1. we obtain, for any ε 2 > 0 w L r 2 (B R ) ≤ w 1−q/r 2 +q/r 2 L r 2 (B R ) ≤ w 1−q/r 2 L ∞ (B R ) w q/r 2 L q (B R ) ≤ ε α 2 2 α 2 w m 1−q/r 2 1− q r 2 s s−1 L ∞ (B R ) + α 2 − 1 α 2 ε − α 2 α 2 −1 2 w β 2 q r 2 L q (B R ) . We set δ 2 := ε α 2 2 α 2 and thus we obtain w L r 2 (B R ) ≤ δ 2 w m s s−1 L ∞ (B R ) + η(α 2 )L ∞ (B R ) ≤ 2 1 s−1 C 1 w p L r 1 (B R ) + γC 2 w L r 2 (B R ) w m L m (B R ) +k s s−1 ≤ 2 1 s−1    C 1   δ 1 w m s s−1 L ∞ (B R ) + η(α 1 ) δ 1 α 1 −1 1 w smq r 1 1 s(m−p+q/r 1 )+(p−q/r 1 ) L q (B R )   +γC 2   δ 2 w m s s−1 L ∞ (B R ) + η(α 2 ) δ 1 α 2 −1 2 w smq r 2 1 s(m−1+q/r 2 )+(1−q/r 2 ) L q (B R )      w m L m (B R ) + 2 1 s−1k s s−1 . Without loss of generality we can assume that w m L m (B R ) = 0. Choosing ε 1 , ε 2 such that δ 1 = 1 4C 1 w m L m (B R ) δ 2 = 1 4γ C 2 w m L m (B R ) we thus have 1 2 w m s s−1 L ∞ (B R ) ≤ 2 1 s−1 η(α 1 ) 4C α 1 1 w mα 1 L m (B R ) 1 α 1 −1 w smq r 1 1 s(m−p+q/r 1 )+(p−q/r 1 ) L q (B R ) + 2 1 s−1 η(α 2 ) 4γ α 2 C α 2 2 w mα 2 L m (B R ) 1 α 2 −1 w smq r 2 1 s(m−1+q/r 2 )+(1−q/r 2 ) L q (B R ) + 2 1 s−1k s s−1 . This reduces to Now we use the definitions of C 1 , C 2 , C 1 , C 2 introduced in (3.8) and (3.7), obtaining w L ∞ (B R ) ≤ 2 s s−1 η(α 1 )(4C α 1 1 ) 1 α 1 −1 1 m s−1 s w α 1 α 1 −1 s−1 s L m (B R ) w smq r 1 1 s(m−p+q/r 1 )+(p−q/r 1 ) L q (B R ) + 2 s s−1 η(α 2 )(4γ α 2 C α 2 2 ) 1 α 2 −1w L ∞ (B R ) ≤ 2 s s−1 η(α 1 )(4C α 1 1 ) 1 α 1 −1 1 m s−1 s w α 1 α 1 −1 s−1 s 1+ 1 l − 1 r 1 L m (B R ) w smq r 1 1 s(m−p+q/r 1 )+(p−q/r 1 ) L q (B R ) + 2 s s−1 η(α 2 )(4γ α 2 C α 2 2 ) 1 α 2 −1 1 m s−1 s w α 2 α 2 −1 s−1 s 1+ 1 l − 1 r 2 L m (B R ) w smq r 2 1 s(m−1+q/r 2 )+(1−q/r 2 ) L q (B R ) + 2k 1 m . 
By taking limits as r 1 −→ +∞ and r 2 −→ +∞ we have Hence by (4.9) we get α 1 α 1 − 1 −→ m m − p + p s ; α 2 α 2 − 1 −→ m m − 1 + 1 s ; η(α 1 ) −→ p(s − 1) ms p(s−1) ms−p(s−1) 1 − p(s − 1) ms ; η(α 2 ) −→ s − 1 ms s−1 ms−(s−1) 1 − s − 1 ms .w L ∞ (B R ) ≤Γ w m m−p+p/s s−1 s (1+ 1 l ) L m (B R ) + w m m−1+1/s s−1 s (1+ 1 l ) L m (B R ) γ s−1 ms−(s−1) + 2k 1 m . (4.10) Letting l → +∞ in (4.10), we can infer that w L ∞ (B R ) ≤ Γ w 2m mN+2(m−p) L m (B R ) + w 2m mN+2(m−1) L m (B R ) γ 2 mN+2(m−1) + 2k 1 m , (4.11) where Γ 1 =      2 N+2 2 2p m(N + 2) 2p mN+2(m−p) 4 N + 2 N N+2 N 1 C 2 s m(N+2) mN+2(m−p)      2 m(N+2) , Γ 2 =      2 N+2 2 2 m(N + 2) 2 mN+2(m−1) 4 N + 2 N N+2 N 1 C 2 s m(N+2) mN+2(m−1)      2 m(N+2) . Γ := max{Γ 1 , Γ 2 } . Lettingk → 0 in (4.11) we obtain w L ∞ (B R ) ≤ Γ w 2m mN+2(m−p) L m (B R ) + w 2m mN+2(m−1) L m (B R ) γ 2 mN+2(m−1) . (4.12) Finally, since u 0 ∈ L ∞ (B R ), we can apply Lemma 4.1 to w with q = m. Thus from (4.1) with q = m and (4.12), the thesis follows. Proof of Theorems 2.2, 2.3 Proof of Theorem 2.2. Let {u 0,h } h≥0 be a sequence of functions such that u 0,h ∈ L ∞ (M ) ∩ C ∞ c (M ) for all h ≥ 0, u 0,h ≥ 0 for all h ≥ 0, u 0,h 1 ≤ u 0,h 2 for any h 1 < h 2 , u 0,h −→ u 0 in L m (M ) as h → +∞ . For any R > 0, k > 0, h > 0, consider the problem      u t = ∆u m + T k (u p ) in B R × (0, +∞) u = 0 in ∂B R × (0, ∞) u = u 0,h in B R × {0} . (5.1) From standard results it follows that problem (5.1) has a solution u R h,k in the sense of Definition 3.1; moreover, u R h,k ∈ C [0, T ]; L q (B R ) for any q > 1. Hence, it satisfies the inequalities in Lemma 4.1 and in Proposition 4.3, i.e., for any t ∈ (0, +∞), u R h,k (t) L m (B R ) ≤ e Ct u 0,h L m (B R ) ; (5.2) u R h,k (t) L ∞ (B R ) ≤ Γ e Ct u 0,h L m (B R ) 2m mN+2(m−p) + e Ct u 0,h L m (B R ) 2m mN+2(m−1) 1 (m − 1)t 2 mN+2(m−1) . 
(5.3) In addition, for any τ ∈ (0, T ), ζ ∈ C 1 c ((τ, T )), ζ ≥ 0, max [τ,T ] ζ ′ > 0, T τ ζ(t) (u R h,k ) m+1 2 t 2 dµdt ≤ max [τ,T ] ζ ′C B R (u R h,k ) m+1 (x, τ )dµ +C max [τ,T ] ζ B R F u R h,k (x, T ) dµ ≤ max [τ,T ] ζ ′ (t)C u R h,k (τ ) L ∞ (B R ) u R h,k (τ ) m L m (B R ) +C m + p u R h,k (T ) p L ∞ (B R ) u R h,k (T ) m L m (B R ) (5.4) where F (u) =x ∈ B R , u R h,k satisfies − T 0 B R u R h,k ϕ t dµ dt = T 0 B R (u R h,k ) m ∆ϕ dµ dt + T 0 B R T k [(u R h,k ) p ] ϕ dµ dt + B R u 0,h (x) ϕ(x, 0) dµ. (5.5) Observe that all the integrals in (5.5) are finite. Indeed, due to (5.2), u R h,k ∈ L m (B R × (0, T )) hence, since p < m, u R h,k ∈ L p (B R × (0, T )) and u R h,k ∈ L 1 (B R × (0, T )). Moreover, observe that, for any h > 0 and R > 0 the sequence of solutions {u R h,k } k≥0 is monotone increasing in k hence it has a pointwise limit for k → ∞. Let u R h be such limit so that we have u R h,k −→ u R h as k → ∞ pointwise. In view of (5.2), (5.3), the right hand side of (5.4) is independent of k. So, (u R h ) m+1 2 ∈ H 1 ((τ, T ); L 2 (B R )). Therefore, (u R h ) m+1 2 ∈ C [τ, T ]; L 2 (B R ) . We can now pass to the limit as k → +∞ in inequalities (5.2) and (5.3) arguing as follows. From inequality (5.2), thanks to the Fatou's Lemma, one has for all t > 0 u R h (t) L m (B R ) ≤ e Ct u 0,h L m (B R ) . (5.6) On the other hand, from (5.3), since u R h,k −→ u R h as k → ∞ pointwise and the right hand side of (5.3) is independent of k, one has for all t > 0 u R h (t) L ∞ (B R ) ≤ Γ e Ct u 0,h L m (B R ) 2m mN+2(m−p) + e Ct u 0,h L m (B R ) 2m mN+2(m−1) 1 (m − 1)t 2 mN+2(m−1) . (5.7) Note that both (5.6) and (5.7) hold for all t > 0, in view of the continuity property of u deduced above. 
Moreover, thanks to Beppo Levi's monotone convergence Theorem, it is possible to compute the limit as k → +∞ in the integrals of equality (5.5) and hence obtain that, for any ϕ ∈ C ∞ c (B R × (0, T )) such that ϕ(x, T ) = 0 for any x ∈ B R , the function u R h satisfies − T 0 B R u R h ϕ t dµ dt = T 0 B R (u R h ) m ∆ϕ dµ dt + T 0 B R (u R h ) p ϕ dµ dt + B R u 0,h (x) ϕ(x, 0) dµ. (5.8) Observe that, due to inequality (5.6), all the integrals in (5.8) are finite, hence u R h is a solution to problem (5.1), where we replace T k (u p ) with u p itself, in the sense of Definition 3.1. Let us now observe that, for any h > 0, the sequence of solutions {u R h } R>0 is monotone increasing in R, hence it has a pointwise limit as R → +∞. We call its limit function u h so that u R h −→ u h as R → +∞ pointwise. In view of (5.2), (5.3), (5.6), (5.7), the right hand side of (5.4) is independent of k and R. So, (u h ) m+1 2 ∈ H 1 ((τ, T ); L 2 (M )). Therefore, (u h ) m+1 2 ∈ C [τ, T ]; L 2 (M ) . Since u 0 ∈ L m (M ), there exists k 0 > 0 such that u 0h L m (B R ) ≤ k 0 ∀ h > 0, R > 0 . (5.9) Note that, in view of (5.9), the norms in (5.6) and (5.7) do not depend on R (see Proposition 4.3, Lemma 4.1 and Remark 4.4). Therefore, we pass to the limit as R → +∞ in (5.6) and (5.7). By Fatou's Lemma, u h (t) L m (M ) ≤ e Ct u 0,h L m (M ) ; (5.10) furthermore, since u R h −→ u h as R → +∞ pointwise, u h (t) L ∞ (M ) ≤ Γ e Ct u 0,h L m (M ) 2m mN+2(m−p) + e Ct u 0,h L m (M ) 2m mN+2(m−1) 1 (m − 1)t 2 mN+2(m−1) . (5.11) Note that both (5.10) and (5.11) hold for all t > 0, in view of the continuity property of u R h deduced above. Moreover, again by monotone convergence, it is possible to compute the limit as R → +∞ in the integrals of equality (5.8) and hence obtain that, for any ϕ ∈ C ∞ c (M × (0, T )) such that ϕ(x, T ) = 0 for any x ∈ M , the function u h satisfies, − T 0 M u h ϕ t dµ dt = T 0 M (u h ) m ∆ϕ dµ dt + T 0 M (u h ) p ϕ dµ dt + M u 0,h (x) ϕ(x, 0) dµ. 
(5.12) Observe that, due to inequality (5.10), all the integrals in (5.12) are well posed hence u h is a solution to problem (1.1), where we replace u 0 with u 0,h , in the sense of Definition 2.1. Finally, let us observe that {u 0,h } h≥0 has been chosen in such a way that u 0,h −→ u 0 in L m (M ) Observe also that {u h } h≥0 is a monotone increasing function in h hence it has a limit as h → +∞. We call u the limit function. In view (5.2), (5.3), (5.6), (5.7), (5.10), (5.11), the right hand side of (5.4) is independent of k, R and h. So, u m+1 2 ∈ H 1 ((τ, T ); L 2 (M )). Therefore, u m+1 2 ∈ C [τ, T ]; L 2 (M ) . Hence, we can pass to the limit as h → +∞ in (5.10) and (5.11) and similarly to what we have seen above, we get u(t) L m (M ) ≤ e Ct u 0 L m (M ) ,(5.13) and u(t) L ∞ (M ) ≤ Γ e mCt u 0 m L m (M ) 1 m+p/s−p s−1 s + e mCt u 0 m L m (M ) m(s−1) s(m−1)+1 1 (m − 1)t s−1 1+s(m−1) . (5.14) Note that both (5.13) and (5.14) hold for all t > 0, in view of the continuity property of u deduced above. Moreover, again by monotone convergence, it is possible to compute the limit as h → +∞ in the integrals of equality (5.12) and hence obtain that, for any ϕ ∈ C ∞ c (M × (0, T )) such that ϕ(x, T ) = 0 for any x ∈ M , the function u satisfies, Finally, let us discuss (2.2). Let q > 1. If u 0 ∈ L q (M ) ∩ L m (M ), we choose the sequence u 0h so that it further satisfies − T 0 M u ϕ t dµ dt = T 0 M u m ∆ϕ dµ dt + T 0 M u p ϕ dµ dt + M u 0 (x) ϕ(x, 0) dµ.u 0h → u 0 in L q (M ) as h → +∞ . We have that u R h,k (t) L q (B R ) ≤ e Ct u 0,h L q (B R ) . (5.16) Hence, due to (5.16), letting k → +∞, R → +∞, h → +∞, by Fatou's Lemma we deduce (2.2). Proof of Theorem 2.3. We note in first place that the geometrical assumptions on M , in particular the upper curvature bound sec ≤ −k 2 < 0, ensure that inequalities (1.2) and (1.3) both hold on M , see e.g. [13,14]. 
Hence, all the results of Theorem 2.2 hold; in particular, solutions corresponding to data u_0 ∈ L^m(M) exist globally in time. Besides, it has been shown in [21] that if v_0 is a continuous, nonnegative, nontrivial datum, which is sufficiently large in the sense given in the statement, under the lower curvature bound being assumed here the corresponding solution v satisfies the bound

v(x, t) ≥ C ζ(t) [ 1 − r^a η(t) ]_+^{1/(m−1)}  ∀t ∈ (0, S), ∀x ∈ M,

possibly up to a finite explosion time S, which has however been proved in the present paper not to exist. Here, the functions η, ζ are given by:

ζ(t) := (τ + t)^α,  η(t) := (τ + t)^{−β}  for every t ∈ [0, ∞),

where C, τ, R_0, inf_{B_{R_0}} u_0 must be large enough and one can take 0 < α < 1/(m−1), β = (α(m−1)+1)/2. Clearly, v then satisfies lim_{t→+∞} v(x, t) = +∞ for all x ∈ M, and hence u enjoys the same property by comparison.

Proof of Theorems 2.5, 2.6

For any R > 0 we consider the following approximate problem

ρ(x) u_t = Δu^m + ρ(x) u^p  in B_R × (0, T),
u = 0                       in ∂B_R × (0, T),
u = u_0                     in B_R × {0},   (6.1)

here B_R denotes the Euclidean ball with radius R and centre in O. We shall use the following Aronson-Benilan type estimate (see [1]; see also [37, Proposition 2.3]).

Proposition 6.1. Let 1 < p < m, u_0 ∈ H^1_0(B_R) ∩ L^∞(B_R), u_0 ≥ 0. Let u be the solution to problem (6.1). Then, for a.e. t ∈ (0, T),

−Δu^m(·, t) ≤ ρ u^p(·, t) + (ρ/((m − 1)t)) u(·, t)  in D′(B_R).

Proof of Theorem 2.5. The conclusion follows using step by step the same arguments given in the proof of Theorem 2.2, since the necessary functional inequalities are being assumed. We use Proposition 6.1 instead of Proposition 4.2. The last statement of the Theorem will be proved later on in Section 6.1.

In order to prove Theorem 2.6 we adapt the strategy of [37] to the present case, so we shall be concise and limit ourselves to identifying the main steps and differences. Define

dµ := ρ(x) dx.
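The pointwise divergence of the lower barrier used in the proof of Theorem 2.3 above can be illustrated numerically (a sketch: C, τ, a, m, α below are arbitrary admissible placeholders, not the constants of [21]):

```python
def barrier(r, t, C=1.0, tau=1.0, a=1.0, m=3.0, alpha=0.25):
    # lower bound C * zeta(t) * (1 - r^a * eta(t))_+^{1/(m-1)}, with
    # zeta(t) = (tau+t)^alpha, eta(t) = (tau+t)^(-beta), beta = (alpha*(m-1)+1)/2
    beta = (alpha * (m - 1.0) + 1.0) / 2.0
    zeta = (tau + t) ** alpha
    eta = (tau + t) ** (-beta)
    core = max(1.0 - r ** a * eta, 0.0)
    return C * zeta * core ** (1.0 / (m - 1.0))

# the requirement 0 < alpha < 1/(m-1) holds for alpha = 0.25, m = 3;
# for each fixed r, the bound is eventually increasing and unbounded in t
for r in (0.0, 1.0, 10.0, 100.0):
    vals = [barrier(r, t) for t in (1e2, 1e4, 1e6, 1e8)]
    assert vals[-1] > vals[-2] > vals[-3]   # eventually increasing in t
    assert vals[-1] > 50.0                  # and large, of order (tau+t)^alpha
```

Since η(t) → 0, the bracket tends to 1 at every fixed point, and the ζ(t) factor then forces divergence, which is exactly the mechanism behind complete blowup in infinite time.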
For any R > 0, for v ∈ L 1 ρ (B R ), we set A k := {x ∈ B R : |v(x)| > k}. Lemma 6.2. Let v ∈ L 1 ρ (B R ). Suppose that there exist C > 0 and s > 1 such that g(k) ≤ Cµ(A k ) s for any k ∈ R + . Then v ∈ L ∞ (B R ) and v L ∞ (B R ) ≤ C s s − 1 s ρ s−1 L 1 (R N ) . Proof. Arguing as in the proof of Lemma 3.6, we integrate inequality (3.13) between 0 and k and using the definition of g, we obtain g(k) 1− 1 s ≤ v 1− 1 s L 1 ρ (B R ) − s − 1 s C − 1 s k for any k ∈ R + . Choose k = k 0 = C 1 s v 1− 1 s L 1 ρ (B R ) s s − 1 , and substitute it in the last inequality. Then we have g(k 0 ) ≤ 0 ⇐⇒ B R |G k 0 (v)| dµ = 0 ⇐⇒ |G k 0 (v)| = 0 ⇐⇒ |v| ≤ k 0 ⇐⇒ |v| ≤ C 1 s v 1− 1 s L 1 ρ (B R ) s s − 1 . Thanks to the assumption that ρ ∈ L 1 (R N ), we can apply the weighted Hölder inequality to get v L ∞ (B R ) ≤ s s − 1 C 1 s v 1− 1 s L ∞ (B R ) ρ 1− 1 s . Rearranging the terms in the previous inequality we obtain the thesis. Lemma 6.3. Let ρ satisfy (2.3) and ρ ∈ L 1 (R N ). Let f 1 ∈ L m 1 ρ (B R ) and f 2 ∈ L m 2 ρ (B R ) where m 1 > N 2 , m 2 > N 2 . Assume that v ∈ H 1 0 (B R ), v ≥ 0 is a subsolution to problem −∆v = ρ(f 1 + f 2 ) in B R v = 0 on ∂B R . Then v L ∞ (B R ) ≤ C 1 f 1 L m 1 ρ (B R ) + C 2 f 2 L m 2 ρ (B R ) ,(6. 2) where C 1 = 1 C 2 s s s − 1 s ρ 2 N − 1 m 1 L 1 (R N ) , C 2 = 1 C 2 s s s − 1 s ρ 2 N − 1 m 2 L 1 (R N ) , (6.3) with s given by (3.5) . Remark 6.4. If in Lemma 6.3 we further assume that there exists a constant k 0 > 0 such that f 1 L m 1 ρ (B R ) ≤ k 0 , f 2 L m 2 ρ (B R ) ≤ k 0 for all R > 0, then from (6.2), we infer that the bound from above on v L ∞ (B R ) is independent of R. This fact will have a key role in the proof of global existence for problem (1.5). Proof of Lemma 6.3. By arguing as in the proof of Proposition 3.3, we get B R |G k (v)| dµ ≤ 1 C 2 s f 1 L m 1 ρ µ(A k ) 1+ 2 N − 1 m 1 + f 2 L m 2 ρ µ(A k ) 1+ 2 N − 1 m 2 . 
Thus B R |G k (v)| dµ ≤ 1 C 2 s µ(A k ) 1+ 2 N − 1 l f 1 L m 1 ρ ρ 1 l − 1 m 1 L 1 (R N ) + f 2 L m 2 ρ ρ 1 l − 1 m 2 L 1 (R N ) . Now, definingC = 1 C 2 s f 1 L m 1 (B R ) ρ 1 l − 1 m 1 L 1 (R N ) + f 2 L m 2 (B R ) ρ 1 l − 1 m 2 L 1 (R N ) , the last inequality is equivalent to B R |G k (v)| dµ ≤C µ(A k ) s , for any k ∈ R + , where s has been defined in (3.5). Hence, it is possible to apply Lemma 6.2. By using the definitions of C 1 and C 2 in (6.3), we thus have v L ∞ (B R ) ≤ C 1 f 1 L m 1 ρ (B R ) + C 2 f 2 L m 2 ρ (B R ) . Proposition 6.5. Let 1 < p < m, R > 0, u 0 ∈ L ∞ (B R ), u 0 ≥ 0. Let u be the solution to problem (6.1). Let inequality (1.7) holds. Then there exists C = C(p, m, N, C s , ρ L 1 (R N ) ) > 0 such that, for all t > 0, u(t) L ∞ (B R ) ≤ C 1 + 1 (m − 1)t 1 m−1 . Proof. We proceed as in the proof of Proposition 4.3, up to inequality (4.8). Thanks to the fact that ρ ∈ L 1 (R N ), we can apply to (4.4) the thesis of Lemma 6.3. Thus we obtain w m L ∞ (B R ) ≤ C 1 w p L r 1 ρ (B R ) + γC 2 w L r 2 ρ (B R ) . (6.4) Now the constants are α 1 = m p − q r 1 ; α 2 = m 1 − q r 2 ; ε 1 such that δ 1 = 1 4C 1 ; ε 2 such that δ 2 = 1 4γC 2 . Plugging (4.7) and (4.8) into (6.4) we obtain w m L ∞ (B R ) ≤ C 1 w p L r 1 ρ (B R ) + γC 2 w L r 2 ρ (B R ) ≤ C 1   δ 1 w m L ∞ (B R ) + η(α 1 ) δ 1 α 1 −1 1 w mq r 1 1 m−p+q/r 1 L q ρ (B R )   + γC 2   δ 2 w m L ∞ (B R ) + η(α 2 ) δ 1 α 2 −1 2 w mq r 2 1 m−1+q/r 2 L q ρ (B R )   . (6.5) Inequality (6.5) can be rewritten as w L ∞ (B R ) ≤ 2η(α 1 ) (4C α 1 1 ) 1 α 1 −1 1 m w q r 1 1 m−p+q/r 1 L q ρ (B R ) + 2η(α 2 ) (4γ α 2 C α 2 2 ) 1 α 2 −1 1 m w q r 2 1 m−1+q/r 2 L q ρ (B R ) . Computing the limits as r 1 −→ ∞ and r 2 −→ ∞ we have η(α 1 ) −→ p m p m−p 1 − p m ; η(α 2 ) −→ 1 m 1 m−1 1 − 1 m ; w q r 1 1 (m−p+q/r 1 ) L q ρ (B R ) −→ 1; w q r 2 1 (m−1+q/r 2 ) L q ρ (B R ) −→ 1. Moreover we define C := max{Γ 1 , Γ 2 } and notice that, by the above construction, the thesis follows with this choice of C. 
Proof of Theorem 2.6. The conclusion follows by the same arguments as in the proof of Theorem 2.2. However, some minor differences are in order. We replace Proposition 4.3 by Proposition 6.5. Moreover, since u 0 ∈ L 1 ρ (R N ), the family of functions {u 0h } is as follows: u 0,h ∈ L ∞ (R N ) ∩ C ∞ c (R N ) for all h ≥ 0, u 0,h ≥ 0 for all h ≥ 0, u 0,h 1 ≤ u 0,h 2 for any h 1 < h 2 , u 0,h −→ u 0 in L 1 ρ (R N ) as h → +∞ . Furthermore, instead of (5.2), (5.6), (5.10), (5.13), we use the following. By standard arguments (see, e.g. proof of [37, Proposition 2.5-(i)]) we have that u R h,k (t) L 1 ρ (B R ) ≤ C u 0h L 1 ρ (B R ) for all t > 0 , for some positive constant C = C(p, m, N, ρ L 1 (R N ) ), and, for any ε ∈ (0, m − p), 1 0 B R (u R h,k ) p+ε ρ(x)dxdt ≤C , for some positive constantC =C(p, m, N, ρ L 1 (R N ) , u 0 L 1 ρ (R N ) ). Hence, after having passed to the limit as k → +∞, R → +∞, h → +∞, for any T > 0, ϕ ∈ C ∞ c (R N × (0, T )) such that ϕ(x, T ) = 0 for every x ∈ R N , we have that T 0 R N u p+ε ρ(x)ϕ dxdt ≤ C . Therefore, (2.4) holds. 6.1. End of proof of Theorem 2.5: an example of complete blowup in infinite time. We recall that we are assuming m > 1 and 1 < p < m. Let us set r := |x|. We now construct a subsolution to equation ρ u t = ∆u m + ρ u p in R N × (0, T ) , (6.6) under the hypothesis that there exist k 1 and k 2 with k 2 ≥ k 1 > 0 such that k 1 r 2 ≤ 1 ρ(x) ≤ k 2 r 2 for any x ∈ R N \ B e . (6.7) Moreover, due to the running assumptions on the weight there exist positive constants ρ 1 , ρ 2 such that ρ 1 ≤ 1 ρ(x) ≤ ρ 2 for any x ∈ B e . (6.8) Let s(x) :=        log(|x|) if x ∈ R N \ B e , |x| 2 + e 2 2e 2 if x ∈ B e . The requested statements will follow from the following result. Proposition 6.6. Let assumption (2.3), (6.7) and (6.8) be satisfied, and 1 < p < m. 
If the initial datum u 0 is smooth, compactly supported and large enough, then problem (1.5) has a solution u(t) ∈ L ∞ (R N ) for any t ∈ (0, ∞) that blows up in infinite time, in the sense that lim t→+∞ u(x, t) = +∞ ∀x ∈ R N . (6.9) More precisely, if C > 0, a > 0, α > 0, β > 0, T > 0 verify 0 < T −β < a 2 . (6.10) Proof. We construct a suitable subsolution of (6. For any (x, t) ∈ (R N \ B e ) × (0, T ), we have: 0 < α < 1 m − 1 , β = α(m − 1) + 1 2 ,(6.u t = Cα(T + t) α−1 F 1 m−1 − Cβ(T + t) α−1 1 m − 1 F 1 m−1 + Cβ(T + t) α−1 1 m − 1 F 1 m−1 −1 . (u m ) r = − C m a (T + t) mα m m − 1 F 1 m−1 1 r (T + t) −β . (u m ) rr = C m a (T + t) mα m m − 1 F 1 m−1 (T + t) −β r 2 + C m a 2 (T + t) mα m (m − 1) 2 F 1 m−1 −1 (T + t) −2β r 2 . For any (x, t) ∈ B e × (0, T ), we have: For every (x, t) ∈ D 1 , by the previous computations we have Thanks to (6.7), (6.12) becomes u t − 1 ρ ∆u m − u p = Cα(T + t) α−1 F 1 m−1 − Cβ(T + t) α−1 1 m − 1 F 1 m−1 + Cβ(T + t) α−1 1 m − 1 F 1 m−1 −1 + 1 ρ − C m a (T + t) mα−β m m − 1 F 1 m−1 1 r 2 −u t − 1 ρ ∆u m − u p ≤ CF 1 m−1 −1 F α(T + t) α−1 − β m − 1 (T + t) α−1 + (N − 2)k 2 C m−1 a m m − 1 (T + t) mα−β + β m − 1 (T + t) α−1 − C m−1 a 2 m (m − 1) 2 k 1 (T + t) mα−2β − C p−1 (T + t) pα F p+m−2 m−1 ≤ CF 1 m−1 −1 σ(t)F − δ(t) − γ(t)F p+m−2 m−1 where ϕ(F ) := σ(t)F − δ(t) − γ(t)F p+m−2 m−1 , with σ(t) = α − β m − 1 (T + t) α−1 + C m−1 a m m − 1 k 2 (N − 2) (T + t) mα−β , δ(t) = − β m − 1 (T + t) α−1 + C m−1 a 2 m (m − 1) 2 k 1 (T + t) mα−2β , γ(t) = C p−1 (T + t) pα , Our goal is to find suitable C > 0, a > 0, such that ϕ(F ) ≤ 0 , for all F ∈ (0, 1) . To this aim, we impose that and conditions in (6.13) follow. So far, we have proved that u t − 1 ρ(x) ∆(u m ) − u p ≤ 0 in D 1 . Furthermore, since u m ∈ C 1 ([R N \ B e ] × [0, T )) it follows that u is a subsolution to equation (6.6) in [R N \ B e ] × (0, T ). Now, we consider equation (6.6) in B e × (0, T ). 
We observe that, due to condition (6.10),
$$\frac{1}{2} < G < 1 \quad \text{for all } (x,t) \in B_e \times (0,T). \tag{6.14}$$
Similarly to the previous computation we obtain
$$v_t - \frac{1}{\rho}\,\Delta v^m - v^p \,\le\, C\, G^{\frac{1}{m-1}-1}\,\psi(G),$$
where $\psi(G) := \sigma_0\, G - \delta_0 - \gamma\, G^{\frac{p+m-2}{m-1}}$, with
$$\sigma_0(t) = \left(\alpha - \frac{\beta}{m-1}\right)(T+t)^{\alpha-1} + \rho_2\,\frac{N}{e^2}\,\frac{m}{m-1}\,\frac{C^{m-1}}{a}\,(T+t)^{m\alpha-\beta},$$
$$\delta_0(t) = -\frac{\beta}{m-1}\,(T+t)^{\alpha-1}, \qquad \gamma(t) = C^{p-1}\,(T+t)^{p\alpha}.$$
Due to (6.14), $v$ is a subsolution of (6.6) for every $(x,t) \in B_e \times (0,T)$ if
$$2^{\frac{p+m-2}{m-1}}\,(\sigma_0 - \delta_0) \,\le\, \gamma.$$
This last inequality is always verified thanks to (6.11). Hence we have proved that $v$ is a subsolution in $B_e \times (0,T)$, and therefore $w$ is a subsolution to equation (6.6) in $\mathbb{R}^N \times (0,T)$.

Theorem 2.2. Let $M$ be a complete, noncompact manifold of infinite volume such that the Poincaré and Sobolev inequalities (1.2) and (1.3) hold on $M$. Let $1 < p < m$ and $u_0 \in L^m(M)$, $u_0 \ge 0$. Then problem (1.1) admits a solution for any $T > 0$, in the sense of Definition 2.1.

... $R_0$ sufficiently large and, finally, $m := \inf_{B_R} v_0$ is sufficiently large. Then $u$ satisfies $\lim_{t\to+\infty} u(x,t) = +\infty$ for all $x \in M$.

Lemma 3.6. Let $v \in L^1(B_R)$. Let $\bar k > 0$. Suppose that there exist $C > 0$ and $s > 1$ such that
$$g(k) \le C\,\mu(A_k)^s \quad \text{for any } k \ge \bar k. \tag{3.11}$$

Proposition 4.3. Let $1 < p < m$, $R > 0$, $u_0 \in L^\infty(B_R)$, $u_0 \ge 0$. Let $u$ be the solution to problem (3.1). Let $M$ be such that inequality (1.3) holds. Then there exists $\Gamma = \Gamma(p,m,N,C_s) > 0$ such that, for all $t > 0$,
$$\|u(t)\|_{L^\infty(B_R)} \,\le\, \Gamma\left[\left(e^{Ct}\,\|u_0\|_{L^m(B_R)}\right)^{\frac{2m}{mN+2(m-p)}} + e^{Ct}\,\|u_0\|_{L^m(B_R)}\right]\cdots,$$
where the constant $C = C(m) > 0$ is the one given in Lemma 4.1.

Remark 4.4. If in Proposition 4.3, in addition, we assume that for some $k_0 > 0$
$$\|u_0\|_{L^m(B_R)} \le k_0 \quad \text{for every } R > 0,$$
then the bound from above for $\|u(t)\|_{L^\infty(B_R)}$ in (4.3) is independent of $R$.

$$\cdots^{\,(m-1+q/r_2)+(1-q/r_2)}_{\,L^q(B_R)}\, ds,$$ and $\bar C > 0$ is a constant only depending on $m$.
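The qualitative content of the results above — for $1 < p < m$ the reaction term drives unbounded growth, but blow-up can only happen in infinite time — can be sketched numerically. The toy script below integrates a one-dimensional analogue $u_t = (u^m)_{xx} + u^p$ on a bounded interval with an explicit finite-difference scheme; the domain, grid, time horizon and initial datum are illustrative assumptions, not taken from the paper.

```python
# Toy 1D analogue of the porous medium equation with reaction,
#   u_t = (u^m)_xx + u^p,  1 < p < m,
# integrated with an explicit finite-difference scheme on [-L, L]
# with homogeneous Dirichlet boundary conditions.
# All parameter values are illustrative only.
import numpy as np

m, p = 2.0, 1.5          # exponents with 1 < p < m, as in the theorems
L, nx = 5.0, 101
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
u = np.maximum(1.0 - x**2, 0.0)   # compactly supported initial datum

dt = 1e-4                # small step for stability of the explicit scheme
for _ in range(2000):    # integrate up to t = 0.2
    um = u**m
    lap = np.zeros_like(u)
    lap[1:-1] = (um[2:] - 2.0 * um[1:-1] + um[:-2]) / dx**2
    u = u + dt * (lap + u**p)
    u[0] = u[-1] = 0.0   # Dirichlet boundary

# At any fixed time the solution stays finite and nonnegative,
# consistent with global existence for 1 < p < m in this
# bounded-domain toy setting.
assert np.all(np.isfinite(u)) and u.min() >= 0.0
print(u.max())
```

Since the spatially constant solution of $v' = v^p$, $v(0) = \max u_0$, is a supersolution, the computed sup-norm remains below that ODE value on the time window shown, while over long times the reaction makes the solution grow without a finite-time singularity.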
Inequality (5.4) is formally obtained by multiplying the differential inequality in problem (3.1) by $\zeta(t)\,[(u^m)_t]$ and integrating by parts; indeed, a standard approximation procedure is needed (see [18, Lemma 3.3] and [2, Theorem 13]). Moreover, as a consequence of Definition 3.1, for any $\varphi \in C^\infty_c(B_R \times [0,T])$ such that $\varphi(x,T) = 0$ for any $x$, due to inequality (5.13), all the integrals in (5.15) are finite, hence $u$ is a solution to problem (1.1) in the sense of Definition 2.1.

... for any $x \in \mathbb{R}^N$, then the solution $u$ of problem (1.5) satisfies (6.9) and the bound from below $u(x,t) \ge C(T+t)$ for any $(x,t) \in \mathbb{R}^N \times (0,+\infty)$.

Define, for all $(x,t) \in \mathbb{R}^N$,
$$w(x,t) \equiv w(r(x),t) := \begin{cases} u(x,t) & \text{in } [\mathbb{R}^N \setminus B_e] \times (0,T),\\ v(x,t) & \text{in } B_e \times (0,T),\end{cases}$$
and $D_1 := \big\{(x,t) \in (\mathbb{R}^N \setminus B_e) \times (0,T) \ \big|\ 0 < F(r,t) < 1\big\}$.

$$\cdots + (N-1)\,\frac{C^m}{a}\,(T+t)^{m\alpha-\beta}\,\frac{m}{m-1}\,F^{\frac{1}{m-1}}\,\frac{1}{r^2} - C^p\,(T+t)^{p\alpha}.$$

$$m(m-1)\,\sigma(t) \,\le\, (p+m-2)\,\gamma(t). \tag{6.13}$$

Observe that, thanks to the choice in (6.11) and by choosing ..., $v_t - \frac{1}{\rho}\,\Delta(v^m) - v^p \le 0$ in $B_e \times (0,T)$. Moreover, $w^m \in C^1(\mathbb{R}^N \times [0,T))$; indeed, $(u^m)_r = (v^m)_r = -C^m \zeta(t)\cdots$ on $\partial B_e \times (0,T)$.

References

[1] D.G. Aronson, P. Bénilan, Régularité des solutions de l'équation des milieux poreux dans $\mathbb{R}^N$, C. R. Acad. Sci. Paris Sér. A-B 288 (1979), 103-105.
[2] D. Aronson, M.G. Crandall, L.A. Peletier, Stabilization of solutions of a degenerate nonlinear diffusion problem, Nonlinear Anal. 6 (1982), 1001-1022.
[3] L. Boccardo, G. Croce, "Elliptic partial differential equations. Existence and regularity of distributional solutions", De Gruyter, Studies in Mathematics, 55, 2013.
[4] C. Bandle, M.A. Pozio, A. Tesei, The Fujita exponent for the Cauchy problem in the hyperbolic space, J. Differential Equations 251 (2011), 2143-2163.
[5] M. Bonforte, G. Grillo, Asymptotics of the porous media equation via Sobolev inequalities, J. Funct. Anal. 225 (2005), 33-62.
[6] H. Brezis, S. Kamin, Sublinear elliptic equations in $\mathbb{R}^n$, Manuscripta Math. 74 (1992), 87-106.
[7] X. Chen, M. Fila, J.S. Guo, Boundedness of global solutions of a supercritical parabolic equation, Nonlinear Anal. 68 (2008), 621-628.
[8] E.B. Davies, "Heat kernels and spectral theory", Cambridge Tracts in Mathematics, 92, Cambridge University Press, Cambridge, 1990.
[9] K. Deng, H.A. Levine, The role of critical exponents in blow-up theorems: the sequel, J. Math. Anal. Appl. 243 (2000), 85-126.
[10] H. Fujita, On the blowing up of solutions of the Cauchy problem for $u_t = \Delta u + u^{1+\alpha}$, J. Fac. Sci. Univ. Tokyo Sect. I 13 (1966), 109-124.
[11] Y. Fujishima, K. Ishige, Blow-up set for type I blowing up solutions for a semilinear heat equation, Ann. Inst. H. Poincaré Anal. Non Linéaire 31 (2014), 231-247.
[12] V.A. Galaktionov, J.L. Vázquez, Continuation of blowup solutions of nonlinear heat equations in several dimensions, Comm. Pure Appl. Math. 50 (1997), 1-67.
[13] A. Grigor'yan, Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds, Bull. Amer. Math. Soc. 36 (1999), 135-249.
[14] A. Grigor'yan, "Heat Kernel and Analysis on Manifolds", AMS/IP Studies in Advanced Mathematics, 47, American Mathematical Society, Providence, RI; International Press, Boston, MA, 2009.
[15] G. Grillo, K. Ishige, M. Muratori, Nonlinear characterizations of stochastic completeness, J. Math. Pures Appl. 139 (2020), 63-82.
[16] G. Grillo, M. Muratori, Radial fast diffusion on the hyperbolic space, Proc. Lond. Math. Soc. 109 (2014), 283-317.
[17] G. Grillo, M. Muratori, Smoothing effects for the porous medium equation on Cartan-Hadamard manifolds, Nonlinear Anal. 131 (2016), 346-362.
[18] G. Grillo, M. Muratori, M.M. Porzio, Porous media equations with two weights: smoothing and decay properties of energy solutions via Poincaré inequalities, Discrete Contin. Dyn. Syst. 33 (2013), 3599-3640.
[19] G. Grillo, M. Muratori, F. Punzo, The porous medium equation with large initial data on negatively curved Riemannian manifolds, J. Math. Pures Appl. 113 (2018), 195-226.
[20] G. Grillo, M. Muratori, F. Punzo, The porous medium equation with measure data on negatively curved Riemannian manifolds, J. European Math. Soc. 20 (2018), 2769-2812.
[21] G. Grillo, M. Muratori, F. Punzo, Blow-up and global existence for the porous medium equation with reaction on a class of Cartan-Hadamard manifolds, J. Diff. Eq. 266 (2019), 4305-4336.
[22] G. Grillo, M. Muratori, J.L. Vázquez, The porous medium equation on Riemannian manifolds with negative curvature. The large-time behaviour, Adv. Math. 314 (2017), 328-377.
[23] K. Hayakawa, On nonexistence of global solutions of some semilinear parabolic differential equations, Proc. Japan Acad. 49 (1973), 503-505.
[24] S. Kamin, P. Rosenau, Nonlinear thermal evolution in an inhomogeneous medium, J. Math. Phys. 23 (1982), 1385-1390.
[25] D. Kinderlehrer, G. Stampacchia, "An Introduction to Variational Inequalities and Their Applications", Academic Press, New York, 1980.
[26] H.A. Levine, The role of critical exponents in blow-up theorems, SIAM Rev. 32 (1990), 262-288.
[27] A.V. Martynenko, A.F. Tedeev, On the behavior of solutions of the Cauchy problem for a degenerate parabolic equation with nonhomogeneous density and a source (Russian), Zh. Vychisl. Mat. Mat. Fiz. 48 (2008), no. 7, 1214-1229; transl. in Comput. Math. Math. Phys. 48 (2008), no. 7, 1145-1160.
[28] A.V. Martynenko, A.F. Tedeev, V.N. Shramenko, The Cauchy problem for a degenerate parabolic equation with inhomogeneous density and a source in the class of slowly vanishing initial functions (Russian), Izv. Ross. Akad. Nauk Ser. Mat. 76 (2012), no. 3, 139-156; transl. in Izv. Math. 76 (2012), no. 3, 563-580.
[29] A.V. Martynenko, A.F. Tedeev, V.N. Shramenko, On the behavior of solutions of the Cauchy problem for a degenerate parabolic equation with source in the case where the initial function slowly vanishes, Ukrainian Math. J. 64 (2013), 1698-1715.
[30] G. Meglioli, F. Punzo, Blow-up and global existence for solutions to the porous medium equation with reaction and slowly decaying density, J. Diff. Eq., to appear.
[31] G. Meglioli, F. Punzo, Blow-up and global existence for solutions to the porous medium equation with reaction and fast decaying density, preprint (2019).
[32] N. Mizoguchi, F. Quirós, J.L. Vázquez, Multiple blow-up for a porous medium equation with reaction, Math. Ann. 350 (2011), 801-827.
[33] F. Punzo, Support properties of solutions to nonlinear parabolic equations with variable density in the hyperbolic space, Discrete Contin. Dyn. Syst. Ser. S 5 (2012), 657-670.
[34] F. Punzo, Blow-up of solutions to semilinear parabolic equations on Riemannian manifolds with negative sectional curvature, J. Math. Anal. Appl. 387 (2012), 815-827.
[35] P. Quittner, The decay of global solutions of a semilinear heat equation, Discrete Contin. Dyn. Syst. 21 (2008), 307-318.
[36] P. Souplet, Morrey spaces and classification of global solutions for a supercritical semilinear heat equation in $\mathbb{R}^n$, J. Funct. Anal. 272 (2017), 2005-2037.
[37] P.E. Sacks, Global behavior for a class of nonlinear evolution equations, SIAM J. Math. Anal. 16 (1985), 233-250.
[38] A.A. Samarskii, V.A. Galaktionov, S.P. Kurdyumov, A.P. Mikhailov, "Blow-up in Quasilinear Parabolic Equations", De Gruyter Expositions in Mathematics, 19, Walter de Gruyter & Co., Berlin, 1995.
[39] J.L. Vázquez, The problems of blow-up for nonlinear heat equations. Complete blow-up and avalanche formation, Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei Mat. Appl. 15 (2004), 281-300.
[40] J.L. Vázquez, "Smoothing and decay estimates for nonlinear diffusion equations. Equations of porous medium type", Oxford Lecture Series in Mathematics and its Applications, 33, Oxford University Press, Oxford, 2006.
[41] J.L. Vázquez, "The Porous Medium Equation. Mathematical Theory", Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, Oxford, 2007.
[42] J.L. Vázquez, Fundamental solution and long time behavior of the porous medium equation in hyperbolic space, J. Math. Pures Appl. 104 (2015), 454-484.
[43] Z. Wang, J. Yin, A note on semilinear heat equation in hyperbolic space, J. Differential Equations 256 (2014), 1151-1156.
[44] Z. Wang, J. Yin, Asymptotic behaviour of the lifespan of solutions for a semilinear heat equation in hyperbolic space, Proc. Roy. Soc. Edinburgh Sect. A 146 (2016), 1091-1114.
[45] F.B. Weissler, $L^p$-energy and blow-up for a semilinear heat equation, Proc. Sympos. Pure Math. 45 (1986), 545-551.
[46] E. Yanagida, Behavior of global solutions of the Fujita equation, Sugaku Expositions 26 (2013), 129-147.
[47] Q.S. Zhang, Blow-up results for nonlinear parabolic equations on manifolds, Duke Math. J. 97 (1999), 515-539.
DOI: 10.1103/PhysRevB.102.174509
arXiv:2008.09620
Electronic structure, magnetism and high-temperature superconductivity in the multi-layer octagraphene and octagraphite

Jun Li,1 Shangjian Jin,1 Fan Yang,2 and Dao-Xin Yao1
1 School of Physics, State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-Sen University, Guangzhou 510275, People's Republic of China
2 School of Physics, Beijing Institute of Technology, Beijing 100081, China

(Dated: August 25, 2020)

We systematically investigate the electronic structure, magnetism and high-temperature superconductivity (SC) of multi-layer octagraphene and octagraphite (bulk octagraphene). A tight-binding model is used to fit the electronic structures of single-layer and multi-layer octagraphene and of octagraphite. From an energy analysis we find that multi-layer octagraphene and octagraphite adopt a simple A-A stacking. The van der Waals interaction induces an interlayer hopping t⊥ ≈ 0.25 eV, while the hopping integrals within each layer change little as the layer number n increases. The single-layer octagraphene at half-filling shows a well-nested Fermi surface with nesting vector Q = (π, π), which can induce a 2D Néel antiferromagnetic order. As the layer number n → ∞, the Fermi-surface nesting becomes three-dimensional, with nesting vector Q = (π, π, π), indicating that the system has a 3D Néel antiferromagnetic order. Upon doping, the multi-layer octagraphene and octagraphite can enter a high-temperature s±-wave SC driven by spin fluctuations.
We evaluate the superconducting transition temperature Tc by using the random-phase approximation (RPA), which yields a high Tc even when the layer number n ≥ 3. Our study shows that multi-layer octagraphene and octagraphite are promising candidates for realizing high-temperature SC.

arXiv:2008.09620v1 [cond-mat.supr-con]

I. INTRODUCTION

Two-dimensional (2D) superconductors have drawn tremendous interest for their rich physical properties and potential applications. So far, SC has been reported in many 2D materials, such as FeSe/SrTiO3 [1], monolayer NbSe2 [2], MoS2 [3], CuO2 [4], Bi2Sr2CaCu2O8+δ [5], etc. As the first single-layer 2D material, graphene shows an interesting proximity-induced superconductivity when it is in contact with SC materials [6]. Besides, few-layer graphene with doping may exhibit a considerable superconducting transition temperature Tc [7-11], higher than the reported Tc in bulk compounds of the same composition [12]. Recently, a "high-temperature SC" with Tc ∼ 1.7 K has been revealed in magic-angle twisted bi-layer graphene [13]. These advances suggest that combinations of layers and the interactions between them can strongly influence the properties of 2D materials. Theoretically, the SC of graphene-based 2D materials has been widely studied via the Eliashberg theory within the framework of the electron-phonon coupling (BCS) mechanism [14-19]. By doping and applying a biaxial stress, the highest Tc of graphene-based materials has been proposed to reach 30 K [18]. In addition to graphene, various forms of graphyne have been predicted and some have been synthesized [20]. It has been predicted that α-graphyne would exhibit SC with Tc ∼ 12 K under hole doping and biaxial tensile strain [21]. The hexagonal symmetry of graphene or graphyne is unfavorable for forming a nested Fermi surface with a high density of states, which is important for high-temperature superconductivity.

Another 2D carbon-based material is octagraphene [22,23]. Astonishingly, the 2D square-octagon lattice structure of single-layer octagraphene leads to a high density of states near the well-nested Fermi surface (FS), which may induce an antiferromagnetic spin-density-wave (SDW) order. The BCS mechanism based on the electron-phonon interaction is not enough to describe the pairing, and the SC mainly originates from spin fluctuations. Our recent study of a repulsive Hubbard model on a square-octagon lattice with nearest-neighbor and next-nearest-neighbor hopping terms, which can serve as a rough representation of single-layer octagraphene, shows that the system can host high-temperature SC with s±-wave pairing symmetry [24]. Unlike the complex forms of other 2D superconductors, the simple structure of octagraphene may make it an ideal platform for studying the origin of high-temperature SC. In real materials, multi-layer octagraphene and octagraphite may be more common; here we intend to study the electronic structure, magnetism and high-temperature superconductivity of multi-layer octagraphene and octagraphite. Meanwhile, the synthesis of octagraphene, multi-layer octagraphene and octagraphite is in progress. While a novel synthesis route for single-layer octagraphene has been proposed theoretically [25], one-dimensional carbon nanoribbons with partial four- and eight-membered rings have been realized experimentally [26]. As octagraphene shows a low cohesive energy [23], it has the opportunity to become the strongest carbon atomic sheet after graphene. In this paper, we obtain an improved tight-binding (TB) model to study the band structure of single-layer octagraphene. In comparison with our previous work [24], the present Hamiltonian adopts hopping integrals fitted from density-functional theory (DFT) calculations and is thus more realistic.
Another 2D carbon-based material is the octagraphene 22,23 . Astonishingly, the 2D square-octagon lattice structure of the single-layer octagraphene leads to a high density of states near the well-nested Fermi-surface (FS), which may induce an antiferromagnetic spin-density-wave (SDW) order. The BCS mechanism based on electron-phonon interaction is not enough to describe the pairing and the SC mainly origi-nates from spin fluctuation. Our recent research on a repulsive Hubbard model on a square-octagon lattice with nearestneighbor and next-nearest-neighbor hopping terms, which can serve as a rough representation of the single-layer octagraphene, shows that the system can host the high-temperature SC with s ± -wave pairing symmetry 24 . Unlike the complex forms of other 2D superconductors, the simple structure of octagraphene may be an ideal platform for studying the origin of high-temperature SC. In real materials, multi-layer octagraphene and octagraphite may be more common. We here attend to study the electronic structures, magnetism and high-temperature superconductivity in the multi-layer octagraphene and octagraphite. Meanwhile, the synthesizations of octagraphene, multilayer octagraphene and octagraphite are in progress. While a novel synthesization route of single-layer octagraphene has been proposed theoretically 25 , an one-dimensional carbon nanoribbons with partial four and eight-membered rings has been realized experimentally 26 . As octagraphene shows a low cohesive energy 23 , it has an opportunity to build the strongest carbon atomic sheet after graphene. In this paper, we get a better tight binding (TB) model model to study the band structure of single-layer octagraphene. In comparison with our previous work 24 , the present Hamiltonian adopts hopping integrals fitted from the density-functional theory (DFT) calculations and are thus more realistic. 
Unlike the complex stacking of graphene, our DFT calculations suggest that multi-layer octagraphene most likely adopts A-A stacking. There is well-defined Fermi-surface nesting with nesting vector Q = (π, π) for single-layer octagraphene at half filling, which can induce a 2D Néel antiferromagnetic order. With increasing layer number n → ∞, the Fermi-surface nesting becomes three-dimensional with nesting vector Q = (π, π, π), indicating that the system has a 3D Néel antiferromagnetic order. Upon doping, multi-layer octagraphene and octagraphite can enter a high-temperature s± SC driven by spin fluctuation. We calculate the T_c of single-layer octagraphene, multi-layer octagraphene and octagraphite, and find that the interlayer interaction does not affect the superconducting state much. With increasing n, T_c converges to ∼ 170 K, which is still high. The rest of the paper is organized as follows. In Sec. II we provide our model and the details of our methods. In Sec. III, we present the calculation for single-layer octagraphene and compare with our previous work. In Sec. IV, we study the properties of multi-layer octagraphene. Sec. V provides the results for octagraphite, which differs from the multi-layer octagraphenes, and gives our estimate of T_c with increasing layer number n. Finally, in Sec. VI we provide the conclusions.

II. MODEL AND APPROACH

A. The Model

We use the projector augmented wave (PAW) method implemented in the Vienna ab initio simulation package (VASP) to perform the density functional theory (DFT) calculations [27][28][29][30]. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional is used to treat the electron exchange-correlation potential 31. A vacuum layer of 15 Å is set to avoid spurious interactions between periodic images. Grimme's DFT-D3 is chosen to correct the van der Waals interaction 32.
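The DFT settings quoted here (PBE, DFT-D3, 1500 eV cutoff) could be collected in an INCAR along the following lines. This is only a hypothetical sketch assembled from the values stated in the text, not the authors' actual input files; every tag value other than ENCUT and IVDW is an assumption.

```
! Hypothetical INCAR sketch for the calculations described in the text.
! PBE is VASP's default GGA functional, so no GGA tag is strictly needed.
ENCUT  = 1500      ! plane-wave cutoff (eV), as quoted in the text
EDIFF  = 1E-6      ! electronic convergence criterion (assumed)
ISMEAR = 0         ! Gaussian smearing (assumed)
SIGMA  = 0.05      ! smearing width in eV (assumed)
IVDW   = 11        ! Grimme DFT-D3 (zero damping) van der Waals correction
```

The 16×16×1 Monkhorst-Pack mesh mentioned below would go in a separate KPOINTS file.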
An extremely high cutoff energy (1500 eV) and a 16×16×1 k-point mesh with the Monkhorst-Pack scheme are used in the self-consistent calculation. To quantitatively analyze the band structures from the DFT calculations, we build a tight-binding (TB) model to describe single-layer octagraphene, multi-layer octagraphene and octagraphite. The Hamiltonian can be expressed as

H_{TB} = -\sum_{\langle i,j\rangle,\sigma} t_{ij}\, c^{\dagger}_{i\sigma} c_{j\sigma} - \sum_{\langle i,j\rangle} t_{\perp}\, c^{\dagger}_{i} c_{j} + \mathrm{H.c.},  (1)

where c^{\dagger}_{i\sigma} (c_{i\sigma}) is the electron creation (annihilation) operator for site i with spin σ, t_{ij} are the hopping energies defined in Fig. 1(c), and t_⊥ represents the van der Waals interlayer hopping between neighboring layers. Note that the matrix form of Eq. (1) differs for single-layer octagraphene, multi-layer octagraphene and octagraphite. As in graphene, there are strong Coulomb repulsions between the 2p_z electrons in the octagraphene materials. Here we use an effective Hubbard model to describe these effects,

H_{Hubbard} = H_{TB} + U\sum_{i} n_{i\uparrow} n_{i\downarrow},  (2)

where the U term represents the on-site repulsive Hubbard interaction between 2p_z electrons on the same site.

B. The RPA approach

We use the RPA procedure outlined in our prior work 24,33 to solve Eq. (2). Generally neglecting the frequency dependence, we define the free susceptibility for U = 0 as

\chi^{(0)pq}_{st}(\mathbf{q}) = \frac{1}{N}\sum_{\mathbf{k},\alpha,\beta} \xi^{\alpha}_{t}(\mathbf{k})\,\xi^{\alpha*}_{s}(\mathbf{k})\,\xi^{\beta}_{q}(\mathbf{k}')\,\xi^{\beta*}_{p}(\mathbf{k}')\, \frac{n_F\!\left(\varepsilon^{\beta}_{\mathbf{k}'}\right) - n_F\!\left(\varepsilon^{\alpha}_{\mathbf{k}}\right)}{\varepsilon^{\alpha}_{\mathbf{k}} - \varepsilon^{\beta}_{\mathbf{k}'}},  (3)

where α, β = 1, 2, 3, 4 are band indices, q = k' − k is the nesting vector between k' and k, ε^α_k and ξ^α(k) are the αth eigenvalue and eigenvector of the matrix form of Eq. (1), respectively, and n_F is the Fermi-Dirac distribution function. At the RPA level, the charge (spin) susceptibility for the Hubbard model is

\chi^{(c(s))}(\mathbf{q}) = \left[I +(-)\, \chi^{(0)}(\mathbf{q})\, U\right]^{-1} \chi^{(0)}(\mathbf{q}),  (4)

where χ^{(c(s))}(q), χ^{(0)}(q) and U are 16×16 matrices with U^{pq}_{st} = U δ_{s=t=p=q}. A Cooper pair with momentum k and orbital indices (t, s) can be scattered to k' and (p, q) by charge or spin fluctuations.
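The matrix structure of Eq. (4) can be illustrated with a tiny numerical sketch. The bare susceptibility below is a made-up symmetric matrix, not one computed from the real band structure; the point is only the generic RPA behavior, namely enhancement of the spin channel and suppression of the charge channel relative to χ^(0).

```python
import numpy as np

# Toy illustration of the RPA dressing of Eq. (4):
#   chi_s = [I - chi0 U]^-1 chi0   (spin channel, "-" sign)
#   chi_c = [I + chi0 U]^-1 chi0   (charge channel, "+" sign)
# chi0 here is an invented symmetric bare-susceptibility matrix, NOT derived
# from the octagraphene bands; U acts as U * identity in this toy basis.
rng = np.random.default_rng(0)
A = rng.random((4, 4))
chi0 = 0.01 * A @ A.T            # symmetric, positive semi-definite toy chi0
U = 5.4 * np.eye(4)              # on-site repulsion scale used in the text (eV)

chi_s = np.linalg.solve(np.eye(4) - chi0 @ U, chi0)
chi_c = np.linalg.solve(np.eye(4) + chi0 @ U, chi0)

# Spin fluctuations are enhanced and charge fluctuations suppressed relative
# to chi0; chi_s diverges (magnetic instability) when max eig(chi0 U) -> 1.
print(np.linalg.eigvalsh(chi_s).max() > np.linalg.eigvalsh(chi0).max())  # True
print(np.linalg.eigvalsh(chi_c).max() < np.linalg.eigvalsh(chi0).max())  # True
```

The divergence condition of the spin channel is what produces the sharp peak of χ(q) at the nesting vector discussed below.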
At the RPA level, projecting the effective interaction onto the two bands that cross the Fermi surface, we obtain the following low-energy effective Hamiltonian for the Cooper pairs near the Fermi surface,

V_{eff} = \frac{1}{N}\sum_{\alpha\beta,\mathbf{k}\mathbf{k}'} V^{\alpha\beta}(\mathbf{k},\mathbf{k}')\, c^{\dagger}_{\alpha}(\mathbf{k})\, c^{\dagger}_{\alpha}(-\mathbf{k})\, c_{\beta}(-\mathbf{k}')\, c_{\beta}(\mathbf{k}'),  (5)

where α, β = 1, 2 and

V^{\alpha\beta}(\mathbf{k},\mathbf{k}') = \mathrm{Re}\sum_{pqst} \Gamma^{pq}_{st}(\mathbf{k},\mathbf{k}',0)\, \xi^{\alpha*}_{p}(\mathbf{k})\,\xi^{\alpha*}_{q}(-\mathbf{k})\,\xi^{\beta}_{s}(-\mathbf{k}')\,\xi^{\beta}_{t}(\mathbf{k}').  (6)

In the singlet channel, the effective vertex Γ^{pq}_{st}(k, k') is given by

\Gamma^{pq}_{st}(\mathbf{k},\mathbf{k}') = U^{pt}_{qs} + \frac{1}{4}\left[U\left(3\chi^{(s)}(\mathbf{k}-\mathbf{k}') - \chi^{(c)}(\mathbf{k}-\mathbf{k}')\right)U\right]^{pt}_{qs} + \frac{1}{4}\left[U\left(3\chi^{(s)}(\mathbf{k}+\mathbf{k}') - \chi^{(c)}(\mathbf{k}+\mathbf{k}')\right)U\right]^{ps}_{qt},  (7)

while in the triplet channel it is

\Gamma^{pq}_{st}(\mathbf{k},\mathbf{k}') = -\frac{1}{4}\left[U\left(\chi^{(s)}(\mathbf{k}-\mathbf{k}') + \chi^{(c)}(\mathbf{k}-\mathbf{k}')\right)U\right]^{pt}_{qs} + \frac{1}{4}\left[U\left(\chi^{(s)}(\mathbf{k}+\mathbf{k}') + \chi^{(c)}(\mathbf{k}+\mathbf{k}')\right)U\right]^{ps}_{qt}.  (8)

From the low-energy effective Hamiltonian Eq. (5) we can construct the following linear integral gap equation to determine T_c and the leading pairing symmetry of the system,

-\frac{1}{(2\pi)^2}\sum_{\beta}\oint_{FS} dk'_{\parallel}\, \frac{V^{\alpha\beta}(\mathbf{k},\mathbf{k}')}{v^{\beta}_{F}(\mathbf{k}')}\, \Delta_{\beta}(\mathbf{k}') = \lambda\, \Delta_{\alpha}(\mathbf{k}).  (9)

Here, the integration and summation run along the Fermi-surface patches labeled by α or β; v^β_F(k') is the Fermi velocity at k' on the βth Fermi-surface patch, and k'_∥ is the component along that patch. In this eigenvalue problem, the normalized eigenvector Δ_α(k) represents the relative value of the gap function on the αth Fermi-surface patch. The largest pairing eigenvalue λ is used to estimate T_c through

\lambda^{-1} = \ln\frac{1.13\,\omega_D}{k_B T_c},  (10)

where we choose the typical spin-fluctuation energy scale ω_D = 0.3 eV throughout our calculations; see Ref. 33.

III. SINGLE-LAYER OCTAGRAPHENE

In our DFT calculation for single-layer octagraphene, a fit to the Birch-Murnaghan equation of state gives an accurate lattice constant a_0 = 3.44 Å. We note that the relative positions of the carbon atoms are almost independent of the lattice constant a. The rotational symmetry of the σ bonds of octagraphene is lower than that of graphene, and hence octagraphene is less stable than graphene.
The remaining p-orbital electrons form π bonds similar to those in graphene. In Fig. 2(a), we show our DFT-calculated band structures for several lattice constants a. There are two bands, ε_2 and ε_3, near the Fermi level. For a/a_0 = 0.9, the bands are quadruply degenerate at the M point with E = -3.01 eV. This coincidence is different from a Dirac point: the structure is not bi-conical with linear dispersion but parabolic, meaning the low-energy excitations are no longer massless. At the Fermi level, the band structure contains a hole pocket around the Γ point and an electron pocket around the M point, see Fig. 2(b). This is similar to the undoped Fe-pnictide materials 34. The two pockets connected by the nesting vector Q_1 = (π, π) form well-defined Fermi-surface nesting, which is independent of deformations within the single layer. After the standard Fourier transformation, the single-layer Hamiltonian Eq. (1) reads

H_1 = -\begin{pmatrix} 0 & t_1 & t_2 e^{ik_y}+t_3 & t_1 \\ t_1 & 0 & t_1 & t_2 e^{ik_x}+t_3 \\ t_2 e^{-ik_y}+t_3 & t_1 & 0 & t_1 \\ t_1 & t_2 e^{-ik_x}+t_3 & t_1 & 0 \end{pmatrix}.  (11)

We obtain four bands ε_1, ε_2, ε_3 and ε_4 by diagonalizing Eq. (11). Since ε_1 and ε_4 are away from the Fermi level, we use only ε_2 and ε_3 to obtain better fittings. By fitting the bands ε_2 and ε_3 along the path from Γ to M, we get t_1 = 2.678 ± 0.033 eV, t_2 = 2.981 ± 0.027 eV and t_3 = 0.548 ± 0.024 eV for a/a_0 = 1.0. In comparison, a nearest-neighbor hopping energy t ≈ 2.7 eV and a next-nearest-neighbor hopping energy t' ≈ 0.1 eV are reported for graphene 35. Note that the small but finite t_3 is necessary to split ε_3 and ε_4 at the M point and to make ε_2 coincide with ε_3 there. Q_1 remains almost unchanged under different deformations, see Fig. 2(b).
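The M-point coincidence of ε_2 and ε_3 can be checked directly by diagonalizing Eq. (11). The sketch below builds the 4×4 matrix with the a/a_0 = 1.0 fit values from the text and confirms that the two middle eigenvalues are exactly degenerate at M = (π, π).

```python
import numpy as np

def h1(kx, ky, t1=2.678, t2=2.981, t3=0.548):
    """Single-layer TB Hamiltonian of Eq. (11); hoppings (eV) from the DFT fit."""
    a = t2 * np.exp(1j * ky) + t3
    b = t2 * np.exp(1j * kx) + t3
    return -np.array([[0,          t1, a,  t1],
                      [t1,         0,  t1, b ],
                      [np.conj(a), t1, 0,  t1],
                      [t1, np.conj(b), t1, 0 ]], dtype=complex)

# At M = (pi, pi) the two middle bands eps2 and eps3 coincide, as required
# by the C4v symmetry discussed in the text; at M the matrix is circulant
# and the degenerate pair sits at energy t2 - t3 below zero.
eps = np.linalg.eigvalsh(h1(np.pi, np.pi))   # eigenvalues sorted ascending
print(eps)
print(abs(eps[1] - eps[2]) < 1e-10)          # True: eps2 = eps3 at M
```

Because the k-dependence enters only through the phases of the t_2 hoppings, the same degeneracy survives for any t_1, t_2, t_3, consistent with the deformation-independence of the nesting noted above.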
This is because the diagonalization result of Eq. (11) is mathematically independent of the deformation a/a_0. This behavior is also confirmed by our DFT calculations, supporting the credibility of our TB model. Such an unchanged Fermi-surface nesting may stabilize the SC phase of octagraphene. Figure 2(c) shows the fitting parameters t_1, t_2 and t_3 of the TB model as functions of the lattice constant a. As the distances between carbon atoms increase, the values of t_1, t_2 and t_3 decrease, leading to the flatter band structures in Fig. 2(a). However, t_2/t_1 remains almost 1.1 as a changes from 0.90a_0 to 1.20a_0; the ratio t_2/t_1 is essentially independent of a. We may conclude from our calculations that the hopping energies between carbon atoms are nearly inversely proportional to their distances. We then use the Hubbard model of Eq. (2) to study the influence of spin fluctuation on the SC. Although the interaction parameter U could be more than 10 eV for graphene-based materials, its accurate value is still under discussion 35. Due to the weak-coupling character of the RPA, there is a limit U_c on the value of U. Here, we set U = 5.4 eV (2t_1) and take the electron doping density x = 10% according to our estimate of the RPA limits; the details of U_c will be elaborated in Sec. V. The largest eigenvalue of the susceptibility matrix χ(q) of Eq. (3) peaks at the vector Q_1 = (π, π), also verified by our DFT result. The corresponding eigenvector of the susceptibility, ξ(Q_1) = (1/2, -1/2, 1/2, -1/2), means that a Néel pattern is formed, see Fig. 4(d). We then get λ = 0.321 for a/a_0 = 1.0 and T_c ∼ 190 K for single-layer octagraphene. For comparison, a T_c of 20.8 K was recently calculated within the framework of electron-phonon coupling 25. Our calculated T_c is much higher because it is driven by spin fluctuation rather than by the electron-phonon interaction.
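The step from the pairing eigenvalue λ to a temperature goes through Eq. (10). Inverting it gives T_c = (1.13 ω_D / k_B) exp(-1/λ), which the short helper below evaluates for the λ values quoted in the text; the function name is ours, not from any library.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def tc_from_lambda(lam, omega_d=0.3):
    """Invert Eq. (10), lambda^-1 = ln(1.13 omega_D / (kB Tc)), for Tc in K.
    omega_d is the spin-fluctuation energy scale (eV) quoted in the text."""
    return 1.13 * omega_d / K_B * math.exp(-1.0 / lam)

# Pairing eigenvalues quoted in the text for the single layer (0.321),
# bi-layer (0.324) and octagraphite (0.319); all land in the 150-200 K range.
for lam in (0.321, 0.324, 0.319):
    print(f"lambda = {lam:.3f} -> Tc ~ {tc_from_lambda(lam):.0f} K")
```

Note the exponential sensitivity: a one-percent change of λ shifts T_c by several kelvin, which is why the quoted T_c values vary only mildly across layer numbers.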
In our previous study, variational Monte Carlo gave a superconducting gap amplitude Δ ∼ 50 meV and a similar T_c ∼ 180 K with s±-wave pairing 24. The consistency between the two methods suggests a promising route toward a high-T_c superconductor. We also note that with decreasing a, T_c decreases only modestly, which may be explained by the weakening of the interactions. T_c remains high (> 100 K) for a/a_0 from 0.9 to 1.2; thus single-layer octagraphene should remain a good superconductor under limited mechanical deformation.

IV. MULTI-LAYER OCTAGRAPHENE

In real materials, multi-layer octagraphene may be more common. We here apply a DFT+RPA method to study the properties of multi-layer octagraphenes. We first determine the stacking mode of bi-layer octagraphene. Due to the C_4v symmetry of the single layer, there are three most likely stacking modes between two octagraphene layers: A-A, A-B and A-C stacking, defined as (0, 0), (0.5, 0.5) and (0, 0.5) relative shifts between the two layers, respectively. The differences in cohesive energy per atom along the (100) and (110) directions are shown in Fig. 3(a). In our calculations, A-A (0, 0) stacking is the most stable. Moreover, from A-A (0, 0) to A-B (0.5, 0.5) stacking, the energy differences are smaller than in graphene. The distance between neighboring layers of multi-layer octagraphene is 3.72 Å, larger than the value for graphene (3.4 Å). This indicates a weaker interlayer coupling, making the material more slippery than graphite 36. Since A-A stacking is the most stable mode for the bi-layer, we only consider the A-A stacking structure. The bi-layer Hamiltonian near the Fermi surface reads in matrix form

H_2 = \begin{pmatrix} H_1 & t_{\perp} I_{4\times4} \\ t_{\perp} I_{4\times4} & H_1 \end{pmatrix},  (12)

where H_1 is given by Eq. (11) and I_{4×4} is the 4 × 4 identity matrix. The fitting parameters of bi-layer octagraphene are t_1 = 2.685 ± 0.021 eV, t_2 = 3.001 ± 0.016 eV, t_3 = 0.558 ± 0.016 eV and t_⊥ = 0.184 ± 0.011 eV.
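The block structure of Eq. (12) makes the band splitting of the A-A bi-layer exact: since the two diagonal blocks are identical and the off-diagonal blocks are proportional to the identity, every single-layer band ε(k) splits into bonding/antibonding partners ε(k) ∓ t_⊥. The sketch below verifies this numerically at an arbitrary k-point, using the bi-layer fit values from the text.

```python
import numpy as np

def h1(kx, ky, t1=2.685, t2=3.001, t3=0.558):
    """Single-layer block of Eq. (12), i.e. Eq. (11) with bi-layer fit values (eV)."""
    a = t2 * np.exp(1j * ky) + t3
    b = t2 * np.exp(1j * kx) + t3
    return -np.array([[0, t1, a, t1],
                      [t1, 0, t1, b],
                      [np.conj(a), t1, 0, t1],
                      [t1, np.conj(b), t1, 0]], dtype=complex)

t_perp = 0.184
k = (0.7, 1.3)                                  # arbitrary k-point
m1 = h1(*k)
m2 = np.block([[m1, t_perp * np.eye(4)],        # Eq. (12) in block form
               [t_perp * np.eye(4), m1]])

e1 = np.linalg.eigvalsh(m1)
e2 = np.linalg.eigvalsh(m2)                     # sorted ascending
split = np.sort(np.concatenate([e1 - t_perp, e1 + t_perp]))
print(np.allclose(e2, split))                   # True: exact +-t_perp splitting
```

This rigid ±t_⊥ splitting is what doubles the hole and electron pockets in Fig. 3(b) and shifts the nesting vectors to (π ± δ, π ± δ).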
t_1, t_2 and t_3 differ little from the single-layer values. This can be understood from the small interlayer hopping t_⊥, which is smaller than that of graphene (t_⊥ ≈ 0.4 eV) 35. However, each single-layer band splits into two due to the doubled unit cell. As a result, there are two nested hole pockets around the Γ point and two nested electron pockets around the M point, see Fig. 3(b). Interestingly, three branches from ε_2, ε_3 and ε_4 coincide and form a triple degeneracy at the M point, see Figs. 3(c) and (d). This triple degeneracy, which naturally exists in bi-layer octagraphene, does not require any external deformation. In our TB model, diagonalization of Eq. (12) gives exactly this coincidence at the M point when t_1 + t_⊥ = t_2 + t_3 is satisfied. While the matching of the single-layer ε_2 and ε_3 at the M point is dictated by the C_4v symmetry, the matching with ε_4 is just a coincidence. Applying the RPA to bi-layer octagraphene gives λ = 0.324 for U = 5.4 eV and doping x = 10%, only slightly different from single-layer octagraphene. We obtain T_c ∼ 180 K, a bit lower than in single-layer octagraphene. We attribute this to the interlayer interaction and the cell expansion: although t_⊥ is very small compared with the intralayer hoppings, the well-defined Fermi-surface nesting of one layer is degraded by the interlayer interaction, see Fig. 3(b). There are two hole and two electron pockets with nesting vectors Q_2 = (π, π), (π + δ, π + δ) and (π − δ, π − δ). The blurring of perfect Fermi-surface nesting suppresses the superconductivity and reduces T_c. We then study the trend of the SC with increasing layer number n. The A-A stacked multi-layer octagraphenes show more 2D-like behavior. As n increases, the two energy bands ε_2 and ε_3 split into more branches due to the expansion of the unit cell. We can still use the same form of Eq.
(12), which can be written as

H_n = \begin{pmatrix} H_1 & t_{\perp} I_{4\times4} & & 0 \\ t_{\perp} I_{4\times4} & H_1 & t_{\perp} I_{4\times4} & \\ & \ddots & \ddots & \ddots \\ 0 & & t_{\perp} I_{4\times4} & H_1 \end{pmatrix}.  (13)

We fit the DFT-calculated ε_2 and ε_3 along the path from Γ to M to Eq. (13). The fitting parameters and λ for tri- to six-layer octagraphene are reported in Table I. We find that the fitting parameters are very close to those of bi-layer octagraphene, with relative differences all below one percent. With increasing layer number n, the pairing symmetry remains s± and T_c does not change much. According to our estimates, T_c ∼ 170 K for tri- to five-layer and T_c ∼ 160 K for six-layer octagraphene at U = 5.4 eV and doping x = 10%. We therefore suggest that the superconductivity of octagraphene is tied to the 2D character of the material.

V. OCTAGRAPHITE

Similarly to graphite, it is important to study octagraphite (n = ∞). The DFT-calculated intra-layer structure is similar to that of single-layer octagraphene, with only a slightly enlarged lattice size, as the interaction between neighboring layers changes the lattice parameters slightly. [Caption fragments for Fig. 4: (b) Fermi surface obtained with VESTA 37; the nesting vector is almost Q_∞ = (π, π, π). (c) Eigen-susceptibilities χ(q) for q_z = 0, π/2, π; χ(q) peaks near Q_∞ = (π, π, π).] There are always four bands near the Fermi level for a given k_z, which reflects the 2D character of the octagraphene materials. The highest and lowest boundaries of each band are marked by k_z = 0 and k_z = π, respectively. The three-dimensional (3D) Fermi surface has a fusiform shape, with the largest hole pocket around the Γ point, see Fig. 4(b). It is similar to that of the multi-orbital Fe-based superconductor family 34 and shows the importance of interlayer interactions. We here use the 3D single-orbital TB model of Eq. (1) to capture the major band features of octagraphite; the fitting parameters are reported in Table I.
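The block-tridiagonal form of Eq. (13) also explains why each single-layer band splits into n branches: for an open chain of n identical layers, the interlayer part is t_⊥ times the path-graph hopping matrix, whose eigenvalues are 2cos(mπ/(n+1)), m = 1..n. The sketch below checks this splitting rule with a stand-in diagonal single-layer block (the numbers are placeholders, not the fitted Hamiltonian).

```python
import numpy as np

def h_n(n, h1, t_perp):
    """n-layer Hamiltonian of Eq. (13): block tridiagonal in the layer index,
    with the single-layer block h1 on the diagonal and t_perp * I off-diagonal."""
    chain = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.kron(np.eye(n), h1) + np.kron(chain, t_perp * np.eye(len(h1)))

# Stand-in single-layer energies (eV) at some fixed k; any diagonal block works
# for checking the splitting rule eps -> eps + 2 t_perp cos(m pi / (n+1)).
h1 = np.diag([-2.9, -2.4, -2.4, 7.8])
n, t_perp = 4, 0.19
e = np.sort(np.linalg.eigvalsh(h_n(n, h1, t_perp)))
shifts = 2 * t_perp * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
expected = np.sort((np.diag(h1)[:, None] + shifts[None, :]).ravel())
print(np.allclose(e, expected))   # True
```

As n → ∞ these discrete shifts fill the interval [-2t_⊥, 2t_⊥], which is exactly the 2t_⊥ cos k_z dispersion of the octagraphite Hamiltonian Eq. (14) below.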
Table I. The lattice constant a_0, fitting parameters t_1, t_2, t_3, t_⊥ and λ for single- to six-layer octagraphene and octagraphite (∞). [Columns: n | a_0 (Å) | t_1 (eV) | t_2 (eV) | t_3 (eV) | t_⊥ (eV) | λ.]

For octagraphite, the Hamiltonian acquires a k_z dispersion on the diagonal,

H_∞ = -\begin{pmatrix} 2t_{\perp}\cos k_z & t_1 & t_2 e^{ik_y}+t_3 & t_1 \\ t_1 & 2t_{\perp}\cos k_z & t_1 & t_2 e^{ik_x}+t_3 \\ t_2 e^{-ik_y}+t_3 & t_1 & 2t_{\perp}\cos k_z & t_1 \\ t_1 & t_2 e^{-ik_x}+t_3 & t_1 & 2t_{\perp}\cos k_z \end{pmatrix}.  (14)

Since ε_1 and ε_4 are away from the Fermi level, we use only ε_2 and ε_3 with k_z = 0, π/2 and π in our fittings. By fitting the bands ε_2 and ε_3 from Γ to M, we get t_1 = 2.686 ± 0.017 eV, t_2 = 2.986 ± 0.013 eV, t_3 = 0.574 ± 0.012 eV and t_⊥ = 0.259 ± 0.005 eV. The t_⊥ here differs little from that of octagraphene with layer number n ≥ 4. We now consider the form of the Fermi surface. In the TB model Eq. (1) shown in Fig. 1(c), the operators (c_1σ, c_2σ, c_3σ, c_4σ) in a unit cell can be transformed to (-c_1σ, c_2σ, -c_3σ, c_4σ) by a gauge transformation T, such that

T\, H_{TB}(t_1, t_2, t_3, t_{\perp})\, T^{-1} = H_{TB}(-t_1, t_2, t_3, t_{\perp}).  (15)

Since the gauge transformation T does not change the momentum coordinates, H_{TB}(t_1, t_2, t_3, t_⊥) has exactly the same energy levels as H_{TB}(-t_1, t_2, t_3, t_⊥) at any momentum k. It is easily seen that when t_3 = 0 in Eq. (14), H_∞(k) and H_∞(k + (π, π, π)) satisfy

H_∞(\mathbf{k}, t_1, t_2, t_{\perp}) = -H_∞(\mathbf{k} + (\pi, \pi, \pi), -t_1, t_2, t_{\perp}).  (16)

Hence the eigenvalues of H_∞(k) and H_∞(k + (π, π, π)) have the same absolute values with opposite signs: the energy levels in one half of the Brillouin zone are exactly the negatives of those in the other half. Therefore, at exactly half filling the Fermi level is located at E_f = 0. If E_k = 0 at some generic k, i.e. k lies on the Fermi surface, then E_{k+(π,π,π)} = 0 as well. This proves the perfect Fermi-surface nesting vector Q_∞ = (π, π, π) for t_3 = 0 in Eq. (14). For t_3 > 0, the actual Fermi-surface nesting vector deviates from Q_∞ = (π, π, π) only slightly.
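The particle-hole symmetry argument of Eqs. (15)-(16) can be verified numerically: with t_3 = 0 the spectrum of Eq. (14) obeys E(k) = -E(k + (π, π, π)) at every momentum, while a finite t_3 breaks the relation. The sketch below checks both statements at an arbitrary k, using the octagraphite fit values from the text.

```python
import numpy as np

def h_inf(kx, ky, kz, t1=2.686, t2=2.986, t3=0.574, tp=0.259):
    """Octagraphite Hamiltonian of Eq. (14): the single-layer block plus a
    -2 t_perp cos(kz) term on the diagonal. Hoppings (eV) from the text."""
    a = t2 * np.exp(1j * ky) + t3
    b = t2 * np.exp(1j * kx) + t3
    h = -np.array([[0, t1, a, t1],
                   [t1, 0, t1, b],
                   [np.conj(a), t1, 0, t1],
                   [t1, np.conj(b), t1, 0]], dtype=complex)
    return h - 2 * tp * np.cos(kz) * np.eye(4)

k = np.array([0.4, -1.1, 0.9])                    # arbitrary momentum
e  = np.linalg.eigvalsh(h_inf(*k, t3=0.0))
eQ = np.linalg.eigvalsh(h_inf(*(k + np.pi), t3=0.0))
print(np.allclose(np.sort(e), np.sort(-eQ)))      # True: perfect nesting, t3 = 0

e3  = np.linalg.eigvalsh(h_inf(*k))               # finite t3 = 0.574 eV
eQ3 = np.linalg.eigvalsh(h_inf(*(k + np.pi)))
print(np.allclose(np.sort(e3), np.sort(-eQ3)))    # False: finite t3 spoils it
```

The first check is the numerical content of "perfect Q_∞ = (π, π, π) nesting at half filling"; the second shows why the real material, with t_3 > 0, has only near-perfect nesting and hence a finite U_c at half filling.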
Figure 4(c) shows the eigen-susceptibilities χ(q) for q_z = 0, π/2, π. χ(q) peaks at Q_∞ = (π, π, π), and the corresponding eigenvector of the susceptibility, ξ(Q_∞) = (1/2, -1/2, 1/2, -1/2), means that at half filling a Néel pattern is obtained both within and between the layers, as shown in Fig. 4(d). The reason χ(q) peaks at Q_∞ = (π, π, π) is that the FS-nesting vector lies at Q_∞ = (π, π, π). As shown in Fig. 4(b), due to the interlayer coupling the hole pocket centered at the Γ point is no longer nested with the electron pocket centered at the M (π, π, 0) point with the same k_z; instead it is best nested with the electron pocket centered at the (π, π, π) point. Therefore, the FS-nesting vector is Q_∞ = (π, π, π). Note that such an interlayer magnetic structure is new for octagraphite and is absent in single-layer octagraphene. Moreover, the FS nesting in this case is not perfect, which leads to a small but finite U_c at half filling, see Fig. 4(e). This means considerable superconductivity can occur even at half filling. Finally, we get λ = 0.319 and T_c ∼ 170 K for octagraphite. In practice, the U of real carbon-based materials is larger than our adopted value U = 5.4 eV 38, which may give a chance for a higher T_c in real materials. However, the RPA tends to overestimate T_c because of its weak-coupling perturbative nature, which limits the U that can be adopted 33. As shown in Fig. 4(e), the RPA-limited U_c is above 6.0 eV when the electron doping density x > 10%. In Fig. 4(f), the x dependence of λ shows that the RPA results are reliable when U/U_c is far less than 1. We therefore set U = 5.4 eV and x = 10% to obtain a reasonable T_c within our RPA limit. We notice that λ of octagraphite decreases slightly from that of single-layer octagraphene. Note that t_3 here is larger than in single-layer octagraphene and is detrimental to forming the well-nested Fermi surface.
The Fermi nesting is degraded by the interlayer interaction, leading to a small decrease of T_c. The calculated s±-wave pairing is stronger than the other three pairing-symmetry channels (p, d_xy, d_{x²−y²}), so the superconductivity of octagraphite is also similar to that of multi-orbital Fe-based superconductors. Besides, λ of octagraphite converges to a constant value for layer number n ≥ 3, which means T_c changes little with n, reflecting the 2D nature of octagraphite. Interestingly, in Figs. 2(a), 3(c) and 4(a), apart from the four energy bands described by the TB model, the other bands are almost identical and independent of the layer number n in the DFT results; they reflect the local properties of the orbitals. Since these bands are far from the Fermi level, they have little influence on the superconductivity.

VI. CONCLUSIONS

We have studied the electronic structure, magnetism and superconductivity of single-layer octagraphene, multi-layer octagraphene, and octagraphite. The DFT calculations suggest that multi-layer octagraphene has a simple A-A stacking and that the cohesive-energy differences are smaller than in graphene. This indicates good slip properties and promising mechanical applications. A TB model is built to capture the main features for each layer number n, with hopping parameters obtained to high accuracy. We find the hopping parameters change little with the layer number n. The van der Waals interaction induces t_⊥ ≈ 0.25 eV, smaller than in multi-layer graphene. All these findings support that multi-layer octagraphene and octagraphite are more 2D-like. We find a sandwich-like band structure, with multiple energy bands overlapping frequently, in multi-layer octagraphene. This band structure has not been reported before and may host interesting topological phenomena. At the Fermi level, the band structures of octagraphenes contain hole pockets around the Γ point and electron pockets around the M point.
The two pockets connected by the nesting vector Q_1 = (π, π) form well-defined Fermi-surface nesting for single-layer octagraphene. For multi-layer octagraphene the nesting vector is blurred away from Q = (π, π), which makes T_c lower than in the single layer. For octagraphite, the Fermi-surface nesting switches to a 3D form with nesting vector Q_∞ = (π, π, π) and also yields a high T_c. Applying an RPA method at half filling, a 3D antiferromagnetic Néel pattern is obtained both within and between the layers. Thus the spin fluctuation dominates the SC pairing upon doping. We calculate the T_c of single-layer octagraphene, multi-layer octagraphene, and octagraphite, and find that the interlayer interaction does not affect the superconducting state much. With increasing n, T_c converges to ∼ 170 K, which is still high. The difference between three-layer octagraphene and octagraphite is so tiny that we suggest the high-temperature superconducting s± pairing mechanism of this material is mainly two-dimensional. Moreover, we find that in-plane strain or stress does not noticeably change the energy bands near the Fermi surface for single-layer octagraphene. As an actual single-layer octagraphene sample may sit on a substrate, the lattice mismatch with the substrate would lead to some deformation; the stability of the Fermi nesting may therefore be a great advantage for sample preparation. We note that the synthesis of multi-layer octagraphene is now in progress: novel synthesis routes have been reported recently 25, and one-dimensional carbon nanoribbons with four- and eight-membered rings have been synthesized experimentally 26. There is great hope of realizing this promising high-T_c material in the future.

VII. ACKNOWLEDGMENTS

Figure 1. (a) The predicted structure of octagraphene from the DFT calculation. The relative positions between the layers form the A-A stacking. (b) Structure of single-layer octagraphene.
The relative positions of the four carbon atoms in a unit cell are independent of the deformation. (c) The 2D single-orbital tight-binding (TB) model. t_1, t_2 and t_3 correspond to the intra-square, inter-square and diagonal hopping energies, respectively.

Figure 2. Single-layer octagraphene. (a) Band structures for different lattice constants a: a/a_0 = 1.1, 1.0 and 0.9 [a_0 = 3.44 Å]. Solid lines: DFT results; dashed lines: TB-model fits. For a/a_0 = 0.9, the bands show a quadruple degeneracy at the M point with E = -3.01 eV. (b) Fermi surface from the TB model, independent of the relative lattice constant a/a_0. The Fermi surface is well nested by the vector Q_1 = (π, π). (c) Fitting parameters t_1, t_2 and t_3 of the TB model versus the lattice constant a. t_2/t_1 ≈ 1.1 is almost constant, independent of a.

Figure 3. (a) The differences in cohesive energy per atom of bi-layer octagraphene with relative shifts. The relative shifts between the two layers are chosen along the (100) and (110) directions in real space. A-A stacking (0, 0) is the most stable in our calculation. (b) Fermi surface of bi-layer octagraphene. The nesting vectors Q_2 = (π, π), (π + δ, π + δ) and (π − δ, π − δ) indicate the deviation from perfect Fermi-surface nesting. (c) Band structures of bi-layer octagraphene with a_0 = 3.45 Å. Solid lines: DFT results; dashed lines: TB-model fits. (d) Detailed bands near the M point. Three branches from ε_2, ε_3 and ε_4 coincide and form a triple degeneracy at the M point.

Figure 4. Octagraphite. (a) Band structures with k_z = 0, π/2, π. (d) Predicted antiferromagnetic Néel pattern at half filling. (e) RPA-calculated U_c as a function of the electron doping density x. (f) Largest pairing eigenvalue λ versus doping density x for U = 5.4 eV.
Based on (e) and (f), we set U = 5.4 eV (2t_1) and electron doping density x = 10%. Figure 4(a) shows the DFT-calculated band structure of octagraphite.

We thank Yao-Tai Kang for the RPA C++ program references, and Zhihai Liu and Luyang Wang for helpful discussions. Jun Li, Shangjian Jin and Dao-Xin Yao are supported by NKRDPC Grants No. 2017YFA0206203 and No. 2018YFA0306001, NSFC-11974432, GBABRF-2019A1515011337, the Leading Talent Program of Guangdong Special Projects, and the start-up funding of SYSU No. 20LGPY161. Fan Yang is supported by NSFC under Grant No. 11674025.

* [email protected]

1 Q.-Y. Wang, Z. Li, W.-H. Zhang, Z.-C. Zhang, J.-S. Zhang, W. Li, H. Ding, Y.-B. Ou, P. Deng, K. Chang, J. Wen, C.-L. Song, K. He, J.-F. Jia, S.-H. Ji, Y.-Y. Wang, L.-L. Wang, X. Chen, X.-C. Ma, and Q.-K. Xue, Chinese Physics Letters 29, 037402 (2012).
2 J. M. Lu, O. Zheliuk, I. Leermakers, N. F. Q. Yuan, U. Zeitler, K. T. Law, and J. T. Ye, Science 350, 1353 (2015).
3 X. Xi, Z. Wang, W. Zhao, J.-H. Park, K. T. Law, H. Berger, L. Forró, J. Shan, and K. F. Mak, Nature Physics 12, 139 (2016).
4 G.-Y. Zhu, F.-C. Zhang, and G.-M. Zhang, Phys. Rev. B 94, 174501 (2016).
5 Y.-j. Yu, L.-g. Ma, P. Cai, R.-d. Zhong, C. Ye, J. Shen, G.-D. Gu, X.-H. Chen, and Y.-b. Zhang, Nature (2019).
6 H. B. Heersche, P. Jarillo-Herrero, J. B. Oostinga, L. M. K. Vandersypen, and A. F. Morpurgo, Nature (London) 446, 56 (2007).
7 M. Xue, G. Chen, H. Yang, Y. Zhu, D. Wang, J. He, and T. Cao, Journal of the American Chemical Society 134, 6536 (2012).
8 K. Li, X. Feng, W. Zhang, Y. Ou, L. Chen, K. He, L.-L. Wang, L. Guo, G. Liu, Q.-K. Xue, and X. Ma, Applied Physics Letters 103, 062601 (2013).
9 B. M. Ludbrook, G. Levy, P. Nigge, M. Zonno, M. Schneider, D. J. Dvorak, C. N. Veenstra, S. Zhdanovich, D. Wong, P. Dosanjh, C. Straßer, A. Stöhr, S. Forti, C. R. Ast, U. Starke, and A. Damascelli, Proceedings of the National Academy of Sciences 112, 11795 (2015).
10 A. P. Tiwari, S. Shin, E. Hwang, S.-G. Jung, T. Park, and H. Lee, Journal of Physics: Condensed Matter 29, 445701 (2017).
11 L. Huder, G. Trambly de Laissardière, G. Lapertot, A. Jansen, C. Chapelier, and V. Renard, Carbon 140, 592 (2018).
12 M. Calandra and F. Mauri, Phys. Rev. Lett. 95, 237002 (2005).
13 Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Nature (London) 556, 43 (2018).
14 M. Calandra, G. Profeta, and F. Mauri, Physica Status Solidi B 249, 2544 (2012).
15 J. Pešić, R. Gajić, K. Hingerl, and M. Belić, EPL (Europhysics Letters) 108, 67005 (2014).
16 T. P. Kaloni, A. V. Balatsky, and U. Schwingenschlögl, EPL (Europhysics Letters) 104, 47013 (2013).
17 I. I. Mazin and A. V. Balatsky, Philosophical Magazine Letters 90, 731 (2010).
18 C. Si, Z. Liu, W. Duan, and F. Liu, Phys. Rev. Lett. 111, 196802 (2013).
19 B.-T. Wang, P.-F. Liu, T. Bo, W. Yin, O. Eriksson, J. Zhao, and F. Wang, Physical Chemistry Chemical Physics 20, 12362 (2018).
20 D. Malko, C. Neiss, F. Viñes, and A. Görling, Phys. Rev. Lett. 108, 086804 (2012).
21 T. Morshedloo, M. R. Roknabadi, M. Behdani, M. Modarresi, and A. Kazempour, Computational Materials Science 124, 183 (2016).
22 Y. Liu, G. Wang, Q. Huang, L. Guo, and X. Chen, Phys. Rev. Lett. 108, 225505 (2012).
23 X.-L. Sheng, H.-J. Cui, F. Ye, Q.-B. Yan, Q.-R. Zheng, and G. Su, Journal of Applied Physics 112, 074315 (2012).
24 Y.-T. Kang, C. Lu, F. Yang, and D.-X. Yao, Phys. Rev. B 99, 184506 (2019).
25 Q. Gu, D. Xing, and J. Sun, Chinese Physics Letters 36, 097401 (2019).
26 M. Liu, M. Liu, L. She, Z. Zha, J. Pan, S. Li, T. Li, Y. He, Z. Cai, J. Wang, Y. Zheng, X. Qiu, and D. Zhong, Nature Communications 8, 14924 (2017).
27 G. Kresse and J. Hafner, Phys. Rev. B 47, 558 (1993).
28 G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
29 G. Kresse and J. Hafner, Phys. Rev. B 49, 14251 (1994).
30 P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
31 J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
32 S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, The Journal of Chemical Physics 132, 154104 (2010).
33 F. Liu, C.-C. Liu, K. Wu, F. Yang, and Y. Yao, Phys. Rev. Lett. 111, 066804 (2013).
34 P. J. Hirschfeld, M. M. Korshunov, and I. I. Mazin, Reports on Progress in Physics 74, 124508 (2011).
35 A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
36 Z. Liu, Nanotechnology 25, 075703 (2014).
37 K. Momma and F. Izumi, Journal of Applied Crystallography 44, 1272 (2011).
38 M. Schüler, M. Rösner, T. O. Wehling, A. I. Lichtenstein, and M. I. Katsnelson, Phys. Rev. Lett. 111, 036601 (2013).
39 K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004).
[]
[ "The first moment of azimuthal anisotropy in nuclear collisions from AGS to LHC energies", "The first moment of azimuthal anisotropy in nuclear collisions from AGS to LHC energies" ]
[ "Subhash Singha \nDepartment of Physics\nKent State University\n44242OhioUSA\n", "Prashanth Shanmuganathan \nDepartment of Physics\nKent State University\n44242OhioUSA\n", "Declan Keane \nDepartment of Physics\nKent State University\n44242OhioUSA\n" ]
[ "Department of Physics\nKent State University\n44242OhioUSA", "Department of Physics\nKent State University\n44242OhioUSA", "Department of Physics\nKent State University\n44242OhioUSA" ]
[]
We review topics related to the first moment of azimuthal anisotropy (v1), commonly known as directed flow, focusing on both charged particles and identified particles from heavy-ion collisions. Beam energies from the highest available, at the CERN LHC, down to projectile kinetic energies per nucleon of a few GeV per nucleon, as studied in experiments at the Brookhaven AGS, fall within our scope. We focus on experimental measurements and on theoretical work where direct comparisons with experiment have been emphasized. The physics addressed or potentially addressed by this review topic includes the study of Quark Gluon Plasma, and more generally, investigation of the Quantum Chromodynamics phase diagram and the equation of state describing the accessible phases.
10.1155/2016/2836989
[ "https://arxiv.org/pdf/1610.00646v1.pdf" ]
55,664,196
1610.00646
ce6cebc5bb15d935b5e1007751207812fe556e64
The first moment of azimuthal anisotropy in nuclear collisions from AGS to LHC energies

Subhash Singha, Prashanth Shanmuganathan, and Declan Keane
Department of Physics, Kent State University, Ohio 44242, USA
(Dated: August 7, 2018)

I. INTRODUCTION

The purpose of relativistic nuclear collision experiments is the creation and study of nuclear matter at high energy densities. Experiments have established a new form of strongly-interacting matter, called Quark Gluon Plasma (QGP) [1][2][3][4][5][6][7]. Collective motion of the particles emitted from such collisions is of special interest because it is sensitive to the equation of state in the early stages of the reaction [8][9][10].
Directed flow was the first type of collective motion identified among the fragments from nuclear collisions [11][12][13], and in current analyses, is characterized by the first harmonic coefficient in the Fourier expansion of the azimuthal distribution of the emitted particles with respect to each event's reaction plane azimuth (Ψ) [14][15][16]:

v 1 = ⟨cos(φ − Ψ)⟩ ,   (1)

where φ is the azimuth of a charged particle, or more often, the azimuth of a particular particle species, and the angle brackets denote averaging over all such particles in all events. In some experimental analyses, v 1 is evaluated directly from Eq. (1) and then a correction is applied for reaction plane resolution [15], whereas in a typical modern analysis method, the directed flow correlation is extracted using cumulants [16]. In general, v 1 is of interest when plotted as a function of rapidity, y, or sometimes pseudorapidity, η = −ln tan(θ/2), where θ is the polar angle of the particle. The dependence of v 1 on collision centrality and on transverse momentum, p T , can offer additional insights. Until relatively recently [17,18], the rapidity-even component, v 1^even(y) = v 1^even(−y), was always assumed to be zero or negligible in mass-symmetric collisions. In fact, fluctuations within the initial-state colliding nuclei, unrelated to the reaction plane, can generate a significant v 1^even signal [17,18]. This fluctuation effect falls beyond the scope of the present review, which focuses on fluid-like directed flow, v 1^odd(y) = −v 1^odd(−y), as per Eq. (1); from here on, v 1 for mass-symmetric collisions implicitly signifies v 1^odd. During the first decade of the study of v 1 in nuclear collisions, it was more commonly called sideward flow: a sideward collective motion of the emitted particles, which amounts to a repulsive collective deflection in the reaction plane.
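Eq. (1) and the odd/even decomposition above can be sketched numerically. The toy code below is illustrative only: the event-plane angle, the signal strength, and the accept-reject sampling scheme are invented, and the reflection trick assumes a rapidity grid symmetric about zero.

```python
import numpy as np

def v1_event_plane(phi, psi):
    """Eq. (1): first Fourier harmonic <cos(phi - Psi)> of the azimuthal
    distribution, relative to a (here assumed known) reaction plane angle."""
    return float(np.mean(np.cos(np.asarray(phi) - psi)))

def odd_even_parts(y, v1_of_y):
    """Split a v1(y) curve sampled on a grid symmetric about y = 0 into
    its rapidity-odd and rapidity-even components."""
    v1 = np.asarray(v1_of_y, dtype=float)
    v1_minus_y = v1[::-1]                 # v1(-y) on a symmetric grid
    return 0.5 * (v1 - v1_minus_y), 0.5 * (v1 + v1_minus_y)

# Toy event sample: azimuths drawn from dN/dphi ~ 1 + 2*v1*cos(phi - Psi)
# with v1 = 0.05, via accept-reject sampling (all numbers are invented).
rng = np.random.default_rng(0)
psi = 0.3
proposals = rng.uniform(0.0, 2.0 * np.pi, 200_000)
accepted = proposals[rng.uniform(0.0, 1.1, proposals.size)
                     < 1.0 + 0.1 * np.cos(proposals - psi)]
print(round(v1_event_plane(accepted, psi), 3))  # statistically close to 0.05
```

With enough particles the estimator recovers the input modulation; real analyses must additionally correct for the finite event-plane resolution or use cumulants, as noted above.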
By convention, the positive direction of v 1 is taken to be the direction of 'bounce-off' of projectile spectators in a fixed target experiment [8,10]. Models imply that directed flow, especially the component closest to beam rapidities, is initiated during the passage time of the two colliding nuclei; the typical time-scale for this is 2R/γ [9,10], where R and γ are the nuclear radius and Lorentz factor, respectively. This is even earlier than the still-early time when elliptic flow, v 2 , is mostly imparted. Thus v 1 can probe the very early stages of the collision [19,20], when the deconfined state of quarks and gluons is expected to dominate the collision dynamics [9,10]. Both hydrodynamic [21,22] and transport model [23,24] calculations indicate that the directed flow of charged particles, especially baryons at midrapidity, is sensitive to the equation of state and can be used to explore the QCD phase diagram. The theoretical work leading to the prediction of collective flow in nuclear collisions evolved gradually. In the mid-1950s, Belenkij and Landau [25] were the first to consider a hydrodynamic description of nuclear collisions. During the 1970s, as the Bevatron at Lawrence Berkeley National Lab was converted for use as the first accelerator of relativistic nuclear beams, the idea of hydrodynamic shock compression of nuclear matter emerged [26][27][28], and these developments in turn led to increasingly realistic predictions [11,12,29] that paved the way for the first unambiguous measurement of directed flow at the Bevalac in the mid-1980s [13]. A frequent focus of theory papers during the subsequent years was the effort to use directed flow measurements to infer the incompressibility of the nuclear equation of state in the hadron gas phase and to infer properties of the relevant momentum-dependent potential [8,10]. 
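The 2R/γ passage-time estimate above is simple arithmetic. A rough sketch, assuming a nucleon mass of 0.938 GeV, a Au radius of about 7 fm, and a CM-frame Lorentz factor γ = √sNN/2mN per beam (all rounded, assumed values):

```python
import math

M_NUCLEON_GEV = 0.938  # assumed average nucleon mass

def cm_gamma(sqrt_snn_gev):
    """Lorentz factor of each beam in the CM frame of a symmetric collision."""
    return sqrt_snn_gev / (2.0 * M_NUCLEON_GEV)

def passage_time(radius_fm, sqrt_snn_gev):
    """Order-of-magnitude spectator passage time, 2R/gamma, in fm/c."""
    return 2.0 * radius_fm / cm_gamma(sqrt_snn_gev)

# Au+Au at top RHIC energy, taking R ~ 7 fm for Au:
print(round(passage_time(7.0, 200.0), 2))  # ~0.13 fm/c
```

At √sNN = 200 GeV this gives a passage time of roughly a tenth of a fm/c, far shorter than the several fm/c over which elliptic flow develops, which is the quantitative basis for the statement that v 1 probes the very earliest stage.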
The observed directed flow at AGS energies [30][31][32][33][34][35][36][37] and below is close to a linear function of rapidity throughout the populated region, and the slope dv 1 /dy can adequately quantify the strength of the signal. At SPS energies and above [38][39][40][41][42][43][44][45][46], a more complex structure is observed in v 1 (y), with the slope dv 1 /dy in the midrapidity region being different from the slope in the regions closer to beam rapidities. Various models also exhibit this kind of behavior. At these energies, both hydrodynamic and nuclear transport calculations predict a negative sign for charged particle dv 1 /dy near midrapidity, where pions are the dominant particle species. This negative dv 1 /dy near midrapidity has been given various names in the literature: "third flow component" [47], "anti-flow" [48], or "wiggle" [10,22,49]. This phenomenon has been discussed as a possible QGP signature, and a negative dv 1 /dy for baryons has been argued [22] to be particularly significant. However, some aspects of anti-flow can be explained in a model with only hadronic physics [49,50] by assuming either incomplete baryon stopping with a positive space-momentum correlation [49], or full stopping with a tilted source [51]. A three-fluid hydrodynamic model [22] predicts a monotonic trend in net-baryon directed flow versus beam energy in the case of a purely hadronic equation of state, whereas a prominent minimum at AGS energies, dubbed the "softest point collapse", is predicted when the equation of state incorporates a first-order phase transition between hadronic and quark-gluonic matter. Recent measurements of both proton and net-proton directed flow at RHIC [46] indeed indicate non-monotonic directed flow as a function of beam energy, with the minimum lying between 11.5 and 19.6 GeV in √sNN.
However, more recent hydrodynamic and nuclear transport calculations which incorporate significant theoretical improvements (see Section V) do not reproduce the notable qualitative features of the data, and therefore cast doubt on any overall conclusion about the inferred properties of the QCD phase diagram. Directed flow has also been measured at the LHC [55]. A negative slope of v 1 (η) is observed for charged particles, but its magnitude is much smaller than at RHIC, which is thought to be a consequence of the smaller tilt of the participant zone at the LHC. In this article, we review a representative set of directed flow results spanning AGS to LHC energies. In Sections II and III, we discuss measurements of v 1 for charged particles in mass-symmetric and mass-asymmetric collisions, respectively. In Section IV, we cover measurements of v 1 for various identified particle species. Section V reviews some recent model calculations which lend themselves to direct comparisons with directed flow data. Section VI presents a summary and future outlook.

II. DIFFERENTIAL MEASUREMENTS OF CHARGED PARTICLE DIRECTED FLOW

In this section, we review measurements of v 1 for all charged particles in cases where individual species were not identified. Studies of the dependence on transverse momentum p T , pseudorapidity η, beam energy √sNN, system size, and centrality are included.

A. Dependence of v1 on transverse momentum

The p T -dependence of v 1 for charged particles has been studied by the STAR experiment at RHIC [42,43]. The left panel of Fig. 1 presents directed flow results for Au+Au collisions in two centrality intervals, 5-40% and 40-80%, and in two regions of pseudorapidity, |η| < 1.3 and 2.5 < |η| < 4.0. In this case, because of the odd-functional property of v 1 (η), the backward pseudorapidity region by convention has its sign of v 1 reversed before summing over the indicated gate in pseudorapidity.
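The sign-reversal convention just described, folding the backward hemisphere into the forward one before averaging over an |η| gate, can be sketched as follows; the inputs are made-up numbers, not measured values:

```python
import numpy as np

def fold_v1_over_eta_gate(eta, v1):
    """Average v1 over a gate spanning both hemispheres, exploiting the
    odd symmetry v1(-eta) = -v1(eta): backward-eta points have their sign
    reversed before being combined with the forward ones."""
    eta = np.asarray(eta, dtype=float)
    v1 = np.asarray(v1, dtype=float)
    return float(np.mean(np.where(eta < 0.0, -v1, v1)))

# Made-up, perfectly antisymmetric input: folding reinforces the signal
# instead of cancelling it (a naive mean of v1 here would be zero).
eta = np.array([-1.0, -0.5, 0.5, 1.0])
v1 = np.array([0.010, 0.005, -0.005, -0.010])
print(round(fold_v1_over_eta_gate(eta, v1), 4))  # -0.0075
```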
These measurements represent the first instance of using only spectators to determine the estimated azimuth of the reaction plane [52][53][54].

[Figure caption fragment — Right panel: comparison of charged particle v1 vs. η at RHIC [43] and LHC [55] energies. Note that all the RHIC data points are divided by 10 in order to be plotted on a common scale with LHC results.]

The measured v 1 in the mid-pseudorapidity interval |η| < 1.3 crosses zero in the p T region of 1 to 2 GeV/c. Extrapolations raise the possibility that a qualitatively similar zero crossing by forward-pseudorapidity v 1 occurs at higher p T , but that region does not fall within the acceptance of the STAR detector. The charged particle v 1 (p T ) measurements by ALICE [55] in Pb+Pb collisions at 2.76 TeV are compared in the right panel of Fig. 1 with the corresponding data from 200 GeV Au+Au collisions at RHIC. The measurements at both LHC and RHIC show a similar trend, including a sign change around p T ∼ 1.5 GeV/c in central collisions and negative values at all p T for peripheral collisions. There is interest in the observed zero-crossing behavior in v 1 (p T ), as it can be used to constrain hydrodynamic model calculations [56]. It has been pointed out that this sign change is an artifact of combining all species of charged particles together, and can be explained [43] by the different sign of v 1 for pions and baryons, in conjunction with the enhanced production of baryons at higher p T [57]. This complication is one of the reasons why directed flow of identified particles, as reviewed in later sections, can be easier to interpret and offers additional insights.

B. Dependence of v1 on pseudorapidity

The left panel in Fig. 2 shows charged particle v 1 (η) in Au+Au collisions at 19.6, 62.4, 130 and 200 GeV measured in the PHOBOS detector [41].
It is evident that charged particle v 1 within 3 to 4 units of η on either side of η = 0 has a sign opposite to that of the spectators on that side of η = 0 (the anti-flow phenomenon). The right panel of Fig. 2 shows the η dependence of v 1 for charged particles in 2.76 TeV Pb+Pb collisions, as measured by the ALICE collaboration [55] at the LHC. The ALICE results are compared in this panel with RHIC measurements from STAR [40,43]. The v 1 slope at the LHC and at the top RHIC energies has the same negative sign, but the slope magnitude at the LHC is a factor ∼ 3 smaller than at top RHIC energy. This pattern is consistent with the participant zone at the LHC having a smaller tilt, as predicted [51], and does not support a proposed picture at LHC energies in which a strong rotation is imparted to the central fireball [58,59]. C. System size and beam energy dependence of v1 The beam energy and system size dependence of v 1 have been studied at RHIC using data from two colliding species at two beam energies. The left panel of Fig. 3 shows charged particle v 1 for mid-centrality (30-60%) Au+Au and Cu+Cu collisions at √ s N N = 62.4 and 200 GeV, measured by STAR [43]. A trend of decreasing v 1 (η) is observed as beam energy increases for both Au+Au and Cu+Cu collisions. Across the reported pseudorapidity range, v 1 (η) is independent of system size within errors at each beam energy. This is a remarkable finding, given that the Au+Au system mass is three times that of the Cu+Cu system, and given that neither the AMPT [60][61][62] nor the UrQMD [23,24] models exhibit such a scaling behavior. A different scaling behavior is presented in the right panel of Fig. 3. Here, the data in the left panel are transformed into the rest frame of the beam nucleus, i.e., zero on the x-axis corresponds to y beam for each of the two collision energies involved. Within errors, the measurements lie on a universal curve across about three units of pseudorapidity. 
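The transformation to the beam rest frame used here is a constant shift by the beam rapidity; a minimal sketch, assuming mN = 0.938 GeV and treating η ≈ y for the fast particles involved:

```python
import math

M_NUCLEON_GEV = 0.938  # assumed average nucleon mass

def beam_rapidity(sqrt_snn_gev):
    """CM-frame beam rapidity, y_beam = arccosh(gamma)."""
    return math.acosh(sqrt_snn_gev / (2.0 * M_NUCLEON_GEV))

def shift_to_beam_frame(eta, sqrt_snn_gev):
    """Re-express pseudorapidities as eta - y_beam; if limiting
    fragmentation holds, curves from different beam energies overlap
    near zero in this shifted variable."""
    yb = beam_rapidity(sqrt_snn_gev)
    return [e - yb for e in eta]

print(round(beam_rapidity(62.4), 2), round(beam_rapidity(200.0), 2))  # ~4.2, ~5.36
```

Shifting the 62.4 and 200 GeV curves by their respective y_beam values (about 4.2 and 5.4 units) is what brings them onto the common curve described above.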
This behavior is known in the heavy-ion literature as limiting fragmentation, and had previously been observed in Au+Au collisions as a function of beam energy by STAR [42] and PHOBOS [41]. The term 'limiting fragmentation' was originally employed by Feynman [63] and Benecke et al. [64] to describe the analogous phenomenon of the measured µ+/µ− ratio in cosmic ray showers at sea level being almost independent of muon energy.

D. Centrality dependence of v1

Fig. 4 shows v 1 as a function of collision centrality in 62.4 and 200 GeV Au+Au [43] from STAR and in 2.76 TeV Pb+Pb [55] from ALICE, for the mid-pseudorapidity region |η| < 1.3. Fig. 4 also shows v 1 (divided by 6 in order to fit conveniently on a common scale) as a function of collision centrality in 200 GeV Au+Au [43] from STAR at the forward pseudorapidity region 2.5 < |η| < 4.0. Directed flow magnitude at mid-pseudorapidity increases monotonically going from central to peripheral collisions at all three beam energies, and there is a strong trend for this magnitude to decrease with increasing beam energy. A similar but stronger centrality dependence is observed at forward pseudorapidity, and the magnitude also increases strongly from mid to forward pseudorapidity. It has been pointed out by Caines [65] that many aspects of soft physics (defined as p T < 2 GeV/c) in heavy-ion collisions at relativistic energies depend only on event multiplicity per unit rapidity; in other words, for a fixed value of dN ch /dη, there is no significant dependence on beam energy, or on centrality, or on the mass of the colliding system. This type of scaling is called "entropy-driven" soft physics. The directed flow results for two beam energies and two colliding systems reported by STAR in Ref. [43] (see Fig. 3) do not follow this pattern of entropy-driven multiplicity scaling.

[Caption fragment, Fig. 4 — ... [43] and one LHC dataset, 2.76 TeV Pb+Pb [55], are shown at mid-pseudorapidity, while only 200 GeV Au+Au [43] is shown at forward pseudorapidity.]
Note that the data points at forward pseudorapidity are divided by 6 in order to be plotted on a common scale with the data at mid-pseudorapidity. This type of scaling is observed to hold, with caveats, for homogeneity lengths from femtoscopy [65,67], for elliptic flow per average participant eccentricity [65,68], and for various strangeness yields [65]. More recent data from the LHC also reveal examples of entropy-driven multiplicity scaling [66].

III. DIRECTED FLOW OF CHARGED PARTICLES IN MASS-ASYMMETRIC COLLISIONS

In mass-asymmetric collisions like Cu+Au, the well-defined distinction that exists in mass-symmetric collisions between the odd v 1 (η) component (a hydrodynamic effect correlated with the reaction plane) and the even v 1 (η) component (an initial-state fluctuation effect unrelated to the reaction plane) no longer holds. In a recent paper, the PHENIX collaboration reports midrapidity charged hadron v 1 (p T ) in Cu+Au collisions at √sNN = 200 GeV for centralities of 10-20%, 20-30%, 30-40% and 40-50%, using spectator neutrons from the Au side of the collision to determine the event plane; see Fig. 5 [69]. However, they preserve the standard convention for the sign of v 1 by defining the direction of bounce-off by remnants of the first nucleus in the A+A system (Cu) to be positive. An even more recent paper from STAR reports v 1 (p T ) distributions for the same system and centrality that are consistent within errors [70].

FIG. 6. (Color online) Directed flow as a function of pseudorapidity, separately evaluated for positively and negatively charged particles, at 1 < pT < 2 GeV/c and centrality 10-40% in 200 GeV Cu+Au collisions in the STAR detector [70].

The PHENIX results in Fig.
5 [69] reveal that the higher p T particles at midrapidity (above 1 or 1.5 GeV/c in p T ), and at all the studied centralities, have negative v 1 and so are preferentially emitted with azimuths parallel to the Au fragment bounce-off direction (and antiparallel to the Cu fragment bounce-off direction). Whether or not the more abundant particles below 1 GeV/c are preferentially emitted with opposite azimuths, as might be expected based on momentum conservation, cannot be answered within the systematic uncertainty of the measurements [69]. In the STAR collaboration's analysis of charged particle directed flow in Cu+Au collisions at √ s N N = 200 GeV, a particular focus is the v 1 difference between positive and negative charges. This difference has the potential to be sensitive to the strong electric field between the two incident ions, whose electric charges differ by 79 − 29 = 50 units; this field has a lifetime on the order of a fraction of a fm/c. Fig. 6 [70] presents v 1 (η) at medium p T (1 < p T < 2 GeV/c) and intermediate centrality (10-40%). Like the PHENIX result, this v 1 measurement was made relative to the event plane from spectator neutrons, dominated by the Au side. It is evident from Fig. 6 that both even and odd components are present, and most interestingly, there is a significant pattern showing a larger magnitude for negative particles. The Parton-Hadron String Dynamics (PHSD) model [71,72], when the initial electric field is explicitly modeled, predicts a v 1 difference signal that is an order of magnitude larger than observed [70]. On the other hand, parton distribution functions [73] can be used to estimate the number of quarks and antiquarks at very early times in relation to the number created in the collision; then given certain plausible assumptions, as set out in Ref. [70], it can be inferred that only a small fraction of the total quarks created in the collision are produced during the lifetime of the initial electric field. 
In addition to this important insight, the charge-dependent directed flow measurements in Cu+Au collisions offer new and valuable quantitative information with relevance to the Chiral Magnetic Effect [74,75] and the Chiral Magnetic Wave [76,77].

IV. DIFFERENTIAL MEASUREMENTS OF IDENTIFIED PARTICLE DIRECTED FLOW

The charged particle measurements reviewed in Section II are an admixture of all emitted particle species. Measurements of directed flow for identified particles offer more insights into the underlying physics that controls this observable. In this section, we discuss the dependence of v 1 on p T , y and centrality for several identified particle species.

A. Dependence of v1 on transverse momentum

Measurements of v 1 (p T ) for protons, antiprotons and charged pions have been reported by the E877 collaboration at the AGS (11A GeV/c) [30][31][32][33]. For antiprotons, large negative values of v 1 are observed for p T > 0.1 GeV/c, but with large statistical errors. For protons and charged pions, v 1 (p T ) in various rapidity gates has been published, and these results are also divided into various bins of transverse energy, E T , which is a proxy for centrality. The E877 collaboration also provides the information needed to convert from their intervals of E T into percent centrality [31]. The NA49 collaboration [38] measured proton and pion v 1 (p T ) in Pb+Pb collisions at a projectile kinetic energy of 158A GeV, as shown in Fig. 7. The rapidity gate for these measurements is 4 < y lab < 5, which corresponds to a forward region (midrapidity is y lab = 2.92). The NA49 collaboration describes the v 1 (p T ) behavior as 'peculiar', especially for pions. However, they point out that negative v 1 at low p T has been predicted by Voloshin [78], and is explained by the interaction of radial and directed flow. Various types of non-flow effect were also mentioned as possible contributors to the observed pion behavior [38].

B.
Dependence of v1 on rapidity

Various models suggest that the structure of v 1 (y) near midrapidity, especially the pattern for baryons, is sensitive to the QCD equation of state, and therefore this signal can be used to investigate QGP production and changes of phase [8-10, 22, 79, 80]. Fig. 8 presents proton and pion v 1 (y) in central, mid-central and peripheral Pb+Pb collisions at projectile kinetic energies of 40A and 158A GeV [39], as reported by the NA49 collaboration at the CERN SPS. The data points at negative rapidity are mirrored from the positive side. The v 1 (y) for pions is similar in magnitude and shape at both 40A and 158A GeV. The proton v 1 (y) measurements suggest that proton anti-flow (also called wiggle) [10, 22, 47-49], which is not observed at the AGS (see Fig. 9), might begin to happen at SPS energies. However, based on detailed studies using a variety of methods, including the approach introduced by Borghini et al. [81], the NA49 collaboration reports that the observed pattern of proton v 1 (y) at SPS energies could be influenced by non-flow effects [39]. The EOS-E895 experiment carried out a beam energy scan at the Brookhaven AGS during 1996. E895 featured the first Time Projection Chamber with pad readout, and reported directed flow in the form of v 1 , as well as in the form of the older observable p x [82] (in-plane transverse momentum), for several identified particle species: p, p̄, Λ, K 0 S and K ± , in Au+Au collisions at projectile kinetic energies 2A, 4A, 6A and 8A GeV [34][35][36][37]. The left panel in Fig. 9 shows v 1 (y′) for protons at the four E895 beam energies, where y′ denotes normalized rapidity such that the target and projectile are always at y′ = −1 and +1, respectively. The slope of v 1 remains positive for all four beam energies. The slope of the rapidity dependence of directed flow was extracted by E895 using a cubic fit, v 1 (y′) = F y′ + C y′³. The right panel of Fig.
9 shows the fitted proton slope dv 1 /dy (not using normalized rapidity), and unlike in Ref. [34], the horizontal axis in the right panel of Fig. 9 uses the now-conventional beam energy scale √sNN. Two additional points for proton dv 1 /dy in Au+Au collisions are also plotted here: a measurement at the top energy of the Berkeley Bevalac (1.2A GeV) [83] using the same EOS TPC detector as in E895, and an E877 measurement at the top AGS energy of 11A GeV [31]. The plotted data show a peak near 2A GeV beam kinetic energy (√sNN ∼ 2.7 GeV), and thereafter, a smooth decrease with beam energy across the AGS range. Three-fluid hydrodynamic calculations [22] predict a "softest point" in the equation of state in the AGS energy range, but the E895 beam energy scan did not reveal any non-monotonic behavior in the energy dependence. Overall, hadronic models with a momentum-dependent mean field [84] show better agreement with data at AGS and SPS energies.

[Caption fragment, Fig. 9 — Left panel: v1(y′) for protons at the four E895 beam energies [34]. Right panel: beam energy dependence of the slope dv1/dy (here using un-normalized rapidity) for protons, re-plotted using the more ubiquitous beam energy scale √sNN [34]. A point from the same detector at the Bevalac [83] and another from E877 at the top AGS energy [31] are also plotted.]

Phase-I of the beam energy scan (BES) [85,86] at RHIC was motivated in part by the possibility that features like a first-order phase transition and a critical point may become evident in nuclear collisions as the beam energy is scanned across RHIC's BES region. The STAR experiment initially reported measurements of directed flow for protons, antiprotons and charged pions in the energy range of 7.7 to 200 GeV [46]. At the 2015 Quark Matter meeting, preliminary directed flow data at BES energies for nine particle species were presented, along with v 1 (y) slopes for protons, Λ and π + in 9 bins of centrality [90]. Fig. 10 shows v 1 (y) at intermediate centrality (10-40%) for p, Λ, p̄, Λ̄, K ± , K 0 S , and π ± at 7.7 to 39 GeV.
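The E895-style cubic fit described above, v 1 (y′) = F y′ + C y′³, is a two-parameter linear least-squares problem in the coefficients. A sketch with synthetic input values (not E895 data):

```python
import numpy as np

def fit_f_and_c(y, v1):
    """Least-squares fit of the odd cubic v1(y) = F*y + C*y**3 used by
    E895 to quantify the directed flow slope; returns (F, C)."""
    y = np.asarray(y, dtype=float)
    design = np.column_stack([y, y ** 3])   # no even terms, per odd symmetry
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(v1, dtype=float), rcond=None)
    return float(coeffs[0]), float(coeffs[1])

# Synthetic check with invented coefficients: F = 0.04, C = -0.01.
y = np.linspace(-1.0, 1.0, 11)
F, C = fit_f_and_c(y, 0.04 * y - 0.01 * y ** 3)
print(round(F, 3), round(C, 3))  # 0.04 -0.01
```

The linear coefficient F plays the role of the midrapidity slope dv1/dy′; a linear fit over a restricted |y| window, as used in the STAR BES analysis below, is the special case with the cubic term dropped.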
The overall strength of directed flow has been characterized by the slope dv 1 /dy from a linear fit over the range −0.8 < y < 0.8 [90], and the beam energy dependence of these slopes for the same nine particle species (p, Λ, p̄, Λ̄, K ± , K 0 S , and π ± ) in 10-40% centrality Au+Au collisions is presented in Fig. 11. The most noteworthy feature of the data is a minimum within the √sNN range of 10 to 20 GeV in the slope dv 1 /dy| y∼0 for protons at intermediate centrality. The same quantity for Λ hyperons is consistent with the proton result, but the larger statistical errors for Λ do not allow an independent determination of a possible minimum in the beam energy dependence for this species. The proton and Λ directed flow slopes change from positive to negative close to 11.5 GeV, and remain negative at all the remaining energies (up to and including 200 GeV in the case of protons). The remaining species have negative slope at all the studied beam energies. At 7.7 GeV, dv 1 /dy for K − is closer to zero than dv 1 /dy for K + , which supports the inference that K + and K − experience nuclear potentials that are repulsive and attractive, respectively, under the conditions created at this beam energy [91].

C. Net-particle directed flow

There are two separate contributions to the energy dependence of proton directed flow in the vicinity of midrapidity: one part arises from baryon number transported from the initial state at beam rapidity towards y ∼ 0 by the stopping process of the collision, while the other part arises from baryon-antibaryon pairs produced in the fireball near midrapidity. Clearly, these two contributions have very different dependence on beam energy, and disentangling them has a good potential to generate new insights. Towards this end, the STAR collaboration has defined [46] net-proton directed flow according to

[v 1 (y)] p = r(y) [v 1 (y)] p̄ + [1 − r(y)] [v 1 (y)] net-p ,

where r(y) is the observed rapidity dependence of the antiproton to proton ratio.
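The decomposition just defined can be inverted bin by bin to isolate the net-proton term; a sketch with invented numbers (the function name is ours, not STAR's, and the inversion is only valid where the ratio r < 1):

```python
def net_particle_v1(v1_particle, v1_antiparticle, r):
    """Invert v1_p = r*v1_pbar + (1 - r)*v1_net to isolate the directed
    flow of the transported ('net') component in one rapidity bin."""
    if not 0.0 <= r < 1.0:
        raise ValueError("antiparticle/particle ratio r must be in [0, 1)")
    return (v1_particle - r * v1_antiparticle) / (1.0 - r)

# Invented inputs for a single rapidity bin (not STAR data):
print(round(net_particle_v1(-0.002, -0.010, 0.5), 4))  # 0.006
```

As the text notes, r → 0 at low beam energies, where net-proton and proton v1 coincide; at higher energies r grows and the two observables separate.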
Net-kaon v 1 (y) is defined analogously, with K + and K − substituted for p and p̄, respectively [90]. STAR's measurements of net-proton and net-kaon directed flow slope as a function of beam energy are reproduced in Fig. 12 [90]. The net-proton directed flow shows a double sign change, and a clear minimum around the same beam energy where the proton directed flow has its minimum. This coincidence is not surprising, since antibaryon production is low at and below the beam energy of the minimum, and therefore net-proton and proton observables only begin to deviate from each other at energies above the minimum. The observed minimum in directed flow for protons and net-protons resembles the predicted "softest point collapse" of directed flow [19,22], and these authors are open to an interpretation in terms of a first-order phase transition. Other theorists (see Section V) point out that mechanisms other than a first-order phase transition can cause a drop in pressure (a softening of the equation of state), and therefore a definitive conclusion requires more research. The net-kaon directed flow reproduced in Fig. 12 [90] shows close agreement with the net-proton result near and above 11.5 GeV, but deviates very strongly at 7.7 GeV. This deviation is not understood. To date, all of the several model comparisons with STAR v 1 measurements at BES energies have considered only particle v 1 as opposed to net-particle v 1 .

V. RECENT MODEL CALCULATIONS OF DIRECTED FLOW

Models that explicitly incorporate properties of the Quantum Chromodynamics phase diagram and its equation of state (typically hydrodynamic models) suggest that the magnitude of directed flow is an excellent indicator of the relative pressure during the early, high-density stage of the collision.
Therefore, directed flow at sub-AGS beam energies can reveal information about hadron gas incompressibility, while at the higher energies that are a focus of the present review, it can flag the softening, or drop in pressure, that may accompany a transition to a different phase, notably Quark Gluon Plasma. For example, there may be a spinodal decomposition associated with a first-order phase transition [92,93], which would cause a large softening effect. However, interpretation of flow measurements is not straightforward, and it is known that directed flow can also be sensitive to poorly-understood model inputs like momentum-dependent potentials [91] in the nuclear medium. More theoretical work is needed to elucidate the quantitative connection between softening signatures, like the beam energy dependence directed flow, and QCD phase changes. During the period since publication of the STAR BES directed flow results in 2014, there have been several theoretical papers [19,20,91,94,95] aimed towards interpretation of these measurements. The Frankfurt hybrid model [96] used for the data comparison by Steinheimer et al. [94] is based on a Boltzmann transport approach similar to UrQMD for the initial and late stages of the collision process, while a hydrodynamic evolution is employed for the intermediate hot and dense stage. The equation of state for the hydro stage includes crossover and first-order phase transition options. The data comparison by Konchakovski et al. [20] uses the Parton-Hadron String Dynamics (PHSD) model [71] of the Giessen group, a microscopic approach with a crossover equation of state having properties similar to the crossover of lattice QCD [87][88][89]. The PHSD code also has a mode named Hadron String Dynamics (HSD), which features purely hadronic physics throughout the collision evolution, and which yields directed flow predictions in close agreement with those [46] of the UrQMD model. 
The data comparison by Ivanov and Soldatov [95] uses a relativistic 3-fluid hydrodynamic model (3FD) [97] with equations of state that include a crossover option and a first-order phase transition option. The most recent comparison to the STAR BES v1 data, by Nara et al. [19], uses the Jet AA Microscopic (JAM) model [98]. JAM is a purely hadronic Boltzmann transport code, but the authors of Ref. [19] introduce an option to switch from the normal stochastic binary scattering style to a modified style where the elementary two-body scatterings are always oriented like attractive orbits [99,100]. They argue that the switch-over from random to attractive binary orbits mimics the softening effect of a first-order phase transition. Fig. 13 focuses on the most promising directed flow measurement from the RHIC Beam Energy Scan, namely dv1/dy|y∼0 for protons at 10-40% centrality, and summarizes recent model comparisons [19,20,94,95] with these data. The model authors are largely in agreement that the data disfavor models with purely hadronic physics. However, some conclude that a crossover deconfinement transition is favored [20,95], while others conclude that a first-order phase transition is still a possible explanation [19]. Note that the argument of Nara et al. [19] is that a more sophisticated implementation of a first-order phase transition would transition from the 'JAM' curve at low BES energies to the 'JAM-attractive' curve at higher BES energies. Overall, Fig. 13 underlines the fact that no option in any of the model calculations to date reproduces, even qualitatively, the most striking feature of the data, namely the minimum in proton directed flow in the region of √sNN ∼ 10-20 GeV. It is also noteworthy that the v1 difference between nominally equivalent equation-of-state implementations in different models is very large.
For example, the difference in dv1/dy between the first-order phase transition in the hybrid model [94] and the nominally similar option in the 3FD model [95] (see Fig. 14) is currently more than an order of magnitude larger than the experimental measurement being interpreted, and larger still compared with the error on the measured data.

VI. SUMMARY AND OUTLOOK

In this review, we discuss heavy-ion directed flow results for charged particles and for identified particle types, covering beam energies from the Brookhaven AGS to the CERN LHC. Charged particle directed flow measurements have been published as a function of transverse momentum, pseudorapidity, and collision centrality, while Cu+Cu and Au+Au systems have also been compared. The charged particle directed flow magnitude at the LHC is a factor of three smaller than that at top RHIC energy. The observations from RHIC suggest that the charged particle directed flow is independent of system size, but depends on the incident beam energy. Limiting fragmentation scaling is observed for v1 at RHIC energies, but entropy-driven multiplicity scaling in terms of dNch/dη is not seen at RHIC. In mass-asymmetric collisions, specifically Cu+Au, recent directed flow measurements at √sNN = 200 GeV have opened a new window onto quark and antiquark formation at the very earliest times of the collision evolution (t ≤ 0.25 fm/c) and could clarify theoretical and experimental questions related to the Chiral Magnetic Effect and the Chiral Magnetic Wave. Measurements of v1 for identified species offer deeper insights into the development of hydrodynamic flow. Opposite v1 for pions and protons at AGS/SPS and at lower RHIC energies suggests an important role for nuclear shadowing. Signals of anti-flow of neutral kaons in AGS/E895, together with kaon measurements in the RHIC Beam Energy Scan region, point to kaon-nucleon potential effects.
The single sign change in the proton v1 slope and the double sign change in the net-proton v1 slope, with a clear minimum around √sNN ∼ 11.5-19.6 GeV, show a qualitative resemblance to a hydrodynamic model prediction called the "softest point collapse of flow". This original prediction assumed a first-order phase transition, but a crossover from hadron gas to a deconfined phase can also cause a softening (a drop in pressure). None of the current state-of-the-art models can explain the main features of the STAR directed flow measurements, and different models with nominally similar equations of state diverge from each other very widely over the BES range. Looking ahead to likely developments during the period 2017-2018 in the area of directed flow at √sNN of a few GeV and above, we can expect new BES Phase-I results for the φ meson, as well as final publication of current preliminary RHIC Beam Energy Scan Phase-I results, like those in Ref. [90]. We can also expect parallel theoretical work on related physics and interpretation of the newest data. The preliminary results for new particle species like Λs and charged and neutral kaons, as well as BES v1 for protons, Λs and pions in narrow bins of centrality (nine bins spanning 0-80% centrality), amount to very stringent constraints on the next round of theoretical interpretation in terms of the QCD phase diagram and equation of state. More comprehensive measurements of the many phenomenological aspects of directed flow outlined in this review will be possible beginning in the year 2019, as a consequence of the much increased statistics of Phase-II of the RHIC Beam Energy Scan (taking data in 2019 and 2020), in conjunction with the anticipated upgrades to the performance of the STAR detector [101]. Thereafter, new facilities like FAIR [102] in Germany, NICA [103] in Russia, and J-PARC-HI [104,105] in Japan, will begin coming online.
These new dedicated facilities will further strengthen worldwide research on the QCD phase diagram at high baryon chemical potential.

FIG. 2. (Color online) Left panel: Charged particle v1 as a function of η for 0-40% central Au+Au collisions at √sNN = 19.6, 62.4, 130 and 200 GeV, measured by PHOBOS at RHIC.

FIG. 3. (Color online) Left panel: Charged particle v1 as a function of η for 200 and 62.4 GeV Au+Au and Cu+Cu collisions [43]. Right panel: the same as the left panel, but with the x-axis shifted by y_beam.

FIG. 4. (Color online) v1 of charged particles as a function of centrality for a mid-pseudorapidity region (|η| < 1.3) and a forward pseudorapidity region (2.5 < |η| < 4.0), for two RHIC datasets, 62.4 and 200 GeV Au+Au.

FIG. 5. (Color online) Midrapidity v1(pT) in Cu+Au collisions at √sNN = 200 GeV in four bins of centrality, as reported by the PHENIX collaboration [69].

FIG. 7. (Color online) Pion and proton v1(pT) at 4 < y_lab < 5 in Pb+Pb collisions at the SPS, at a projectile kinetic energy of 158A GeV [38].

FIG. 8. (Color online) Pion and proton v1(y) for three centralities in 40A and 158A GeV Pb+Pb collisions at the SPS [39]. The open markers at negative rapidities were obtained by reflecting the solid markers at positive rapidities, where the detector acceptance was optimum.

FIG. 9. (Color online) Left panel: v1(y) for protons in 2A, 4A, 6A and 8A GeV Au+Au collisions measured by E895.

FIG. 10.
(Color online) Directed flow as a function of rapidity for p, Λ, p̄, Λ̄, K±, K0S, and π± in 10-40% centrality Au+Au collisions, at √sNN values of 7.7, 11.5, 14.5, 19.6, 27 and 39 GeV [46, 90]. The magnitude of v1 for Λ̄ at 7.7 GeV is divided by 5 to fit on the same vertical scale as all the other panels.

FIG. 11. (Color online) Beam energy dependence of the v1(y) slope for p, Λ, p̄, Λ̄, K±, K0S, and π± in 10-40% centrality Au+Au collisions [46, 90].

FIG. 12. (Color online) Beam energy dependence of the slope of v1(y) for net protons and net kaons in 10-40% centrality Au+Au collisions, as reported by STAR [46, 90].

FIG. 14. (Color online) Beam energy dependence of directed flow slope for protons in 10-40% centrality Au+Au from the STAR experiment, compared with recent hybrid [94] and 3FD [95] model calculations. All the experimental data are from Ref. [46] except for one energy point, √sNN = 14.5 GeV [90], which should be considered a preliminary measurement.

FIG. 1. (Color online) Left panel: Charged particle v1 as a function of transverse momentum in 200 GeV Au+Au collisions at RHIC, for two centralities and two pseudorapidity windows (|η| < 1.3 and 2.5 < η < 4) [43]. Right panel: Comparison of RHIC results with midrapidity measurements in 2.76 TeV Pb+Pb collisions at the LHC [55].

FIG. 13. (Color online) Beam energy dependence of directed flow slope for protons in 10-40% centrality Au+Au from the STAR experiment, compared with recent available model calculations [19,20,95]. All the experimental data are from Ref. [46] except for one energy point, √sNN = 14.5 GeV [90], which should be considered a preliminary measurement.
The Frankfurt hybrid model [94], as well as a pure hydro calculation with particle freeze-out at constant energy density [94], both lie above the data and are off-scale at all BES energies.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

I. Arsene et al. (BRAHMS Collaboration), Nucl. Phys. A757, 1 (2005).
B. B. Back et al. (PHOBOS Collaboration), Nucl. Phys. A757, 28 (2005).
J. Adams et al. (STAR Collaboration), Nucl. Phys. A757, 102 (2005).
K. Adcox et al. (PHENIX Collaboration), Nucl. Phys. A757, 184 (2005).
B. Muller and J. L. Nagle, Annu. Rev. Nucl. Part. Sci. 56, 93 (2006).
B. Jacak and P. Steinberg, Phys. Today 63N5, 39 (2010).
S. A. Bass et al., Hot & Dense QCD Matter, White Paper submitted to the 2012 Nuclear Science Advisory Committee; https://www.bnl.gov/npp/docs/Bass RHI WP final.pdf
W. Reisdorf and H. G. Ritter, Annu. Rev. Nucl. Part. Sci. 47, 663 (1997).
H. Sorge, Phys. Rev. Lett. 78, 2309 (1997).
N. Herrmann, J. P. Wessels and T. Wienold, Annu. Rev. Nucl. Part. Sci. 49, 581 (1999).
M. Gyulassy, K. A. Frankel and H. Stöcker, Phys. Lett. 110B, 185 (1982).
P. Danielewicz and M. Gyulassy, Phys. Lett. 129B, 283 (1983).
H. A. Gustafsson et al., Phys. Rev. Lett. 52, 1590 (1984).
S. Voloshin and Y. Zhang, Z. Phys. C 70, 665 (1996).
A. M. Poskanzer and S. A. Voloshin, Phys. Rev. C 58, 1671 (1998).
A. Bilandzic, R. Snellings and S. Voloshin, Phys. Rev. C 83, 044913 (2011).
D. Teaney and L. Yan, Phys. Rev. C 83, 064904 (2011).
M. Luzum and J.-Y. Ollitrault, Phys. Rev. Lett. 106, 102301 (2011).
Y. Nara, A. Ohnishi and H. Stöcker, arXiv:1601.07692 [hep-ph].
V. P. Konchakovski, W. Cassing, Yu. B. Ivanov and V. D. Toneev, Phys. Rev. C 90, 014903 (2014).
U. W. Heinz, in Relativistic Heavy Ion Physics, Landolt-Boernstein New Series, Vol. I/23, edited by R. Stock (Springer Verlag, New York, 2010).
H. Stöcker, Nucl. Phys. A750, 121 (2005).
S. A. Bass et al., Prog. Part. Nucl. Phys. 41, 255 (1998).
M. Bleicher, E. Zabrodin, C. Spieles, S. A. Bass, C. Ernst, S. Soff, L. Bravina, M. Belkacem, H. Weber, H. Stöcker and W. Greiner, J. Phys. G 25, 1859 (1999).
S. Z. Belenkij and L. D. Landau, Nuovo Cimento Suppl. 3, 15 (1956).
G. F. Chapline, M. H. Johnson, E. Teller and M. S. Weiss, Phys. Rev. D 8, 4302 (1973).
W. Scheid, H. Müller and W. Greiner, Phys. Rev. Lett. 32, 741 (1974).
J. Hofmann, H. Stöcker, U. Heinz, W. Scheid and W. Greiner, Phys. Rev. Lett. 36, 88 (1976).
H. Stöcker, J. A. Maruhn and W. Greiner, Phys. Rev. Lett. 44, 725 (1980).
J. Barrette et al. (E877 Collaboration), Nucl. Phys. A590, 259c (1995).
J. Barrette et al. (E877 Collaboration), Phys. Rev. C 56, 3254 (1997).
J. Barrette et al. (E877 Collaboration), Phys. Rev. C 59, 884 (1999).
J. Barrette et al. (E877 Collaboration), Phys. Lett. B485, 319 (2000).
H. Liu et al. (E895 Collaboration), Phys. Rev. Lett. 84, 5488 (2000).
P. Chung et al. (E895 Collaboration), Phys. Rev. Lett. 85, 940 (2000).
P. Chung et al. (E895 Collaboration), Phys. Rev. Lett. 86, 2533 (2001).
C. Pinkenburg et al. (E895 Collaboration), Proc. of Quark Matter 2001, Stony Brook, New York, Nucl. Phys. A698, 495c (2002).
C. Alt et al. (NA49 Collaboration), Phys. Rev. Lett. 80, 4136 (1998).
C. Alt et al. (NA49 Collaboration), Phys. Rev. C 68, 034903 (2003).
J. Adams et al. (STAR Collaboration), Phys. Rev. C 72, 014904 (2005).
B. B. Back et al. (PHOBOS Collaboration), Phys. Rev. Lett. 97, 012301 (2006).
J. Adams et al. (STAR Collaboration), Phys. Rev. C 73, 034903 (2006).
B. I. Abelev et al. (STAR Collaboration), Phys. Rev. Lett. 101, 252301 (2008).
G. Agakishiev et al. (STAR Collaboration), Phys. Rev. C 85, 014901 (2012).
L. Adamczyk et al. (STAR Collaboration), Phys. Rev. Lett. 108, 202301 (2012).
L. Adamczyk et al. (STAR Collaboration), Phys. Rev. Lett. 112, 162301 (2014).
L. P. Csernai and D. Rohrich, Phys. Lett. B 458, 454 (1999).
J. Brachmann et al., Phys. Rev. C 61, 024909 (2000).
R. J. M. Snellings, H. Sorge, S. A. Voloshin, F. Q. Wang and N. Xu, Phys. Rev. Lett. 84, 2803 (2000).
Y. Guo, F. Liu and A. H. Tang, Phys. Rev. C 86, 044901 (2012).
P. Bozek and I. Wyskiel, Phys. Rev. C 81, 2803 (2000).
STAR ZDC-SMD proposal, STAR Note SN-0448 (2003).
G. Wang, PhD thesis, Kent State University, 2005; https://drupal.star.bnl.gov/STAR/theses.
The STAR ZDC-SMD has the same structure as the STAR EEMC SMD: C. E. Allgower et al., Nucl. Instr. Meth. A 499, 740 (2003).
B. Abelev et al. (ALICE Collaboration), Phys. Rev. Lett. 111, 232302 (2013).
U. Heinz and P. Kolb, J. Phys. G 30, S1229 (2004).
J. Adams et al. (STAR Collaboration), Phys. Lett. B 655, 104 (2007).
J. Bleibel, G. Burau and C. Fuchs, Phys. Lett. B 659, 520 (2008).
L. P. Csernai, V. K. Magas, H. Stöcker and D. D. Strottman, Phys. Rev. C 84, 024914 (2011).
Z.-W. Lin and C. M. Ko, Phys. Rev. C 65, 034904 (2002).
Z.-W. Lin, C. M. Ko, B.-A. Li, B. Zhang and S. Pal, Phys. Rev. C 72, 064901 (2005).
L.-W. Chen, V. Greco, C. M. Ko and P. F. Kolb, Phys. Lett. B605, 95 (2005).
R. P. Feynman, Phys. Rev. Lett. 23, 1415 (1969).
J. Benecke, T. T. Chou, C.-N. Yang and E. Yen, Phys. Rev. 188, 2159 (1969).
H. Caines, Eur. Phys. J. C49, 297 (2007); arXiv:nucl-ex/0609004.
H. Caines, private communication, 2016.
M. Lisa, arXiv:nucl-ex/0512008.
B. Alver et al. (PHOBOS Collaboration), Phys. Rev. Lett. 98, 242302 (2007).
A. Adare et al. (PHENIX Collaboration), submitted to Phys. Rev. C; arXiv:1509.07784v1.
L. Adamczyk et al. (STAR Collaboration), submitted for publication; arXiv:1608.04100.
W. Cassing, Eur. Phys. J.: Spec. Top. 168, 3 (2009).
V. Voronyuk, V. D. Toneev, S. A. Sergei and W. Cassing, Phys. Rev. C 90, 064903 (2014).
D. Kharzeev, R. D. Pisarski and M. H. G. Tytgat, Phys. Rev. Lett. 81, 512 (1998).
D. Kharzeev and R. D. Pisarski, Phys. Rev. D 61, 111901 (2000).
D. E. Kharzeev and H. Yee, Phys. Rev. D 83, 085007 (2011).
Y. Burnier, D. E. Kharzeev, J. Liao and H.-U. Yee, Phys. Rev. Lett. 107, 052303 (2011).
S. A. Voloshin, Phys. Rev. C 55, R1630 (1997).
P. Kolb and U. Heinz, nucl-th/0305084.
P. Huovinen and P. V. Ruuskanen, Annu. Rev. Nucl. Part. Sci. 56, 163 (2006).
N. Borghini, P. M. Dinh and J.-Y. Ollitrault, Phys. Rev. C 66, 014905 (2002).
P. Danielewicz and G. Odyniec, Phys. Lett. 157B, 146 (1985).
M. D. Partlan et al. (EOS Collaboration), Phys. Rev. Lett. 75, 2100 (1995).
M. Isse, A. Ohnishi, N. Otuka, P. K. Sahu and Y. Nara, Phys. Rev. C 72, 064908 (2005).
B. I. Abelev et al. (STAR Collaboration), STAR Note SN0493 (2009).
M. M. Aggarwal et al. (STAR Collaboration), arXiv:1007.2613.
F. Karsch et al., Nucl. Phys. B Proc. Suppl. 129, 614 (2004).
Y. Aoki, G. Endrodi, Z. Fodor, S. D. Katz and K. K. Szabo, Nature 443, 675 (2006).
M. Cheng et al., Phys. Rev. D 79, 074505 (2009).
P. Shanmuganathan for the STAR Collaboration, Proc. of Quark Matter 2015, Kobe, Japan, Nucl. Phys. A (in press); arXiv:1512.09009v1.
W. Cassing, V. P. Konchakovski, A. Palmese, V. D. Toneev and E. L. Bratkovskaya, Proc. 3rd Int. Conf. on New Frontiers in Physics, Kolymbari, Crete, 2014, EPJ Web Conf. 95, 01004 (2015); arXiv:1408.4313 [nucl-th].
P. Shukla and A. K. Mohanty, Phys. Rev. C 64, 054910 (2001).
A. Bessa, E. S. Fraga and B. W. Mintz, Phys. Rev. D 79, 034012 (2009).
J. Steinheimer, J. Auvinen, H. Petersen, M. Bleicher and H. Stöcker, Phys. Rev. C 89, 054913 (2014).
Yu. B. Ivanov and A. A. Soldatov, Phys. Rev. C 91, 024915 (2015).
H. Petersen, J. Steinheimer, G. Burau, M. Bleicher and H. Stöcker, Phys. Rev. C 78, 044901 (2008).
Yu. B. Ivanov, V. N. Russkikh and V. D. Toneev, Phys. Rev. C 73, 044904 (2006).
Y. Nara, N. Otuka, A. Ohnishi, K. Niita and S. Chiba, Phys. Rev. C 61, 024901 (2000).
D. E. Kahana, D. Keane, Y. Pang, T. Schlagel and S. Wang, Phys. Rev. Lett. 74, 4404 (1995).
D. E. Kahana, Y. Pang and E. V. Shuryak, Phys. Rev. C 56, 481 (1997).
STAR Collaboration, BES Phase-II Whitepaper, STAR Note SN0598 (2014).
H. Sako et al., Nucl. Phys. A 931, 1158 (2014).
H. Sako et al., White Paper for J-PARC Heavy-Ion Program, April 2016, http://silver.j-parc.jp/sako/white-paper-v1.17.pdf
QCD and a new paradigm for nuclear structure

A. W. Thomas
CSSM and ARC Centre of Excellence for Particle Physics at the Terascale, School of Chemistry and Physics, University of Adelaide, Adelaide SA 5005, Australia

arXiv:1606.05956; doi:10.1051/epjconf/201612301003

Abstract: We review the reasons why one might choose to seriously re-examine the traditional approach to nuclear theory, where nucleons are treated as immutable. This examination leads us to argue that the modification of the structure of the nucleon when immersed in a nuclear medium is fundamental to how atomic nuclei are built. Consistent with this approach we suggest key experiments which should tell us unambiguously whether there is such a change in the structure of a bound nucleon. We also briefly report on extremely promising recent calculations of the structure of nuclei across the periodic table based upon this idea.
Introduction

Since the discovery of the neutron in the 1930s, the overwhelming majority of theoretical studies of nuclear structure have adopted the hypothesis that the protons and neutrons inside a nucleus are immutable objects whose internal structure never changes. These immutable objects interact through non-relativistic two- and three-body forces, and the challenge is primarily to solve the many-body problem accurately. The phenomenological forces used include physics such as Yukawa's pion exchange, and as a consequence the precise calculation of observables may require the inclusion of exchange-current corrections. Beginning with the famous one-boson-exchange potentials [1], it became clear that the dominant part of the intermediate-range attraction between nucleons had a Lorentz scalar, isoscalar character, which was phenomenologically represented by the exchange of a σ meson. For decades this meson was viewed as an artifact involving an unphysical meson used purely for convenience.
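For orientation, the scalar-isoscalar attraction referred to here has the familiar one-boson-exchange (Yukawa) form; schematically, for σ exchange between two nucleons (a standard textbook expression, with m_σ the σ mass and g_σN the σ-nucleon coupling),

```latex
V_\sigma(r) \;=\; -\,\frac{g_{\sigma N}^{2}}{4\pi}\,\frac{e^{-m_\sigma r}}{r},
```

so that a relatively light σ (mass of order 500-600 MeV) generates attraction at intermediate range, r ∼ 1 fm, while the heavier ω meson supplies the short-range Lorentz-vector repulsion.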
However, careful dispersion relation treatments of πN scattering in the past decade have shown that this state does indeed exist [2]. Confirmation of this Lorentz scalar, isoscalar character of the intermediate-range attraction in the NN force also came from dispersion relation studies by groups in Paris [3], Stony Brook and elsewhere. Walecka and coworkers exploited the Lorentz scalar nature of the NN attraction and the Lorentz vector character of the short-range repulsion to build a very successful, fully relativistic theory of nuclear matter [4] and later finite nuclei [5]. Here too the nucleons were immutable. All this was very satisfactory but for one vexatious issue. At nuclear matter densities the typical mean scalar field strength felt by a bound nucleon in the Walecka model is of order 500 MeV. This is a huge number. As a consequence, the effective mass of the bound nucleon is only one half of its free mass. (Work supported by the Australian Research Council through grants DP150103101 and CE110001104.) At around the same time as Walecka and collaborators developed their model, the theory of the strong interaction underwent a revolution. Quantum Chromodynamics was developed as a local gauge field theory built on color. It became clear that, by analogy with Rutherford's work on the nucleus within the atom, the natural explanation of the discovery of scaling at SLAC in the late 60's [6] was that the nucleon too was primarily empty space containing point-like quarks. From this more fundamental point of view the huge scalar field experienced by a bound nucleon is even more challenging. How can it be that the exchange of a scalar meson, which must couple to the confined quarks in the nucleon with such strength, can have no effect on the internal structure of the nucleon, which after all is far from point-like?
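The scale of this effect is worth making explicit. Combining the free nucleon mass, M_N ≈ 939 MeV, with the quoted mean scalar field strength (illustrative arithmetic based only on the numbers in the text),

```latex
M^{*}_{N} \;=\; M_{N} - g_{\sigma}\bar{\sigma}
        \;\approx\; 939~\mathrm{MeV} - 500~\mathrm{MeV}
        \;\approx\; 439~\mathrm{MeV} \;\approx\; 0.47\, M_{N},
```

which is the sense in which the effective mass of a bound nucleon in the Walecka model is only about half of its free value.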
Considerations like these led Guichon [7] to propose a dramatically different approach to nuclear binding, the Quark Meson Coupling (QMC) model, where the effect of the mean scalar field generated by other nucleons is treated self-consistently in solving for the wave function of each confined quark. Taking the simplest form for the coupling of the σ and ω mesons to quarks confined in the MIT bag model [8,9] means that in nuclear matter the vector field simply shifts the definition of the energy, while the scalar field modifies the Dirac wave function. This difference between the effects of the two Lorentz components of the nuclear mean field is crucial, as their effects more or less cancel when it comes to the total energy, but for the quark motion (or loosely speaking, wave function) the scalar field is not cancelled. A critical effect of the change in the quark wave function induced by an attractive scalar field is that the size of the lower Dirac component increases. In turn this reduces the value of ∫dV ψ̄ψ, which defines the overall strength with which the scalar field couples to the nucleon. This process is completely analogous to the way an atom rearranges its internal structure to oppose an applied electric field. Thus the parameter calculated within any particular quark model which describes this is called the "scalar polarizability", d. The overall scalar coupling to the nucleon is written in the simplest approximation as

g_σN(σ) = g_σN(0) − (d/2) g_σN(0)² σ .    (1)

In the MIT bag model d ≈ 0.22R, with R the bag radius. This behaviour is very straightforward and appears in all relativistic quark models used so far. Nevertheless, in terms of nuclear structure it is profound. Whereas the repulsion felt by each nucleon grows linearly with density, the scalar attraction saturates as the density rises and one naturally finds saturation of nuclear matter. This mechanism is both new and extremely effective.
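The feedback just described is easy to see numerically. The sketch below (with purely illustrative parameter values, not those of an actual QMC fit) evaluates the linearly weakening coupling of Eq. (1) and shows the resulting scalar attraction g_σN(σ)σ growing sub-linearly in the mean field, whereas a vector repulsion would keep growing linearly:

```python
# Illustration of the QMC "scalar polarizability" mechanism of Eq. (1).
# Parameter values are illustrative assumptions only (not a QMC fit):
# g0 is the free-space coupling; d is the scalar polarizability in MeV^-1
# (d ~ 0.22 R for an MIT bag of radius R ~ 0.8 fm gives d ~ 0.0009 MeV^-1).

def g_sigma(sigma_mev, g0=9.0, d=0.0009):
    """Effective coupling g_sigmaN(sigma) = g0 - (d/2) * g0**2 * sigma."""
    return g0 - 0.5 * d * g0**2 * sigma_mev

def scalar_attraction(sigma_mev, g0=9.0, d=0.0009):
    """Scalar attraction g_sigmaN(sigma) * sigma: because the coupling
    weakens as sigma grows, this rises more slowly than linearly."""
    return g_sigma(sigma_mev, g0, d) * sigma_mev

if __name__ == "__main__":
    for s in (0.0, 10.0, 20.0, 40.0):
        print(f"sigma = {s:5.1f} MeV   g = {g_sigma(s):.3f}   "
              f"g*sigma = {scalar_attraction(s):7.2f} MeV")
```

The specific numbers are placeholders; the point is only the shape: since d > 0, the coupling decreases with the mean field, so the attraction saturates while the linear vector repulsion does not, and nuclear matter saturates.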
As a result the mean scalar field felt by a nucleon at the saturation density of nuclear matter is just a few hundred MeV, much lower than that found in the Walecka model. Philosophically, this approach is radically different from anything done before, because the colourless clusters of quarks which occupy single-particle levels in nuclear matter may have nucleon quantum numbers but their internal structure is modified. Almost immediately it was shown [10] that this change could account for the key features of the famous nuclear EMC effect, discovered in the early 80's. Later the model was developed further by Guichon, Rodionov and Thomas [11] to correctly treat the effect of spurious centre-of-mass motion in the bag, which had resulted in anomalously small ωN couplings. In the same paper the model was also extended to finite nuclei, showing very naturally how one obtains realistic spin-orbit forces. Finally, since the model is built at the quark level, using the same quark model, with the same quark-meson coupling constants, one can derive the properties of any bound hadron. For example, it shows very naturally why the spin-orbit force for the Λ hyperon is extremely small [12,13]. For a complete overview of the phenomenological consequences of the QMC model we refer to the review by Saito et al. [14]. With the motivation for the QMC approach clearly established, one is naturally led to the following lines of investigation. First, given the success of the conventional approach to nuclear structure based upon non-relativistic two- and three-body forces, it is natural to ask how that is related to QMC. We address this in Section 2. Second, one may also ask what evidence there is to support the at first sight radical idea that the clusters of quarks bound in shell-model orbits actually have internal structure different from that of a free nucleon.
This is addressed in Section 3, where we anticipate the results of a critical experiment performed at Jefferson Lab, which are expected to appear soon. Section 4 summarises this new approach to the structure of the atomic nucleus and looks to further consequences of it.

2. Nuclear structure: a new force of the Skyrme type

It is worthwhile to begin with some remarks on the application of effective field theory (EFT) to nuclear structure, since that also is often treated as containing all of the consequences of QCD [15]. Certainly the systematic application of chiral effective field theory to the NN and NNN forces, and hence to nuclear structure, has proven quite powerful. Such an approach is built upon the symmetries of QCD and is often considered to be equivalent to it. The problem is that the EFT approach needs some power-counting scheme, which is a purely human construction. It also needs a set of hadronic degrees of freedom (dof), and that choice too is at the whim of the user. Finally, the EFT typically applied to nuclear problems is non-relativistic. The usual choice of dof are nucleons and pions. If these are indeed the appropriate dof, one is in luck. However, given the remarks in the Introduction, where we saw that on model-independent grounds the intermediate-range attraction between nucleons is a rather large Lorentz scalar, this is not so obvious. The attractive scalar and repulsive vector forces may cancel (in the central component of the nuclear force) to produce a relatively small amount of binding, but the effect of those two components on the internal structure of a nucleon is completely different. In an EFT the only way to include the effect of a change in the structure of a bound nucleon at the level of QCD is to include nucleon excited states amongst the dof. Typically this is limited to the Δ resonance, where we do know the relevant couplings quite well.
However, given that the σ meson has quantum numbers 0^{++}, one may expect that the inclusion of excitations like the Roper resonance [16] may be relevant. Unfortunately, we have so little knowledge of that state that, at the present time, it would be very difficult to include it in an EFT framework in any reliable manner. As a consequence, building an EFT of nuclei based upon nucleon and pion dof may not be as accurate an expression of QCD as it may appear at first sight. An alternative approach to developing an EFT for nuclear structure is based on the density functional approach. There one starts with the QMC model itself and develops a density functional equivalent to it. From this one can use the machinery developed around the Skyrme forces [17], which have proven so successful in the study of both nuclear structure and reactions. Indeed, using the density functional approach it has proven possible to develop a clear connection between the self-consistent treatment of in-medium hadron structure and the existence of many-body [18] or density-dependent [19] effective forces. Dutra et al. [20] critically examined a variety of phenomenological Skyrme models of the effective density-dependent nuclear force against the most up-to-date empirical constraints. Amongst the few percent of the Skyrme forces studied which satisfied all of these constraints, the Skyrme model SQMC700 was unique in that it was actually derived from the QMC model and hence incorporated the effects of the internal structure of the nucleon and its modification in-medium. Very recently, Stone, Guichon, Reinhard and Thomas [21] carried out a systematic study of the properties of atomic nuclei across the whole periodic table, using the new, effective, density-dependent NN force derived from the QMC model [19].
The study began by defining those combinations of the three fundamental couplings in the model (namely the σ, ω and ρ couplings to the up and down quarks) which reproduce the saturation density, binding energy per nucleon and symmetry energy of nuclear matter within the empirical uncertainties on these quantities. Then a search was carried out for the set of three parameters satisfying this nuclear matter constraint which best described the ground-state properties of a selection of more than 100 nuclei across the entire periodic table. The root-mean-square deviation of the fit from the actual binding energy for this set of nuclei was just 0.35%. For the superheavy nuclei where the binding energies are known, the deviation was a mere 0.1%. This level of agreement with the empirical binding energies is remarkable, in that it is comparable with the very best phenomenological Skyrme forces, which typically have 11 or more adjustable parameters. Not only does this derived effective NN force satisfactorily describe binding energies but, going beyond the nuclei used in the fit, it accurately describes the evolution of quadrupole deformation across isotopic chains, including shell closures. It also proved capable of describing the observed shape coexistence of prolate, oblate and spherical shapes in the Zr region. Finally, it naturally gave a double quadrupole-octupole phase transition in the Ra-Th region. These are remarkable successes given the extremely small number of parameters, and they suggest that it would be worthwhile to apply this derived effective force across a variety of challenges in modern nuclear physics.

3. Experimental tests

Almost immediately after the creation of the QMC model it was applied [10] to the modification of the valence quark distribution in nuclei discovered by the European Muon Collaboration (EMC), known as the EMC effect [22].
That early work was based on the MIT bag model, for which the calculation of structure functions is possible within some approximations [23], but complicated. More recently, the generalization of the QMC model to the NJL model, suggested by Bentz and Thomas [24], has also been applied to the EMC effect, with similar success [25]. The modification of the quark wave functions within the bound nucleons, because of the applied mean scalar field, naturally suppresses the valence distributions at large Bjorken x. While this approach is the only quantitative model of nuclear structure which is able to describe the nuclear EMC effect, it is not yet universally accepted as the explanation for it. For example, it has recently been suggested that the entire EMC effect should be attributed to an as yet uncalculable modification of the nucleons involved in short-range correlations [26], while the rest of the nucleons apparently remain totally unchanged.

Figure 1. Predictions (from Ref. [36]) for the Coulomb sum rule as a function of three-momentum transfer for nuclear matter at densities corresponding to ^{12}C and ^{208}Pb, with or without the effect of the in-medium modification of the nucleon electric form factors. Also shown are the GFMC calculations for ^{12}C (small points [37]) and older experimental data for ^{208}Pb [34,38]. Curves shown: free (ρ_B = 0) Hartree; free (ρ_B = 0) RPA; ^{12}C (ρ_B = 0.1) RPA; nuclear matter (ρ_B = 0.16) RPA; ^{208}Pb experiment.

Another feature of this approach to nuclear structure is that the elastic form factors of the nucleon are also modified in-medium [27]. Using the QMC model, predictions were made almost 20 years ago for the experiment being planned at Jefferson Lab to measure the ratio of the electric to magnetic form factors of a proton bound in ^{4}He [28]. A decade later the measurements were in remarkably good agreement with those predictions [29-31], showing a significant medium modification.
However, after the data appeared it was shown that they could also be fit by adding an unusually large polarised charge-exchange correction. Although we are aware of no data supporting that proposed correction, and of no proposal to check it experimentally, it has muddied the waters sufficiently that this cannot yet be regarded as a "smoking gun". Another suggestion, which seems far less susceptible to unknown nuclear corrections, involves the measurement of the longitudinal response function in inelastic electron scattering [34]. That was also examined in the late 1990s on the basis of the modification of the electric form factor of the proton [32], already mentioned. Very recently, inspired by the proposal of Meziani and collaborators [33] to make a definitive measurement of this quantity for several nuclei across the periodic table, this response function and the associated Coulomb sum rule of McVoy and Van Hove [35] were investigated using the NJL model to describe the structure of both the free and bound nucleons [36]. This work not only treated self-consistently the modification of bound-nucleon structure resulting from the mean scalar field but also included a state-of-the-art treatment of relativistic corrections and RPA correlations. The results are illustrated in Fig. 1. At high values of the momentum transfer, the effects of relativity and of the medium modification of the electric form factor of the proton in particular are both very significant. The older data certainly favour the new calculations, and it is clearly vital to have the results of the comprehensive new experiment from Jefferson Lab as soon as possible. The beauty of this particular measurement is that it appears to be extremely insensitive to other nuclear corrections, including the effect of short-range correlations.
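A rough schematic illustration (not the calculation of Ref. [36]; the dipole form factor and the 10% quenching scale below are hypothetical assumptions) of why an in-medium softening of the proton electric form factor suppresses the Coulomb sum extracted with the free form factor, increasingly so at large momentum transfer:

```python
# Toy model: experiments divide the longitudinal response by the FREE proton
# electric form factor; if the bound nucleon actually carries a softened
# (quenched) in-medium form factor, the extracted sum rule is reduced.

def gep_free(q2):
    """Free-proton electric form factor, standard dipole (q2 in GeV^2)."""
    return 1.0 / (1.0 + q2 / 0.71) ** 2

def gep_medium(q2, quench=0.10):
    """Hypothetical in-medium form factor: a softened dipole mass mimics a
    quenching that grows with momentum transfer."""
    return 1.0 / (1.0 + q2 / (0.71 * (1.0 - quench))) ** 2

def coulomb_sum_ratio(q2):
    """Suppression factor of the extracted Coulomb sum in this toy model."""
    return (gep_medium(q2) / gep_free(q2)) ** 2

# the suppression grows with the (squared) three-momentum transfer
ratios = [coulomb_sum_ratio(q2) for q2 in (0.1, 0.3, 0.5, 1.0)]
assert all(r < 1.0 for r in ratios)
assert all(ratios[k] > ratios[k + 1] for k in range(len(ratios) - 1))
```

This reproduces only the qualitative trend visible in Fig. 1: no suppression at low momentum transfer, growing suppression at high momentum transfer.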
4. Summary

We have presented a compelling argument that within the framework of QCD one is naturally led to the conclusion that the structure of a bound nucleon must differ from that in free space. This idea has been used to derive, starting from the quark level, a new, density-dependent effective nuclear force which has proven remarkably accurate in describing the properties of finite nuclei across the entire periodic table, while at the same time reproducing the known properties of nuclear matter. We trust that these remarkable results will inspire a great deal more work on nuclear structure within this framework over the coming years. We have seen that within the quantitative models of nuclear structure that have been developed within this approach, using either the MIT bag or the NJL model to describe nucleon structure, one finds a natural explanation of the nuclear EMC effect. There are also predictions for the modification of the electromagnetic form factors of the bound nucleon, for which the most unambiguous test is the Coulomb sum rule. There is an expectation that definitive new data for this will come from Jefferson Lab in the near future. Finally, we briefly mention a number of other consequences of this approach to nuclear structure which are both fascinating and the subject of experimental investigation in the near future. For example, a careful study of nuclear structure functions has shown that this approach predicts an important isovector component of the nuclear EMC effect [39]. For a nucleus like ^{56}Fe this leads to a correction to the Paschos-Wolfenstein relation which is of the right sign and magnitude to reduce the NuTeV anomaly by more than one standard deviation. These predictions will be tested directly in future measurements of parity violation [40] at Jefferson Lab following the 12 GeV upgrade. Within this approach one also finds a remarkably large nuclear modification of the spin-dependent parton distributions of the nucleon [41].
Again, future experiments planned at Jefferson Lab will test this through the measurement of the spin structure functions of light nuclei with an unpaired proton. In conclusion, we stress that while one can derive effective NN forces which can be used in traditional nuclear structure calculations, the underlying physics constitutes a new paradigm for nuclear theory. The quark clusters which occupy shell-model orbits in finite nuclei have internal structure which depends on the local scalar field; they are not immutable. This simple observation, which is entirely natural within the framework of QCD, explains the saturation of nuclear matter and the nuclear EMC effect and predicts a dramatic reduction in the Coulomb sum rule, as well as a multitude of other phenomena which will be subject to experimental study in the coming decade.

© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).

Acknowledgements

I am indebted to the many collaborators who have contributed to the understanding of this approach to nuclear structure, particularly P. A. M. Guichon, W. Bentz, I. Cloët, K. Saito, J. Stone and K. Tsushima. This work was supported by the University of Adelaide and by the Australian Research Council through the ARC Centre of Excellence for Particle Physics at the Terascale (CE110001104), an ARC Australian Laureate Fellowship (FL0992247) and DP150103101.

References

T. Ueda and A. E. S. Green, Phys. Rev. 174, 1304 (1968).
B. Ananthanarayan, G. Colangelo, J. Gasser and H. Leutwyler, Phys. Rept. 353, 207 (2001).
W. N. Cottingham, M. Lacombe, B. Loiseau, J. M. Richard and R. Vinh Mau, Phys. Rev. D 8, 800 (1973).
J. D. Walecka, Annals Phys. 83, 491 (1974).
B. D. Serot and J. D. Walecka, Phys. Lett. B 87, 172 (1979).
E. D. Bloom et al., Phys. Rev. Lett. 23, 930 (1969).
P. A. M. Guichon, Phys. Lett. B 200, 235 (1988).
A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D 9, 3471 (1974).
A. W. Thomas, Adv. Nucl. Phys. 13, 1 (1984).
A. W. Thomas, A. Michels, A. W. Schreiber and P. A. M. Guichon, Phys. Lett. B 233, 43 (1989).
P. A. M. Guichon et al., Nucl. Phys. A 601, 349 (1996).
K. Tsushima et al., Nucl. Phys. A 630, 691 (1998).
K. Tsushima, K. Saito and A. W. Thomas, Phys. Lett. B 411, 9 (1997) [Phys. Lett. B 421, 413 (1998)].
K. Saito, K. Tsushima and A. W. Thomas, Prog. Part. Nucl. Phys. 58, 1 (2007).
S. Weinberg, Nucl. Phys. B 363, 3 (1991).
D. Leinweber et al., arXiv:1511.09146 [hep-lat].
D. Vautherin and D. M. Brink, Phys. Rev. C 5, 626 (1972).
P. A. M. Guichon and A. W. Thomas, Phys. Rev. Lett. 93, 132502 (2004).
P. A. M. Guichon et al., Nucl. Phys. A 772, 1 (2006).
M. Dutra et al., Phys. Rev. C 85, 035201 (2012).
J. R. Stone, P. A. M. Guichon, P. G. Reinhard and A. W. Thomas, Phys. Rev. Lett. 116, 092501 (2016).
J. J. Aubert et al. [European Muon Collaboration], Phys. Lett. B 123, 275 (1983).
A. I. Signal and A. W. Thomas, Phys. Rev. D 40, 2832 (1989).
W. Bentz and A. W. Thomas, Nucl. Phys. A 696, 138 (2001).
I. C. Cloët, W. Bentz and A. W. Thomas, Phys. Lett. B 642, 210 (2006).
D. Wang et al. [PVDIS Collaboration], Nature 506, 67 (2014).
D. H. Lu, K. Tsushima, A. W. Thomas, A. G. Williams and K. Saito, Phys. Rev. C 60, 068201 (1999).
D. H. Lu, A. W. Thomas, K. Tsushima, A. G. Williams and K. Saito, Phys. Lett. B 417, 217 (1998).
S. Strauch et al. [Jefferson Lab E93-049 Collaboration], Phys. Rev. Lett. 91, 052301 (2003).
M. Paolone et al., Phys. Rev. Lett. 105, 072001 (2010).
J. M. Udias and J. R. Vignote, Phys. Rev. C 62, 034302 (2000).
K. Saito, K. Tsushima and A. W. Thomas, Phys. Lett. B 465, 27 (1999).
J. Morgenstern and Z. E. Meziani, Phys. Lett. B 515, 269 (2001).
K. W. McVoy and L. Van Hove, Phys. Rev. 125, 1034 (1962).
I. C. Cloët, W. Bentz and A. W. Thomas, Phys. Rev. Lett. 116, 032701 (2016).
A. Lovato, S. Gandolfi, R. Butler, J. Carlson, E. Lusk, S. C. Pieper and R. Schiavilla, Phys. Rev. Lett. 111, 092501 (2013).
A. Zghiche et al., Nucl. Phys. A 572, 513 (1994) [Nucl. Phys. A 584, 757 (1995)].
I. C. Cloët, W. Bentz and A. W. Thomas, Phys. Rev. Lett. 102, 252301 (2009).
I. C. Cloët, W. Bentz and A. W. Thomas, Phys. Rev. Lett. 109, 182301 (2012).
I. C. Cloët, W. Bentz and A. W. Thomas, Phys. Rev. Lett. 95, 052302 (2005).
New face of multifractality: Multi-branched left-sidedness and phase transitions in multifractality of interevent times

Jarosław Klamut, Ryszard Kutner, Tomasz Gubiec, Zbigniew R. Struzik

Faculty of Physics, University of Warsaw, Pasteur Str. 5, PL-02093 Warsaw, Poland
Center for Polymer Studies, Department of Physics, Boston University, Boston, MA 02215, USA
University of Tokyo, Bunkyo-ku, Tokyo 113-8655, Japan, and Advanced Center for Computing and Communication, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan

arXiv:1809.02674

Abstract. We develop an extended multifractal analysis based on the Legendre-Fenchel transform rather than the routinely used Legendre transform. We apply this analysis to studying time series consisting of inter-event times. As a result, we discern the non-monotonic behavior of the generalized Hurst exponent (the fundamental exponent studied by us) and hence a multi-branched left-sided spectrum of dimensions. This kind of multifractality is a direct result of the non-monotonic behavior of the generalized Hurst exponent and is not caused by non-analytic behavior as has been previously suggested. We examine the main thermodynamic consequences of the existence of this type of multifractality, related to the thermally stable, metastable, and unstable phases within a hierarchy of fluctuations, and also to the first and second order phase transitions between them.
PACS numbers: 89.65.Gh, 05.40.-a, 89.75.Da

I. INTRODUCTION

The concept of extended scale invariance, referred to as multifractality, has become a routinely applied but still intensively developed methodology for studying both complex systems [1-6] and nonlinear (e.g. chaotic, with a low number of degrees of freedom) dynamical ones [7].
This is an inspiring and rapidly evolving approach to nonlinear science in many different fields, stretching far beyond traditional physics [8]. The direct inspiration for the present work is our earlier results presented in papers [9,10]. In those publications we found left-sided multifractality on financial markets of small, medium, and large capitalization as a direct result of non-analytic behavior of the Rényi exponent. We indicated that a broad distribution of interevent times is responsible for the existence of left-sided multifractality. In the present work, we suggest that not only a broad distribution but primarily nonlinear long-term autocorrelations bear responsibility for the multifractality observed. Attention was first drawn to the existence of left-sided multifractality (generated by the binomial cascade, which produces a singularity in the Rényi exponent or stretched-exponential decay of the smallest coarse-grained probability) by Mandelbrot and coauthors [11,12]. An interesting breakdown of multifractality in diffusion-limited aggregation was discovered by Blumenfeld and Aharony [13]. They found strongly asymmetric singularity spectra depending on the size of the growing aggregate in DLA, showing a clear tilt to the left as a signature of a phase transition to non-multifractality. Earlier, multifractals with an ill-defined right part of the spectrum of singularities (caused by a phase transition) were mimicked by a random version of the paradigmatic two-scale Cantor set and also in the context of DLA [14-17] (and refs. therein).

* [email protected]
In recent years much effort has been devoted to the reliable identification of multifractality in real data coming from various fields, such as geophysics [18], seismology [19] (including hierarchical cascades of stresses in earthquake patterns [20,21]), atmospheric science and climatology (e.g., turbulent phenomena) [22,23], financial markets [24,25], neuroscience [26] (e.g., neuron spiking [27]), cardio-science or cardiophysics [28] (e.g., the physiology of the human heart [29] and refs. therein), and further work investigating complexity in heart rate [30,31] and physiology [32]. However, the identification of multifractality is still a challenge, because there are many circumstances in which an apparent (spurious) multifractality appears. Recognizing true multifractality is all the more difficult because we are not sure that all sources of multifractality have been discovered to date [33], because one has to deal with physical multifractality of limited range, and because the limited amount of empirical data available is a serious technical challenge. These last two hurdles are finite-size effects, which one can sometimes disarm by finite-size scaling. The spontaneous volatility clustering (present even for a single realization of the Poisson random walk in a finite time range) can hinder the identification of true, significant, and stable multifractality. There are also other difficulties with this identification, especially when nonlinear properties of time series are studied. A spurious multifractality can also arise as a result of a very slow crossover phenomenon on finite time scales [34]. In addition, the pollution of a multifractal signal with noise (white or colored), as well as the presence of short memory or periodicity, can significantly change the properties of the multifractal signal. Unfortunately, the physical origin of multifractality is, in fact, rarely identified.
Only two sources of true multifractality have been identified to date [33]: (i) the presence in the system of broad distributions and/or (ii) long-term/long-range correlations. However, there is a widespread belief that some stochastic or deterministic mixture of monofractals should also produce multifractals [7,18,24]. All of these are able to produce the cascades that lie at the heart of multifractality. Incontrovertibly, the situation is complicated. Nevertheless, we demonstrate, by studying time series of interevent times, that the extraction of true multifractality is possible in this important case. Notably, the multifractality of series of interevent times has been researched only rarely and superficially, despite the key role played by the dependence between interevent times. In this work we attempt to fill this deficiency, which is all the more striking because interevent (or waiting) times are an essential element of the popular continuous-time random walk formalism [35-37]. Financial markets fluctuate, sometimes strongly, increasing the risk level in the pursuit of profit. This finds its reflection in the patterns of interevent times, which act as a direct reflection of the system's activity; their various properties were studied over the last decade [1,9,10,38-43]. Among them, the key observation is that quite often the dependence between waiting times dominates that between the spatial increments [44] defining the process, which therefore cannot be considered a renewal process [45]. Since without examining the role of interevent times we are not able to describe the dynamics of financial markets, these studies are still at an early stage of development. This situation is the motivation and inspiration for our work, emphasizing the above-mentioned key role of interevent times. In this work, we study fluctuations of mean interevent times and their dependences by relying on their absolute central moments and on autocorrelations of the fluctuations' absolute values.
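The binomial cascade invoked above is the simplest concrete generator of such a multiplicative hierarchy; a minimal sketch (the weight p = 0.7 and the depth are arbitrary illustrative choices, not parameters from this work):

```python
import numpy as np

def binomial_cascade(p=0.7, n_levels=10):
    """Deterministic binomial cascade: at each level every cell splits into
    two children carrying fractions p and 1-p of its measure."""
    measure = np.array([1.0])
    for _ in range(n_levels):
        # children [p*m, (1-p)*m] for each parent cell m
        measure = (measure[:, None] * np.array([p, 1.0 - p])).ravel()
    return measure

mu = binomial_cascade()
assert np.isclose(mu.sum(), 1.0)   # the measure is conserved at every level

# Exact multiscaling: sum_i mu_i^q = (p^q + (1-p)^q)^n, i.e. the Renyi
# exponent tau(q) = -log2(p^q + (1-p)^q) depends nonlinearly on q.
p, n = 0.7, 10
for q in (0.5, 1.0, 2.0, 3.0):
    assert np.isclose((mu ** q).sum(), (p ** q + (1 - p) ** q) ** n)
```

The nonlinear q-dependence of the partition-function exponent verified in the last loop is precisely what distinguishes a multifractal cascade from a monofractal measure.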
In the case of financial markets, the fluctuations are (generally speaking) a consequence of the double-auction mechanism (represented by the book of orders) [46-48]. The approach allows one to order the fluctuations according to the degree of their corresponding moments (cf. the Lyapunov inequality in ref. [49]). This is essential in multiscaling analysis in many branches of science. Our approach is based on the fluctuation function, similarly to canonical multifractal detrended fluctuation analysis (MF-DFA), but we correctly take into account the normalized partition function. This partition function is built on the basis of the escort probability, or normalized fluctuation function. Such an approach is crucial for correctly reading the generalized scaling exponents from empirical data. We have demonstrated that the multifractality obtained is real and not apparent, the latter being forced mainly by finite-size effects. Moreover, we obtained multi-branched left-sided multifractality, in which first and second order phase transitions exist together with thermally stable and unstable phases. Nevertheless, it is still a challenge to find the microscale physical mechanisms underlying this multifractality. We expect this to play a significant role in the future analysis of real time series of different origins, e.g. geophysical, medical, and financial. In addition to the above, the non-monotonic behavior of the generalized Hurst exponent, producing turning points in the behavior of the Hölder exponent, is directly responsible for the multi-branched left-sided spectrum of dimensions and for the first and second order phase transitions, together with the thermally stable and unstable multifractal phases. To find this multi-branched spectrum on the financial market (in an alternative way to that used in papers [9,10]) and perform the analysis, the use of the Legendre-Fenchel transform is required.
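A minimal numerical sketch of the Legendre-Fenchel transform, f(α) = inf_q [qα − τ(q)], evaluated here on a toy binomial-cascade Rényi exponent rather than on the empirical τ(q) of this work (all parameter values and the function name are illustrative):

```python
import numpy as np

def legendre_fenchel(q_grid, tau, alpha_grid):
    """Discrete Legendre-Fenchel transform: f(alpha) = min_q [q*alpha - tau(q)].
    Unlike the pointwise Legendre transform, this inf/sup form stays well
    defined even where tau(q) is not smooth or strictly convex."""
    # one column per alpha value; minimize over the sampled q's
    return np.min(q_grid[:, None] * alpha_grid[None, :] - tau[:, None], axis=0)

p = 0.7
q = np.linspace(-15.0, 15.0, 3001)
tau = -np.log2(p ** q + (1 - p) ** q)          # binomial-cascade Renyi exponent

# Holder exponents range between -log2(p) (q -> +inf) and -log2(1-p) (q -> -inf)
alpha = np.linspace(-np.log2(p), -np.log2(1 - p), 201)
f = legendre_fenchel(q, tau, alpha)

assert f.max() <= 1.0 + 1e-6     # spectrum bounded by the support dimension 1
assert abs(f.max() - 1.0) < 1e-3 # ... and reaches it at alpha(q = 0)
```

For a smooth concave τ(q) this reproduces the usual single-humped spectrum; the same sup/inf construction is what remains usable when the generalized Hurst exponent, and hence τ(q), behaves non-monotonically.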
This transform is a generalization of the Legendre transform commonly used to extract the usual (single-branched) multifractality from empirical data. The paper is organized as follows. In addition to the present section, where we give the motivation for our work and its goal, indicating a possibility of extending our approach to research areas far beyond the social sciences, it consists of Sec. II, where the extension of the canonical MF-DFA is developed and applied to the description of the poorly exploited empirical time series of inter-event times. In Sec. III, we reveal the existence of first and second order phase transitions in this type of multifractality and examine the main thermodynamic consequences. Finally, in Sec. IV, we discuss the key results of the work, indicate their importance, and summarize the whole work.

II. NORMALIZED MULTIFRACTAL DETRENDED FLUCTUATION ANALYSIS

As is well known, multifractality occurs where fluctuations and/or dependences occur on many different spatial and/or temporal scales under different scaling laws, i.e. laws defined by various scaling exponents, which together create a multiscaling phenomenon. For example, multifractality can be caused by long-term dependence (e.g. temporal nonlinear long-term autocorrelations) and/or by some broad distributions, leading to the hierarchical organization of many scales. The identification of multifractality in empirical data requires caution, not only due to the finite-size effect [50] and crashes [51] (i.e., strong nonstationarities), but also because of the presence of spurious [52] and/or corrupted multifractality [53]. Fortunately, because multifractality is sensitive to these effects (or contaminations), they can be properly identified and eliminated, or at least minimized. The main purpose of this work is the analysis of multifractality generated by the non-monotonic behavior of the generalized Hurst exponent.
Perhaps this structure reflects the fact that after a period of high market activity there is a period of significantly lesser activity, and so on in an alternating fashion, leading to the effect of volatility clustering.

A. Intraday fluctuations of inter-event times

The intraday autocorrelation of the absolute additively detrended profile is defined, for a single typical day (or replica) ν and an arbitrary time scale s, as
\[
F^2(j;\nu,s)=\frac{1}{s-j}\sum_{i=1}^{s-j}\bigl|U_\nu(i)-y_\nu(i)\bigr|\,\bigl|U_\nu(i+j)-y_\nu(i+j)\bigr|,\qquad \nu=1,\ldots,N_d, \tag{1}
\]
where j = 0, …, s − 1 defines the time-step distance, i.e., the number of time windows of length ∆ (see the schematic plot in Fig. 1 for details; in the original work [33] this length was denoted by s) between the two absolute deviations (detrended fluctuations) |U_ν − y_ν| taken on day ν at time steps i and i + j (1 ≤ i ≤ s numbers the current time window, y_ν is the detrending polynomial, and the variable U_ν is defined below). Here s is the total number of time windows within an arbitrary replica ν (the same for each one), defining the daily time scale, 1 ≤ ν ≤ N_d, and N_d is the number of trading days (replicas). The detrending polynomial reads
\[
y_\nu(i)=\sum_{m=0}^{M}A^{m}_{\nu}\,i^{M-m},\qquad M\ge 0, \tag{2}
\]
where in all our further considerations we assume M = 3, as it is the lowest polynomial order able to reproduce the inflection point present in the majority of empirical profiles, Y vs s, resulting from the common intraday 'lunch effect'. Apparently, for j = 0 the detrended autocorrelation function (in this case also called the detrended self-correlation function) becomes the detrended fluctuation function. Therefore, we can introduce the notation $F^2(\nu,s)\stackrel{\mathrm{def.}}{=}F^2(j{=}0;\nu,s)$. The single-day (ν-day) profile U_ν is defined by the corresponding difference between subsequent multi-day profiles Y. This difference equals the cumulation of single-window mean inter-event times, ∆t_i^ν, shown in plot (a) of Fig. 2 (in our considerations we deal, in fact, with ∆t_i^ν ≤ ∆).
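As a concrete illustration, the detrending of Eq. (2) and the autocorrelation of Eq. (1) can be sketched as follows. This is a minimal Python sketch; the function and variable names are ours, not the authors':

```python
import numpy as np

def detrended_autocorrelation(U, j, M=3):
    """Eq. (1): average of |U(i)-y(i)| * |U(i+j)-y(i+j)| over i = 1..s-j,
    after removing a degree-M polynomial trend y (Eq. (2); M = 3 in the paper,
    the lowest order reproducing the intraday inflection point)."""
    s = len(U)
    i = np.arange(1, s + 1, dtype=float)
    y = np.polyval(np.polyfit(i, U, M), i)  # detrending polynomial y_nu
    dev = np.abs(U - y)                     # detrended fluctuations |U - y|
    return np.mean(dev[:s - j] * dev[j:])   # j = 0 gives the fluctuation function
```

For j = 0 this reduces to the detrended fluctuation function F²(ν, s) used throughout the rest of the section.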
That is, we use the definitions
\[
U_\nu(i)=Y[(\nu-1)s+i]-Y[(\nu-1)s]=\sum_{i'=1}^{i}\Delta t^{\nu}_{i'}, \tag{3}
\]
where Y[0] = 0 and, for the first replica, $U_{\nu=1}(i)=Y[i]=\sum_{i'=1}^{i}\Delta t^{\nu=1}_{i'}$. Apparently, Eq. (3) makes it possible to determine the multi-day profile recurrently,
\[
Y[(\nu-1)s+i]=\sum_{\nu'=1}^{\nu-1}\sum_{i'=1}^{s}\Delta t^{\nu'}_{i'}+\sum_{i'=1}^{i}\Delta t^{\nu}_{i'},\qquad \nu\ge 2. \tag{4}
\]
Eq. (3) can be interpreted in terms of a directed (climbing) random walk (see the monotonically increasing broken curve drawn in plot (b) of Fig. 2). For all plots in this work we use tick-by-tick transaction data for KGHM (a copper and silver producer), one of the most liquid stocks on the Warsaw Stock Exchange, from 3rd January 2011 to 14th July 2017 (1620 trading days). Fig. 2 is intended to show the daily structure of the empirical data (including the detrended data). Notably, a typical intraday pattern of single-day mean inter-event times, ∆t_i^ν, of transactions falling into the ith time window of day ν is shown vs the window number i in plot (a). Data bursts, that is, spikes or explosions protruding by no less than one standard deviation (solid strongly oscillating curve) above the average value (solid weakly oscillating curve), are well seen. Typically, local clusters of spikes around their local maxima are well visible. These clusters are separated by periods of high system activity, where the shortest inter-event times can be observed, not only close to lunchtime. We observe, more or less every 100 (= 20 × (∆ =) 5) minutes, spikes of locally longest inter-event times, that is, a decrease in the activity of market investors four times a day (twice before noon and twice after). Such a long-term pattern constitutes a source of the generalized volatility clustering effect within detrended mean inter-event time series, and hence of their multifractality. The detrended, weakly non-stationary ν-day profile U_ν(i) − y_ν(i) is shown in plot Fig. 2(c), and its square in plot Fig. 2(d).
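Eqs. (3)-(4) amount to per-day and global cumulative sums of the single-window mean inter-event times. A small sketch, with our own naming and assuming the ∆t-values are stored as an N_d × s array:

```python
import numpy as np

def daily_profiles(dt):
    """Eq. (3): single-day profiles U_nu(i) as within-day cumulative sums of
    the single-window mean inter-event times dt[nu, i]."""
    return np.cumsum(dt, axis=1)

def multiday_profile(dt):
    """Eq. (4): the recurrent multi-day profile Y, i.e. the cumulative sum of
    all windows of all days laid end to end (a directed random walk)."""
    return np.cumsum(dt.ravel())
```

By construction, U_ν(i) = Y[(ν−1)s+i] − Y[(ν−1)s], which can serve as a consistency check between the two functions.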
We say 'weakly' because it is a remnant after removing the main non-stationarity, the trend, that is, the component of the lowest frequency. The oscillatory (or waving) behavior in the former plot can be interpreted as a reminiscence of the volatility clustering effect. It is present even for the structure which appears as an average over the statistical ensemble of days (see Fig. 3(a)). We suppose it is caused by negatively correlated clusters of the inter-event times (a negative feedback of clusters). That is, after a cluster of short inter-event times, a cluster of long ones appears, leading alternately to negative and positive deviations, respectively. Abnormally short and abnormally long inter-event times, appearing alternately, additionally enhance this intraday pattern. Notably, the volatility clustering phenomenon is the basis herein for the creation of the clusters' structure mentioned above. The volatility clustering effect means that a series of transactions occurring with higher frequencies is preceded by much less frequent transactions and vice versa; thus the volatility clustering effect essentially extends to clusters of inter-event times.

[Displaced fragments of the FIG. 2 caption:] ...that is, the mean inter-event time ∆t_i^ν (see Fig. 1 for its definition), together with its average over days, $\overline{\Delta t_i}=\frac{1}{N_d}\sum_{\nu=1}^{N_d}\Delta t^{\nu}_{i}$ (solid curve), and its dispersion, $\sigma_i=\bigl[\frac{1}{N_d}\sum_{\nu=1}^{N_d}\bigl(\Delta t^{\nu}_{i}-\overline{\Delta t_i}\bigr)^2\bigr]^{1/2}$. Plots (e)-(g) show the autocorrelators |U_ν(i) − y_ν(i)||U_ν(i+j) − y_ν(i+j)| for typical values j = 1, 10, 47, which are quite similar. Plot (h), ending this series of plots, also extracts the wavy structure of the autocorrelation's fluctuations. For all other days, the j-dependence of the autocorrelation function looks similar, although the local minima and maxima may be distributed somewhat differently and have slightly different amplitudes.

Behavior analogous to that in plot Fig. 2(d) is shown in plots Fig. 2(e) - Fig.
2(g), where the autocorrelators |U_ν(i) − y_ν(i)||U_ν(i + j) − y_ν(i + j)| are shown vs i for chosen typical values of j. It is apparent how similar the autocorrelators' behavior for several fixed values of j is to the square of the detrended profile (shown in plot Fig. 2(d)), suggesting that the autocorrelators are, in fact, controlled by the fluctuations' structure (that is, by j = 0). It is striking how stable their patterns are against j, even for its large values. Besides the above-mentioned plots, we also consider plot Fig. 2(h), showing the autocorrelation function defined by Eq. (1). This result is particularly interesting, as it reveals an oscillatory (and not only relaxing) character of the one-day autocorrelation function. Hence, we found fluctuation-induced patterns of inter-event times based on a possible long-term dependence between local fluctuations (see Fig. 3 for details), which constitutes the main interest of this work. Note that the plots in Fig. 3 present key quantities averaged over the statistical ensemble of days, whose individual realizations are given in the corresponding plots in Fig. 2.

[Displaced continuation of the FIG. 3 caption:] ... = 2556, while ⟨|U − y|⟩² was obtained independently. Note that the upper dashed-dotted horizontal line represents ⟨(U − y)²⟩, while the bottom one ⟨|U − y|⟩², both obtained with high accuracy. The dotted horizontal line represents the const given above. Perhaps its location (above ⟨|U − y|⟩²) is due to the existence of a structure, e.g., such as that shown in Fig. 3(a). (d) The solid curve shows the j-dependence of the usual autocorrelation function $\tilde F^2(j;s)=\frac{1}{N_d}\sum_{\nu=1}^{N_d}\tilde F^2(j;\nu,s)$, where $\tilde F^2(j;\nu,s)$ is defined by an expression analogous to Eq. (1) but with ordinary differences instead of their absolute values. This function oscillates rapidly and vanishes quickly in comparison with ⟨F²(j; s)⟩. (e) The j-dependence of the autocorrelation function ⟨F²(j; s)⟩ (solid curve) for the empirical weakly non-stationary time series (here multiplicatively standardized by their standard deviations; see the plot in Fig. 2(a)). Analogously to plot (c), the power-law fit is represented by the dashed curve, with fit parameters A = 7.28 ± 12.42, a = 3.25 ± 2.15, α = 1.39 ± 0.70, const = 2.46 ± 0.05, leading to analogous conclusions (even though the exponent here seems to be greater than 1.0). (f) The decay of the autocorrelation function for the Poisson process as a function of j. Here the exponential function A exp(−a j) + const is fitted to the empirical data (dashed curve), with A = 12.06 ± 0.08, a = 0.496 ± 0.031, const = 23.12 ± 0.08. Unfortunately, the range of empirical data is too short (limited to half a trading day in plots (c)-(f)) to say anything more definite, although we can suppose that the very slow convergence (shown in plots (c) and (e)) and the existing structure manifested in const > ⟨|U − y|⟩² are, indeed, the main causes of the multifractality considered in this paper.

It seems that the autocorrelation function ⟨F²(j; s)⟩ relaxes slowly. This results from its construction on the basis of absolute values of deviations (fluctuations), which are always nonnegative. It contains some information about the existence of a long-term antipersistent structure, which makes the autocorrelation a non-vanishing quantity. We have grounds, however, to suppose that the antipersistent (quasi-periodic) fluctuation structure is the result of the existence of a long-term dependence or correlations between the fluctuations; they are the reason for the creation of this structure and not the other way round. Thus the weakly non-stationary structure is produced. There is also a complementary interesting aspect shown in Fig. 3(b), where the empirical histogram of deviations U − y is compared with the corresponding one obtained from the Poisson distribution (by drawing the number of transactions in each time window ∆ separately).
A huge widening of the empirical histogram relative to the one obtained from the Poisson distribution is clearly seen. Notably, for this distribution the number of transactions in each time interval i of the same length ∆ is drawn, where the only control parameter, i.e., the mean number of transactions, is taken from the empirical data, the same for the whole multi-day time series. On this basis the local (fluctuating) mean inter-event time ∆t_i^ν associated with each interval i and day ν is determined (see Fig. 1 for additional details). Thus, in our case the source of multifractality can be not only nonlinear long-term autocorrelations of the absolute deviations |U − y| but also the broadened distribution of the deviations U − y. One can say that the results presented in Figs. 2 and 3 are the starting point of the following considerations.

B. Generalized partition function

An escort probability (that is, one escorting the fluctuations) specifies the chance of occurrence of a certain fluctuation value for a given day ν within scale s. This probability can be constructed in the form
\[
p(\nu,s)=\frac{\bigl[F^2(\nu,s)\bigr]^{1/2}}{\mathrm{Norm}(s)},\qquad \mathrm{Norm}(s)=\sum_{\nu=1}^{N_d}\bigl[F^2(\nu,s)\bigr]^{1/2}, \tag{5}
\]
that is, based on the fluctuation function defined by Eq. (1). Hence, the mean value $p(s)=\frac{1}{N_d}\sum_{\nu=1}^{N_d}p(\nu,s)=1/N_d$ is fixed (as a result of normalization). An even more refined approach, based on a q-zooming escort probability, has been shown in [57]. The generalized q-dependent (statistical-mechanical) partition function can be defined, as usual, by the sum
\[
Z_q(s)\stackrel{\mathrm{def.}}{=}\sum_{\nu=1}^{N_d}\bigl[p(\nu,s)\bigr]^{q}. \tag{6}
\]
Assuming the scaling hypothesis central to our work for the fluctuations in the form
\[
\sum_{\nu=1}^{N_d}F^2(\nu,s)^{q/2}\approx N_d\,A_q\,s^{qh(q)}, \tag{7}
\]
where the prefactor A_q and the generalized Hurst exponent h(q) are s-independent, one derives from Eqs. (5)-(7)
\[
Z_q(s)\approx \frac{1}{N_d^{\,q-1}}\,A^{rel}_{q}\,s^{qh^{rel}(q)}, \tag{8}
\]
where the relative (or reduced) prefactor $A^{rel}_{q}=A_q/(A_{q=1})^{q}$ and the relative (or reduced) generalized Hurst exponent $h^{rel}(q)=h(q)-h(q{=}1)$. Note that the scaling hypothesis (7) allows one to present Norm(s) (given by the second equality in Eq. (5)) in the form
\[
\mathrm{Norm}(s)=\sum_{\nu=1}^{N_d}\bigl[F^2(\nu,s)\bigr]^{1/2}\approx N_d\,A_{q=1}\,s^{h(q=1)}, \tag{9}
\]
used, indeed, to obtain Eq. (8). Because we are considering a statistical ensemble consisting of N_d replicas, from Eq. (6) we have $Z_{q=0}(s)=N_d \Leftrightarrow A_{q=0}=1$. Finally, we can write Eq. (8) in the useful forms
\[
Z_q(s)\approx \frac{1}{N_d^{\,q-1}}\,A^{rel}_{q}\,s^{\tau^{rel}(q)}=Z^{lin}_{q}(s)\,\tilde Z_q(s), \tag{10}
\]
where
\[
Z^{lin}_{q}(s)=\frac{1}{N_d^{\,q-1}}\,A^{rel}_{q}\,s^{-(q-1)D(q=0)},\qquad \tilde Z_q(s)=s^{\tau(q)}, \tag{11}
\]
and the relative (or reduced) scaling exponent $\tau^{rel}(q)=qh^{rel}(q)=(q-1)D^{rel}(q)$, and the relative (or reduced) Rényi dimension $D^{rel}(q)=D(q)-D(q{=}0)$, while D(q) is the Rényi dimension defined, as usual, by the scaling exponent τ(q),
\[
\tau(q)=(q-1)D(q), \tag{12}
\]
where
\[
\tau(q)=qh(q)-D(q{=}0); \tag{13}
\]
here and above we assume D(q=0) = h(q=1) for self-consistency. Thanks to this, not only τ^{rel} but also τ can be expressed in the required form by means of D^{rel} and D, respectively. Hence, we have $\tau^{rel}(q)=\tau(q)-(q-1)D(q{=}0)$, vanishing at q = 1 and q = 0. Therefore, all the relative quantities defined above (and indexed by 'rel') vanish at q = 1 or q = 0, which justifies their relative character. Moreover, the partial partition functions $Z^{lin}_{q}$ and $\tilde Z_q$ are normalized separately, and the factorization given by Eq. (10) (up to a multiplicative prefactor and additive exponents) is unique. These partition functions represent statistically independent monofractal and multifractal structures, respectively. We pay attention to the latter one. There are several characteristic values of q, of which two (q = 1 and q = 0) are considered in this section.
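For concreteness, the escort probabilities of Eq. (5) and the partition function of Eq. (6) can be sketched as follows (a minimal Python sketch with our own naming; `F2` stands for the per-day fluctuation functions F²(ν, s) at a fixed scale s). By normalization Z₁(s) = 1 and, for N_d replicas, Z₀(s) = N_d, as used in the text:

```python
import numpy as np

def escort_probability(F2):
    """Eq. (5): p(nu, s) = F2(nu, s)^(1/2) / Norm(s), normalized over the days."""
    w = np.sqrt(np.asarray(F2, dtype=float))
    return w / w.sum()

def partition_function(F2, q):
    """Eq. (6): Z_q(s) = sum over nu of p(nu, s)^q."""
    return np.sum(escort_probability(F2) ** q)
```

The two limiting checks Z₁ = 1 and Z₀ = N_d follow directly from the normalization of p(ν, s).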
For q → 1 one can write the expansion
\[
\tau(q)\approx (q-1)\Bigl[h(q{=}1)+q\,\frac{dh(q)}{dq}\Big|_{q=1}+\frac{1}{2}\,q(q-1)\,\frac{d^2h(q)}{dq^2}\Big|_{q=1}\Bigr], \tag{14}
\]
based on the expansion of h(q) at q = 1, where the expression in square brackets is, indeed, D(q):
\[
D(q)\approx D(q{=}0)+q\,\frac{dh(q)}{dq}\Big|_{q=1}+\frac{1}{2}\,q(q-1)\,\frac{d^2h(q)}{dq^2}\Big|_{q=1}, \tag{15}
\]
or equivalently
\[
D^{rel}(q)\approx q\,\frac{dh(q)}{dq}\Big|_{q=1}+\frac{1}{2}\,q(q-1)\,\frac{d^2h(q)}{dq^2}\Big|_{q=1}. \tag{16}
\]
The expansion in Eq. (16) emphasizes that D^{rel}(q) depends (in the vicinity of q = 1) on the successive derivatives of the generalized Hurst exponent at q = 1 as parameters. For instance, combining Eqs. (6) and (10), we obtain the expression
\[
D^{rel}(q{=}1)=\frac{1}{\ln s}\sum_{\nu=1}^{N_d}p(\nu,s)\ln p(\nu,s)=\frac{1}{\ln s}\bigl\langle \ln p(\nu,s)\bigr\rangle=\frac{1}{\ln s}\,I_{q=1}(s), \tag{17}
\]
where I_{q=1}(s) can be identified with the Shannon information (within the scale s). Thus we define two families of q-dependent quantities: relative and non-relative (usual) ones. For example, the Rényi dimensions (i.e., D^{rel}(q) and D(q)) play a different role for q = 0 than in the canonical approach to multifractality. In the canonical approach, one can read directly from the scaling relation for the partition function that D(q=0) is the fractal dimension of the substrate (i.e., the support of the measure or function). Since in our approach the Rényi dimensions enter the generalized partition function only in a relative way, such a diagnosis does not take place. Therefore, D(q=0) does not have to be a fractal dimension of the substrate, and D^{rel}(q=0) even vanishes. For this reason, the D(q) family should rather be called pseudo-Rényi dimensions, while the D^{rel}(q) family the relative or reduced one (despite the fact that for q = 0 both families have the usual interpretations).

C. Legendre-Fenchel transformation and multi-branched left-sided multifractality

In this section, we show that although the spectra of dimensions built on τ and τ^{rel} have the same shape, their locations within the multifractal coordinates are different.
We prove that the spectrum of dimensions built on the scaling exponent of fluctuations, τ, has the proper location. Directly from Eq. (7) we obtain
\[
\ln F_q(s)\approx h(q)\ln s+q^{-1}\ln A_q, \tag{18}
\]
where the q-dispersion $F_q(s)\stackrel{\mathrm{def.}}{=}\Bigl[N_d^{-1}\sum_{\nu=1}^{N_d}F^2(\nu,s)^{q/2}\Bigr]^{1/q}$. Using the dependence of F_q(s) on the scale s (see Fig. 4 for details) for values of q from the wide range −10.0 ≤ q ≤ 10.0, we have determined both the generalized Hurst exponent h(q) and the related signatures of multifractality, such as the Rényi scaling exponent τ(q), the Rényi dimensions D(q), the coarse Hölder exponent α(q), and the multifractal spectrum f(α) (see plots (a), (c), (d), (e), (f), respectively, in Fig. 5), as well as the significant prefactor $B(q)\stackrel{\mathrm{def.}}{=}q^{-1}\ln A(q)$ (see plot (b) in Fig. 5), related to the reduced Rényi information. It is worth emphasizing that aggregating events into time intervals of the same length (∆; see Fig. 1 for details) may have an influence on the analysis. Namely, if the intervals are too short with respect to the average waiting time between consecutive events, then too many intervals will be empty. On the contrary, if the intervals are too long, the aggregation of too many points may lead to a loss of information on the time structure of the process. Indeed, the analysis shown in Fig. 4 proposes a solution to this problem, showing that the appropriate range of ∆ is the one in which the scaling effect is observed, here on F_q(s) vs s (= T/∆), where T = 7 h 50 min; for all values of q we have the common range 3 min 55 s ≤ ∆ ≤ 19 min 35 s. For this range of s, the χ² measure per degree of freedom reaches its smallest value. This quantity is only slightly larger when the left border of s is assumed to be s = 10 (≡ ∆ = 47 min), while the right one is s = 150 (≡ ∆ = 3 min 8 s). Finally, having the scaling exponent τ(q), the spectrum of dimensions can be found by using the Legendre-Fenchel transformation instead of the Legendre transform, although formally both look the same:
\[
\alpha(q)\stackrel{\mathrm{def.}}{=}\frac{d\tau(q)}{dq},\qquad f(\alpha)\stackrel{\mathrm{def.}}{=}q\,\alpha(q)-\tau(q), \tag{19}
\]
hence
\[
q=\frac{df(\alpha)}{d\alpha}\qquad\text{and}\qquad f=-\frac{d\bigl(\tau(q)/q\bigr)}{d\bigl(1/q\bigr)}, \tag{20}
\]
where α is a local dimension (the singularity or Hölder exponent; its q-dependence is shown in Fig. 5(f)), while f(α) is its distribution, shown in plots (a) and (b) in Fig. 6. For a monofractal structure the scaling exponent τ(q) is a linear function of q, while for a multifractal it is a nonlinear one. It must be clearly stated that, due to the nonmonotonic dependence of the generalized Hurst exponent h(q) on q, the spectrum of dimensions f(α) is a multi-branched function of the Hölder exponent α (see again plots (a) and (b) in Fig. 6 for details). Recall that the Legendre transformation deals only with monotonic functions h(q). From this point of view, Eqs. (19) and (20), although formally identical to the Legendre transform, are its generalization. The Legendre transform is limited here only to the main branch of the spectrum f, defined by its contact relations: (i) f(α(q=1)) = α(q=1) and (ii) df/dα|_{α(q=1)} = 1. The inset plot shown in Fig. 6(a) illustrates this contact character. It is emphasized by a dashed straight line with a directional coefficient (slope) of 1.0, tangent to the spectrum of singularities at the point [α(q=1), f(α(q=1))]. Breaking the contact character of the Legendre transformation results in a wrong location of the spectrum of singularities, if it exists. Put more generally, the contact relations given above (for q = 1) provide an unambiguous location of the full multi-branched spectrum of dimensions obtained using the Legendre-Fenchel transformation. Our multi-branched multifractal contains a single contact point, which means that we are dealing here with a single multifractal. Figs. 6(a) and (b) show a significant result, because it offers a necessary (though not sufficient) requirement for finding true multifractality in empirical time series. Thanks to the above, we can clarify the key term 'multi-branched left-sided multifractality'.
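Numerically, the chain from the scaling law (18) through τ(q) = qh(q) − D(0) (with D(0) = h(1)) to the pair (α, f) of Eq. (19) can be sketched with log-log fits and finite differences. All names below are ours; q = 0 is excluded from the fit, and for a nonmonotonic h(q) the resulting (α, f) curve traces a multi-branched spectrum:

```python
import numpy as np

def generalized_hurst(F2, scales, q):
    """h(q) from a log-log fit of F_q(s) ~ s^h(q) (Eq. (18)), where the
    q-dispersion F_q(s) = (mean over days of F2(nu, s)^(q/2))^(1/q), q != 0.
    F2 has shape (N_d, len(scales))."""
    Fq = np.mean(F2 ** (q / 2.0), axis=0) ** (1.0 / q)
    return np.polyfit(np.log(scales), np.log(Fq), 1)[0]

def tau_from_h(qs, hs):
    """tau(q) = q*h(q) - D(0), with D(0) = h(q=1) for self-consistency;
    the grid qs must contain (a point very close to) q = 1."""
    return qs * hs - hs[np.argmin(np.abs(qs - 1.0))]

def legendre_fenchel(qs, taus):
    """Eq. (19): alpha = dtau/dq (central differences), f = q*alpha - tau."""
    alpha = np.gradient(taus, qs)
    return alpha, qs * alpha - taus
```

A monofractal check: if F²(ν, s) = s^{2H} for every day, then h(q) = H for all q, τ(q) = (q − 1)H, and the spectrum collapses to the single point (α, f) = (H, H).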
We say that we are dealing with this type of multifractality if its main branch (that is, the branch that meets the contact condition) is fully determined only by the positive values of q.

III. FIRST AND SECOND ORDER PHASE TRANSITIONS

From Eq. (19) one can obtain a very useful expression for the specific heat of the multifractal structure [10] (and refs. therein) in the form
\[
c(q)=\frac{d\alpha(q)}{d(1/q)}=-q^{2}\,\frac{d\alpha(q)}{dq}; \tag{21}
\]
its q-dependence is shown in Fig. 6(c). Only two regions are visible in which the system is thermally stable, i.e., fulfilling the inequality c(q) ≥ 0. One of them is located between the vertical dashed lines a and c, or points A1 and C, defining the q-range of the main branch of the left-sided spectrum of dimensions shown in plots (a) and (b). The second, the side thermally stable left-sided spectrum of dimensions, is limited to the range of q preceding the vertical dashed line b, or point B1. For q ranging between points B2 and A1 and after point C, we deal with thermally unstable phases. At the turning (bifurcation) points B1, A1, and C, there are phase transitions of the second order between thermally stable and unstable phases, which is consistent with the specific heat vanishing there. Between points X1 and X2, located in thermally stable phases, a first order (discontinuous) phase transition occurs. To prove the statements given above concerning the order of the phase transitions, we study the behavior of the first, df/dα, and second, d²f/dα², derivatives vs α, based on the result presented in Fig. 5(f). Using the Taylor expansion of the function α(q) in the vicinity of its local extremes, we obtain
\[
\alpha(q)\approx \alpha(q_{extr})+\frac{1}{2}\,(q-q_{extr})^{2}\,\frac{d^{2}\alpha}{dq^{2}}\Big|_{q=q_{extr}}, \tag{22}
\]
where q_{extr} is the q-position of a local extreme or turning point of the function α(q). There are three such local extrema: one maximum, A1, and two minima, B1 and C. Inverting Eq.
(22) and using the first equation in (20), after simple algebraic calculations we obtain the useful two-branched formulas
\[
\frac{df}{d\alpha}\approx \pm\sqrt{2\,\Bigl|\frac{\alpha-\alpha_s}{\ddot\alpha_s}\Bigr|}+q_{extr},\qquad
\frac{d^{2}f}{d\alpha^{2}}\approx \pm\frac{1}{\sqrt{2\,|\ddot\alpha_s|}}\,\frac{1}{\sqrt{|\alpha-\alpha_s|}}, \tag{23}
\]
where we use the abbreviated notation $\alpha_s=\alpha(q_{extr})$ and $\ddot\alpha_s=\frac{d^{2}\alpha}{dq^{2}}\big|_{q=q_{extr}}$. Apparently, the spectrum of dimensions f has singularities of the second order at its turning points (see plots (a) and (b) in Fig. 6 for details). Moreover, by substituting the formula given by Eq. (22) into Eq. (21), we obtain
\[
c(q)\approx -q^{2}\,(q-q_{extr})\,\ddot\alpha_s, \tag{24}
\]
i.e., it vanishes linearly at the turning points, which can therefore be considered spinodal decomposition points. Additionally, at the point of intersection of branches, marked doubly by X1, X2, a first order phase transition is present. Fig. 7 shows the behavior of the first (df/dα) and second (d²f/dα²) order derivatives of the spectrum of dimensions f versus the Hölder exponent α. In combination with plots (a) and (b) in Fig. 6, this allows us to identify phase transitions at points A1, B1, and C and at the point carrying the double mark X1, X2. The Ehrenfest-like classification of phase transitions which we use is based on the spectrum of dimensions f, which can be treated as an analogue of entropy [7, 10]. Our classification is only inspired by the canonical Ehrenfest one, because the latter classification uses the chemical potential and not the entropy, although both are thermodynamic potentials. Apparently, f and df/dα are continuous functions of α, as opposed to d²f/dα². All these functions are multi-branched, but only the second derivative has separated branches. These branches diverge to ±∞ precisely at points A1, B1, and C, according to a power law with exponent −1/2. This means that at these points there are identical second order phase transitions (i.e., belonging to the same universality class), which confirms the behavior of the specific heat at these points given by Eq. (24).
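Both thermodynamic diagnostics used here can be checked numerically: the specific heat of Eq. (21) by a finite difference of α(q), and the square-root branch of Eq. (23) on the quadratic model of Eq. (22). A sketch with hypothetical parameter values (our naming throughout):

```python
import numpy as np

def specific_heat(qs, alpha):
    """Eq. (21): c(q) = -q^2 * d(alpha)/dq, via central differences."""
    return -qs**2 * np.gradient(alpha, qs)

# Quadratic model of alpha(q) near a turning point, Eq. (22); numbers hypothetical.
q_extr, alpha_s, curv = 2.0, 0.5, 0.1        # curv = d^2(alpha)/dq^2 at q_extr
q = np.linspace(q_extr - 1.0, q_extr, 2001)  # approach the turning point from below
alpha = alpha_s + 0.5 * curv * (q - q_extr) ** 2

# Lower branch of Eq. (23): df/dalpha = q_extr - sqrt(2|alpha - alpha_s| / |curv|),
# which must reproduce q itself, since df/dalpha = q by Eq. (20).
df_dalpha = q_extr - np.sqrt(2.0 * np.abs(alpha - alpha_s) / abs(curv))
```

On the same model, Eq. (24) gives c(q) ≈ −q²(q − q_extr)·curv, vanishing linearly at the turning point, in line with the spinodal interpretation in the text.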
At second order phase transition points, specific heats, susceptibilities, or other appropriate order parameters either diverge (obeying a non-trivial scaling law) or go to zero; the latter happens in our case. Let us note that the black curves (X1, A1) and (B1, X2) define the thermally metastable phases, while the blue curve (A1, B1) defines the unstable mixture of phases. If the system is located in this latter phase, it will spontaneously evolve towards a state which favors either higher fluctuations (above point A1) or smaller fluctuations (below point B1). The probability of choosing one of these two options depends on how close the state of the system lies to the edge of the phase (point A1 favors large fluctuations, while B1 favors small ones). Speaking in sociological terms, in this mixed-phase region the members' moods/opinions are divided, and the victory of one of them may lead either to a permanent increase in the diversity of the members' moods/opinions of the system or to their lasting calm. For the second unstable phase (also defined by the blue curve (C, D1)), a separate, simplified interpretation should be developed, as it is not placed between two metastable phases, although d²f/dα² diverges at transition point C in the same way as at points A1 and B1. We can only say that a system left alone in this phase will spontaneously evolve into the stable phase.

IV. CONCLUDING REMARKS

Our work is part of mainstream research on the problem of long-term memory, long-term dependence, and long-term correlations in time series [61]. By using the Legendre-Fenchel transform, we have examined the resulting multi-branched left-sided true multifractal properties of time series of inter-event times. We have chosen inter-event times for our research because they are a key measure of the activity of any system (not necessarily a complex one), research into which is only at its initial stage.
The relationships between inter-event times form the foundation of other dynamic relationships occurring in an evolving system. Our research focuses on the search for multifractality because it is, as of yet, the most general characterization of time series at a macro scale, enabling the study of their universal properties from an extended point of view (allowing their classification by means of their singularity spectra). However, deriving a microscopic model from knowledge of the multifractal structure of series of inter-event times is still under consideration. A step in this direction has been proposed in [10], where the surrogate model was the CTRW with a waiting-time distribution weighted by a stretched exponential, i.e., defined by some superstatistics. This is an approach sufficient to describe multifractality generated by a broadened distribution, but in the case of multifractality generated by long-term autocorrelations of inter-event times, it is still a major challenge.

[FIG. 7, displaced fragments. The panel shows the second derivative d²f(α)/dα² vs α, with points D1, D2, A1, A2, B1, B2, X1, X2, the inflection points IP1-IP4, and the thermally stable and unstable phases marked. Caption fragments: The second singular solid curve, having its local minimum at the replica of inflection point IP2 (also marked by IP2), is bound to branch (A1, IP2, B1) (blue curve) of the first order derivative, where points A1 and B1 can be considered spinodal decomposition points; between them lies a thermally unstable territory (see again the plot in Fig. 6(c) for details). The third singular solid curve, having its local maximum at the replica of inflection point IP1 (also denoted by IP1), is bound to branch (B1, IP1, A2). Of course, all branches of the first derivative are associated with the corresponding branches of the spectrum of dimensions, f vs α, seen clearly in the plot in Fig. 6(a).]
As is known, the search for true/real multifractality first requires resolving the role of at least the following main factors: (i) the main non-stationarity, (ii) the finite size effect, and (iii) a broadened distribution leading to true and/or spurious multifractality. Point (i) was solved by the detrending procedure described in Sec. II A, while points (ii) and (iii) are vividly illustrated in Fig. 8, where the Rényi scaling exponent τ(q) is presented for three different cases. The blue, (almost) linearly increasing solid curve is obtained from the test Poisson distribution. For the Poisson distribution, the number of transactions in each time interval i is drawn, and on this basis the local mean inter-event time ∆t_i^ν is determined (see Fig. 1 for details). These local mean times create a time series with a length equal to that of the whole empirical time series of inter-event times. Any spurious multifractality possibly present here is caused only by the finite length of the time series of inter-event times, which is (almost) the same as that of the empirical time series. Apparently, the spurious multifractality of the Poisson time series, caused only by the finite size effect, is negligible in this case. Therefore, the influence of the finite size effect on the real multifractality is also negligible. The red solid curve indicates a time series of inter-event times drawn from the distribution of empirical inter-event times (cf. plot (b) in Fig. 3); that is, it reflects a kind of shuffled empirical data. Thanks to this approach, long-term autocorrelations were removed from the drawn time series, whose length is equal (to a good approximation) to that of the empirical one. As can be seen, in this case too we deal with an almost linear dependence if we take into account the standard deviation, leading to the corridor limited by the dotted curves.
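The shuffling test just described rests on a simple fact: a random permutation preserves the one-point distribution exactly while destroying temporal dependence. A toy illustration with a synthetic clustered series (all parameters hypothetical, standing in for the empirical inter-event times):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy series with volatility clustering: an AR(1)-modulated magnitude,
# mimicking alternating clusters of short and long inter-event times.
n = 20000
x = np.empty(n)
x[0] = 1.0
noise = np.abs(rng.normal(size=n))
for t in range(1, n):
    x[t] = 0.95 * x[t - 1] + 0.05 * noise[t]

def lag_autocorr(a, j):
    """Normalized autocorrelation of the series a at lag j."""
    a = a - a.mean()
    return np.mean(a[:-j] * a[j:]) / np.var(a)

shuffled = rng.permutation(x)  # same histogram, no temporal structure
```

The original series stays noticeably autocorrelated at, e.g., lag 10, while the shuffled one does not. In the same spirit, shuffling the empirical inter-event times removes the correlation-driven part of the nonlinearity of τ(q) and leaves only the distribution-driven part.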
Only the black solid curve, obtained from the empirical time series of inter-event times by using our NMF-DFA, is sufficiently nonlinear (cf. also plot (d) in Fig. 5) to have a chance of generating multifractality. Thus we show, using the NMF-DFA developed here, that the empirical series of inter-event times gives a true multifractal located far beyond the finite-size component and other multifractal pollutions. We suggest that the true multifractality found here is caused by long-term autocorrelations between the absolute values of detrended inter-event time profiles (see Fig. 3(c) for details). These autocorrelations create a true antipersistent structure of fluctuation clusters of the inter-event times, seen clearly in Fig. 2(c) and Fig. 3(a), defining the volatility clustering effect. It is interesting that intraday empirical data are sufficient to detect true multifractality, despite the fact that the autocorrelations of the inter-event times mentioned are long-term, stretching over many days. A peculiar characteristic of our multifractal is the presence of negative spectra of dimensions in the vicinity of the turning point C, which could be justified by the appearance of events that occur exceptionally rarely (see [16] for some suggestions). This work is based on two main pillars. First, on the NMF-DFA approach constructed in this work, which was inspired by the canonical MF-DFA. Thanks to this approach, it has been proved that time series of inter-event times can have a multi-branched left-sided multifractal character. Second, the work proves that this type of multifractality can lead to phase transitions of the first and second orders. For traditional multifractality, the first order phase transition disappears, which reduces the area of metastable and unstable phases to zero. This means that traditional multifractality corresponds to critical or supercritical states of the system.
Thanks to this, we better understand why long-term correlations play a key role in the building of multifractality. In this situation, the details of the microscopic models leading to this type of multifractality do not play a significant role. From this point of view, the multifractality presented in this work is subcritical, with stable, metastable, and unstable phases still present. From the standpoint of this work, traditional multifractality can be treated only as one element of the full classification. In this way, the concept of multifractality has been broadened substantially. This paper is an extension of our earlier works [9, 10]. However, a crucial question arises about the next extension of the NMF-DFA formalism, to the autocorrelations of ordinary fluctuations and not only of their absolute values, as is done here.

FIG. 1. Schematic diagram defining inter-event times and their single-window (local) means. This mean for the ith time window, [i, i+1[, 1 ≤ i ≤ s (each window of length ∆), is given by the time average, ∆t_i^ν, of the inter-event times ∆t_{i,l}^ν, where n_i^ν ≥ 1 is the number of inter-event times associated with the ith time window. We define the inter-event time ∆t_{i,l}^ν as associated with the ith time window [i, i+1[ when at least the left border of ∆t_{i,l}^ν belongs to it. Using the ensemble average, we can write ∆t_i^ν ≈ ∆/n_i^ν, which is also used in the main text, where we focus on ranges s ≥ 1 (keeping simultaneously n_i^ν ≥ 1). For the Warsaw Stock Exchange, the duration of a single trading session is T = 28 200 sec.

FIG. 2. Intraday patterns for 14th January 2011 (ν = 9, Friday): typical dependences of significant quantities vs time window number i (plots (a)-(g)) and lag j (plot (h)). (a) The basic empirical quantity (shown in the form of a random comb), together with its average (the solid curve slightly below) made over the statistical ensemble of all trading Fridays from 14th January 2011 to 29 June 2017.
(b) Empirical profile U (the monotonically increasing broken curve), defining a directed (persistent) random walk, together with the solid (smooth, thin) curve. This latter curve represents the best fit by the third-order polynomial y, reproducing well an inflection point present in the empirical curve. This drawing has been supplemented with plot (c) (supported by plot (d)), clearly showing the fluctuating, wavy (antipersistent) structure of the deviation (fluctuation) U − y.

FIG. 3. Characteristic averaged quantities (for instance, for s = 94), where ⟨. . .⟩ means the average over a statistical ensemble of N_d days. (a) The average empirical (detrended) quantity ⟨U(i) − y(i)⟩ vs time window number 1 ≤ i ≤ s. Its typical antipersistent structure is clearly seen. This is a very significant result, proving that not only the single realization (that is, for a single day) shown in Fig. 2(c) has an antipersistent structure. For other values of s, analogous structures were obtained. The sharp increase in the curve at the end of the range is the well-known Runge effect of fitting by the polynomial (here denoted by y). (b) The empirical (not normalized) distribution of deviations U − y, collected from all N_d days, is presented in lin-log scale. Apparently, the empirical distribution of these deviations strongly differs from the corresponding (much narrower) Poisson one. Moreover, a systematic deviation from the initial exponential distribution (solid straight lines) is well seen for |U − y| > 200. Randomizing these (empirical) deviations from their distribution corresponds to the shuffling of inter-event times. (c) The averaged autocorrelation function $F_2(j;s) = \frac{1}{N_d}\sum_{\nu=1}^{N_d} F_2(j;\nu,s)$ vs j (solid curve).
It seems that it is a very slowly converging (waving) function, as it is roughly approximated by the (shifted) power law $F_2(j;s) = A/(a+j)^{\alpha} + \mathrm{const}$ (dashed curve), where the fitted power-law exponent α = 0.49 ± 0.43 is definitely smaller than 1, the fitted amplitude A = 1049 ± 387, while the background parameter const = 1519 ± 234 > ⟨U − y⟩² = 1190; hence, the shift parameter $a = \left[A/\left(\langle (U-y)^2\rangle - \mathrm{const}\right)\right]^{1/\alpha} = 1.0 \pm 0.90$, where ⟨(U − y)²⟩ =

FIG. 4. Plots of the function Fq(s) (defined by Eq. (18)) vs s for different values of −10 ≤ q ≤ 10. Vertical dashed lines define the region of scaling; the first one is located at s = 24 (≡ 19 min 35 s) and the second one at s = 120 (≡ 3 min 55 s).

FIG. 5. The q-dependence of key empirical characteristics of multifractality. The nonlinear dependence of these characteristics on q is well seen. Plots (a)-(f) present the dependence on q of the generalized Hurst exponent h(q), its spread ∆h(q) = h(−q) − h(q), the Rényi scaling exponent τ(q), the Rényi dimensions D(q), and the coarse Hölder exponent α(q), respectively. The vertical dashed lines visible in all plots define the range of positive values of q between the maximum and the minimum of the function α(q) vs q shown in plot (f). The tangent dashed straight line visible in plot (c) has been fitted to the linear section of the curve B(q) vs q. Additional thin solid olive curves present on all plots were obtained from the time series generated by the Poisson distribution. Apparently, their variations are negligible, which means that the influence of a finite-size effect on a time series with a size equal to the empirical one is negligible.

FIG. 6. Two plots: (a) and (b) (the enlarged part of plot (a)) give different views of the spectrum of dimensions f(α) (given by Eq. (19)) vs α, and (c) the specific heat c(q) (given by Eq. (21)) vs q, which clearly define thermally stable and unstable phases.
The vertical dashed lines c, b, a on plots (a) and (c), and b, a on plot (b), visibly define the range of positive values of q between the maximum and the minimum of the function α(q) vs q shown in Fig. 5(f) by the analogous lines. The vertical lines marked by a and c define the range of q where c(q) ≥ 0, that is, the range of thermal stability of the part of the system defined by the main (longest, increasing) branch of the spectrum of dimensions. Apparently, the side branch containing points B1 and A2 is also thermally stable. Notably, at the turning points A1, B1, C, second-order phase transitions occur. The vertical dashed line displayed on plot (b) at point α(q = 1) determines the position of the tangent dashed straight line with a slope of 1.0 for the main branch. This is a verification of the contact character of the L-F transform, which obeys $f(\alpha(q=1)) = \alpha(q=1)$ and $\left.\frac{df(\alpha)}{d\alpha}\right|_{q=1} = 1.0$.

FIG. 7. Illustration of the Ehrenfest-like classification of phase transitions [62]. The first (black solid curve) and second (four separated red solid curves) derivatives of f over α, showing three two-branched second-order singularities of f vs α. Three dashed vertical straight lines (vertical asymptotics) c, b, and a are located at the α coordinates of the singularities. The main branch of the derivative df/dα is represented by the black curve (C, D2, B2, X1, A1), also containing the inflection point IP3. The corresponding red curve of the second derivative d²f/dα² contains the replica of the inflection point IP3, which diverges at asymptotics c and a. This curve is singular at the turning points: the left one at the α coordinate of point C and the right one at the α coordinate of point A1. The other three singular curves (also in red) are associated with the three side branches of the first derivative df/dα.
The first of them (ending at point D1) has its local minimum at the replica of the inflection point IP4, and it is bound to branch (C, D1) of the first derivative containing the inflection point IP4. This branch is thermally unstable (see the plot in Fig. 6(c) for details).

FIG. 8. Comparison of three types of the Rényi scaling exponent τ(q) vs q. (i) The blue, (almost) linearly increasing solid curve is obtained from the Poisson distribution. (ii) The red solid curve concerns time series of inter-event times drawn from the distribution of empirical inter-event times. (iii) The black solid curve is obtained from the empirical time series of inter-event times by using our NMF-DFA. This comparison shows why we mainly studied time series by our NMF-DFA method.

Data are prepared, for instance, for a typical time window of length ∆ = 300 [sec]; hence, the daily total number of time windows s = 28 200/300 = 94 (as the duration of a daily stock market session equals 7 [h] 50 [min] = 28 200 [sec]). It is worth mentioning that the mean number of transactions within a single time window ∆ = 300 [sec] is about 20 (as the empirical mean time distance between subsequent transactions equals approximately 15 [sec]). This would, however, require the transition to complex scaling exponents, e.g. the complex generalized Hurst exponent, which would go significantly beyond the scope of this work.

ACKNOWLEDGMENTS

R.K. is grateful for inspiring discussions with S. Drożdż, D. Grech, and P. Oświȩcimka.

P. Oświȩcimka, J. Kwapień, and S. Drożdż, Phys. Rev. E 74, 016103 (2006).
J. Kwapień and S. Drożdż, Phys. Rep. 515, 115 (2012).
P. Oświȩcimka, S. Drożdż, M. Forczek, S. Jadach, and J. Kwapień, Phys. Rev. E 89, 023305 (2014).
S. Drożdż and P. Oświȩcimka, Phys. Rev. E 91, 030902(R) (2015).
B.B. Mandelbrot, A Multifractal Walk Down Wall Street, Scientific American 280(2), 70 (1999).
R.J. Buonocore, T. Aste, and T. Di Matteo, Phys. Rev. E 95, 042311 (2017).
C. Beck and F. Schlögl, Thermodynamics of chaotic systems. An introduction (Cambridge Univ. Press, Cambridge 1995).
Z.-Q. Jianga, W.-J. Xiea, W.-X. Zhoua and D. Sornette, arXiv:1805.04750 (2018).
J. Perelló, J. Masoliver, A. Kasprzak, and R. Kutner, Phys. Rev. E 78, 036108 (2008).
A. Kasprzak, R. Kutner, J. Perelló, and J. Masoliver, Eur. Phys. J. B 76, 513 (2010).
B.B. Mandelbrot, C.J.G. Evertsz, and Y. Hayakawa, Phys. Rev. A 42, 4528 (1990).
B.B. Mandelbrot and C.J.G. Evertsz, Exactly Self-similar Left-sided Multifractals, in Fractals and Disordered Systems, A. Bunde and S. Havlin (Eds.) (Springer-Verlag, Heidelberg, New York 1996), Chapt. 10.
R. Blumenfeld and A. Aharony, Phys. Rev. Lett. 62, 2977 (1989).
T. Bohr, P. Cvitanowić, and M. H. Jensen, Europhys. Lett. 6, 445 (1988).
J. Lee and H.E. Stanley, Phys. Rev. Lett. 61, 2945 (1988).
M.H. Jensen, G. Paladin, and A. Vulpiani, Phys. Rev. E 50(6), 4352 (1994).
Th.C. Halsey, K. Honda, and B. Duplantier, J. Stat. Phys. 85, 681 (1996).
D. Schertzer, S. Lovejoy, and P. Hubert, An Introduction to Stochastic Multifractal Fields, in ISFMA Symposium on Environmental Science and Engineering with related Mathematical Problems, Beijing, A. Ern and W. Liu (Eds.), High Education Press 4, 106 (2002).
B. Enescu, K. Ito, and Z.R. Struzik, Geophysical Journal International 164, 63-74 (2006).
M. V. Rodkin, Izvestiya Physics Sol. Earth 37, 663 (2001).
D. Sornette and G. Ouillon, Phys. Rev. Lett. 94, 038501 (2005).
S. Lovejoy and D. Schertzer, The Weather and Climate. Emergent Laws and Multifractal Cascades (Cambridge Univ. Press, Cambridge 2013).
Xu Jing-Jing and Hu Fei, Atmospheric and Oceanic Science Lett. 8, 72 (2015).
B. B. Mandelbrot, Fractals and Scaling in Finance (Springer-Verlag, New York, 1997).
Yasmine Hayek Kobeissi, Multifractal Financial Markets. An Alternative Approach to Asset and Risk Management, Springer Briefs in Finance (Book 4) (Springer-Verlag, Berlin 2013).
A. L. Karperien, H. Jelinek, and N. Milosevic, Multifractals: a review with an application in neuroscience, in Proceed. of 18th International Conference on Control Systems and Computer Science: Applications of Fractal Analysis in Medicine, May 2011, p. 1.
D. Fettenhorf, R. A. Kraft, R. A. Sandler, I. Opris, Ch. A. Sexton, V. Z. Mamarells, R. E. Hampson, and S. A. Deadwyler, Front. Syst. Neurosci. 9, 130 (2015).
J. T. Flick and J. Joseph, Method for Diagnosing Heart Disease, Predicting Sudden Death and Analyzing Treatment Response Using Multifractal Analysis, United States Patent US6993377 B2, 31 January 2006.
P.Ch. Ivanov, L.A. Nunes Amaral, A.L. Goldberger, S. Havlin, M.G. Rosenblum, H.E. Stanley, and Z.R. Struzik, CHAOS: An Interdisciplinary Journal of Nonlinear Science 11, 641-652 (2001).
Z.R. Struzik, J. Hayano, R. Soma, S. Kwak, and Y. Yamamoto, IEEE Transactions on Biomedical Engineering 53, 89-94 (2006).
Z. Nagy, P. Mukli, P. Herman, and A. Eke, Frontiers in Physiology, Methods Article, 26 July 2017, doi.org/10.3389/fphys.2017.00533.
J. W. Kantelhardt, S. A. Zschiegner, E. Koscielny-Bunde, A. Bunde, S. Havlin, and H. E. Stanley, Physica A 316, 87 (2002).
J.-P. Bouchaud, M. Potters, and M. Meyer, Eur. Phys. J. B 13, 595 (2000).
M. Montero and J. Masoliver, Phys. Rev. E 76, 061115 (2007).
V. Tejedor and R. Metzler, J. Phys. A: Math. Theor. 43, 082002 (2010).
M. Magdziarz, R. Metzler, W. Szczotka, and P. Zebrowski, J. Stat. Mech., P04010 (2012).
P. Oświȩcimka, J. Kwapień, and S. Drożdż, Physica A 347, 626 (2005).
P. Oświȩcimka, J. Kwapień, and R. Rak, Acta Phys. Pol. B 36, 2447 (2005).
P. Oświȩcimka, S. Drożdż, R. Gȩbarowski, A.Z. Górski, and J. Kwapień, Acta Phys. Pol. B 46, 1579 (2015).
M. Denys, T. Gubiec, R. Kutner, M. Jagielski, and H.E. Stanley, Phys. Rev. E 94, 042305 (2016).
J. W. Haus and K. W. Kehr, Phys. Rep. 150, 263 (1987).
J.-P. Bouchaud and A. Georges, Phys. Rep. 195, 127 (1990).
J. Klamut and T. Gubiec, arXiv:1807.01934 [q-fin.ST] (2018).
W. Feller, An Introduction to Probability Theory, Vols. 1 and 2, Wiley, New York (1971).
E. Scalas, Physica A 362, 225 (2006).
E. Scalas, Chaos Soliton. Fract. 34, 33 (2007).
M. Politi and E. Scalas, Physica A 387, 2025 (2008).
D. Grech and G. Pamua, Acta Phys. Pol. A 121, B-34 (2012).
L. Czarnecki and D. Grech, Acta Phys. Pol. A 117, 623 (2010).
J. Ludescher, M. I. Bogachev, J. W. Kantelhardt, A. Y. Schumann, and A. Bunde, Physica A 390, 2480 (2011).
A. Y. Schumann and J.W. Kantelhardt, Physica A 390, 2637 (2011).
R. Rak and D. Grech, Physica A 508, 48 (2018).
P. Jizba and Jan Korbel, Modeling Financial Time series: Multifractal Cascades and Rényi Entropy, in Interdisciplinary Symposium on Complex Systems, Emergence, Complexity and Computation 8 (ISCS 2013), A. Sanayes, I. Zelinka, and O.E. Rssler (Eds.) (Springer-Verlag, Berlin, Heidelberg 2014), p. 227.
Q. Cheng, Nonlin. Processes Geophys. 21, 477 (2014).
P. Jizba and T. Arimitsu, Annals of Physics 312, 17 (2004).
M. Kale and F. B. Butar, J. Math. Sci & Math. Edu. 5, 5 (2011).
B. B. Mandelbrot, Physica Scripta 32, 257 (1985).
T. Gneiting and M. Schlather, SIAM Review 46, 269 (2004).
A. F. Boriviera, Physica A 390, 4426 (2011).
G. Jaeger, Arch. Hist. Exact Sci. 53, 51 (1998).
[]
[ "Biosignal Analysis with Matching-Pursuit Based Adaptive Chirplet Transform", "Biosignal Analysis with Matching-Pursuit Based Adaptive Chirplet Transform" ]
[ "Jie Cui \nInstitute of Biomaterials and Biomedical Engineering\nUniversity of Toronto\nM5S 3G9TorontoONCanada\n", "Dinghui Wang \nBarrow Neurological Institute, St. Joseph's Hospital and Medical Center\n85013PhoenixAZUSA\n" ]
[ "Institute of Biomaterials and Biomedical Engineering\nUniversity of Toronto\nM5S 3G9TorontoONCanada", "Barrow Neurological Institute, St. Joseph's Hospital and Medical Center\n85013PhoenixAZUSA" ]
[]
Chirping phenomena, in which the instantaneous frequencies of a signal change with time, are abundant in signals related to biological systems. Biosignals are non-stationary in nature and the timefrequency analysis is a viable tool to analyze them. It is well understood that Gaussian chirplet function is critical in describing chirp signals. Despite the theory of adaptive chirplet transform (ACT) has been established for more than two decades and is well accepted in the community of signal processing, application of ACT to bio-/biomedical signal analysis is still quite limited, probably because that the power of ACT, as an emerging tool for biosignal analysis, has not yet been fully appreciated by the researchers in the field of biomedical engineering. In this paper, we describe a novel ACT algorithm based on the "coarse-refinement" scheme. Namely, the initial estimate of a chirplet is implemented with the matching-pursuit (MP) algorithm and subsequently it is refined using the expectation-maximization (EM) algorithm, which we coin as MPEM algorithm. We emphasize the robustness enhancement of the algorithm in face of noise, which is important to biosignal analysis, as they are usually embedded in strong background noise. We then demonstrate the capability of our algorithm by applying it to the analysis of representative biosignals, including visual evoked potentials (bioelectrical signals), audible heart sounds and bat ultrasonic echolocation signals (bioacoustic signals), and human speech. The results show that the MPEM algorithm provides more compact representation of signals under investigation and clearer visualization of their time-frequency structures, indicating considerable promise of ACT in biosignal analysis. The MATLAB R code repository is hosted on GitHub R for free download (https://github.com/jiecui/mpact).
null
[ "https://arxiv.org/pdf/1709.08328v1.pdf" ]
67,203,546
1709.08328
2ca4ef5f72f8212823e4d2fa2f32ad7e7da6c653
Biosignal Analysis with Matching-Pursuit Based Adaptive Chirplet Transform

Jie Cui, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
Dinghui Wang, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ 85013, USA

Keywords: Chirplet; Biosignal processing; Matching pursuit; Time-frequency analysis; Sparse representation; Noise robustness

Chirping phenomena, in which the instantaneous frequencies of a signal change with time, are abundant in signals related to biological systems. Biosignals are non-stationary in nature and time-frequency analysis is a viable tool to analyze them. It is well understood that the Gaussian chirplet function is critical in describing chirp signals. Although the theory of the adaptive chirplet transform (ACT) has been established for more than two decades and is well accepted in the community of signal processing, application of ACT to bio-/biomedical signal analysis is still quite limited, probably because the power of ACT, as an emerging tool for biosignal analysis, has not yet been fully appreciated by researchers in the field of biomedical engineering. In this paper, we describe a novel ACT algorithm based on the "coarse-refinement" scheme. Namely, the initial estimate of a chirplet is obtained with the matching-pursuit (MP) algorithm and subsequently it is refined using the expectation-maximization (EM) algorithm, which we coin the MPEM algorithm. We emphasize the robustness enhancement of the algorithm in the face of noise, which is important to biosignal analysis, as biosignals are usually embedded in strong background noise. We then demonstrate the capability of our algorithm by applying it to the analysis of representative biosignals, including visual evoked potentials (bioelectrical signals), audible heart sounds and bat ultrasonic echolocation signals (bioacoustic signals), and human speech.
The results show that the MPEM algorithm provides a more compact representation of the signals under investigation and a clearer visualization of their time-frequency structures, indicating considerable promise of ACT in biosignal analysis. The MATLAB® code repository is hosted on GitHub® for free download (https://github.com/jiecui/mpact).

Introduction

Chirping activity can be encountered in many natural signals. A chirp is a signal in which the instantaneous frequency changes as a function of time. Chirps can not only arise in such a variety of physical phenomena as radar signals (Mann and Haykin 1991; Wang et al. 2003), mechanical vibrations (Guo, Yu, and Hu 2006), ultrasonic echoes (Lu et al. 2005), seismic waveforms (Boashash and Whitehouse 1986), transionospheric signals (Shie Qian, Mark E. Dunham, and Freeman 1995; Doser and M. E. Dunham 1997), and gravitational waves (Jenet and Prince 2000; Candes, Charlton, and Helgason 2008), but also, more relevant to biomedical engineering, abound in biological systems. One may find chirping phenomena, for example, in complex bird songs of different bird species (Glotin, Ricard, and Balestriero 2016), in powerful whale vocalizations (M. Bahoura and Y. Simard 2008; Glotin, Ricard, and Balestriero 2016), in wolf chorus signals (Dugnol et al. 2008), in human speech (Kepesi and Weruaga 2006), in neural responses correlated with auditory cortical processes (Mercado, Myers, and Gluck 2000), and in electromagnetic fields related to brain activity, measured as the general electroencephalogram (EEG) (Kuś, Różański, and Piotr Jerzy Durka 2013; Sanei and Chambers 2007) and event-related potentials (ERPs) (J. Cui and Wong 2006b; J. Cui and Wong 2008). It is well understood that the chirp function, particularly the Gaussian chirplet (Mann and Haykin 1995), is one of the most important functions for characterizing signals with variable frequency. However, its applications are relatively limited, mainly because of two major obstacles.
(1) One difficulty is that the chirplet functions do not generally constitute an orthogonal basis; as such, the decomposition of a signal into the basis functions is not unique, and the optimal approximation of a signal by a linear expansion over a chirplet basis is an NP-hard problem (Davis, S. Mallat, and Avellaneda 1997). (2) The other one is that biosignals are generally recorded under conditions of low signal-to-noise ratio (SNR). For example, signals of visual evoked potentials (VEPs) are typically measured under the condition of SNR ≈ −10 dB for a single trial and SNR ≈ 0 dB for an average signal of 50 trials (Regan 1989; Jie Cui 2006). Therefore, chirplet decomposition of a low-SNR biosignal is in essence an estimation problem. To overcome these limitations in the analysis of biological signals, the solutions suggested by previous studies may be roughly classified into two categories: single-chirplet estimation from segmented signals and multi-component chirplet extraction. For instance, in a study to classify the calls of North Atlantic blue whales, Bahoura et al. (M. Bahoura and Y. Simard 2008; Mohammed Bahoura and Yvan Simard 2012) first employed a bandpass filter to suppress the influence of background noise on the main frequency band of the whale call and then adopted the chirplet transform to approximate the call with a single chirplet atom only. This approach avoided the problem of multi-component extraction, but the whale vocalization could not be completely characterized. To further reduce the computational cost, some studies fixed the rate of frequency change (i.e., the chirp-rate) of the lone chirplet (e.g., Shaik, Naganjaneyulu, and Narasimhadhan 2015). A similar approach of extracting a single chirplet atom after partitioning the original signals was also proposed in the analysis of bioacoustics (Glotin, Ricard, and Balestriero 2016) and visual evoked potentials (J. Cui and Wong 2006a; J. Cui and Wong 2008).
In general, a long-term signal was partitioned into contiguous, non-overlapping, equal-length segments using rectangular truncation. A single chirplet was then estimated from each segment, and hence the entire signal was approximated by a sequence of non-overlapping chirplets. The advantage of this approach is its capability of representing the main time-frequency features of a signal at relatively low computational cost, which is often crucial in the cases of long-term monitoring and data compression. However, the representation with a single chirplet is not able to characterize signals with complex time-frequency structures, such as EEG, which have multiple major components within overlapping time intervals. For these signals, a multi-component decomposition is necessary. Typically, the estimation of chirplets from multi-component biosignals involves a "coarse-refinement" scheme. For example, in studies quantifying the frequency-changing character of sleep spindles in EEG (Schönwald et al. 2011; Carvalho et al. 2014), the authors obtained the initial estimate of one chirplet (coarse step) by adopting the matching pursuit (MP) algorithm with Gabor logon (or atom) dictionaries (S. G. Mallat and Z. Zhang 1993). In this first step it was assumed that the chirp-rate of the estimated spindle was zero. Subsequently, in the second step (refinement step), the time spread and the rate of linear frequency change of the spindle were re-estimated with a procedure named "ridge pursuit" (Gribonval 2001), in which the time-frequency structure around the initially estimated Gabor logon was further explored at several testing points and then the time spread and chirp-rate were estimated through a fast parabolic interpolation. After the estimation of one chirplet spindle, the component was extracted from the signal and the next one was estimated from the residual signal, as proposed in the standard procedure of the MP algorithm (S. G. Mallat and Z. Zhang 1993).
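The idea of refining a coarse estimate through a fast parabolic interpolation can be sketched in a few lines of Python. The example below is a hypothetical illustration, not Gribonval's ridge-pursuit implementation: the squared projection of the signal onto a test chirplet is sampled at three trial chirp-rates around a coarse estimate, and the vertex of the interpolating parabola gives the refined chirp-rate. All parameter values are arbitrary, and the unit-energy chirplet amplitude (pi*dt**2)**(-1/4) is an assumption.

```python
import numpy as np

def chirplet(t, tc, wc, c, dt):
    # Gaussian chirplet; unit-energy amplitude (pi*dt**2)**(-1/4) assumed
    return ((np.pi*dt**2)**(-0.25)
            * np.exp(-0.5*((t - tc)/dt)**2 + 1j*(c*(t - tc) + wc)*(t - tc)))

fs = 2000.0
t = np.arange(0, 1, 1/fs)
tc, wc, dt = 0.5, 2*np.pi*200.0, 0.06
c_true = 2*np.pi*125.0                       # chirp-rate of the test signal
f = chirplet(t, tc, wc, c_true, dt)

def proj2(c):
    # squared projection |<f, g_c>|^2 onto a chirplet with trial chirp-rate c
    g = chirplet(t, tc, wc, c, dt)
    return np.abs(np.sum(f*np.conj(g))/fs)**2

c0, h = 2*np.pi*120.0, 2*np.pi*30.0          # coarse estimate and test spacing
ym, y0, yp = proj2(c0 - h), proj2(c0), proj2(c0 + h)
c_refined = c0 + 0.5*h*(ym - yp)/(ym - 2*y0 + yp)   # parabola vertex
```

A single parabolic step like this reduces the chirp-rate error of the coarse estimate several-fold without a dense grid search, which is the appeal of the refinement stage.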
It is worth noting that Yin et al. (Yin, S. Qian, and Feng 2002) proposed a similar two-step approach to multi-component chirplet extraction. Similar to Gribonval's method, the initial parameters of a chirplet were estimated by fixing the chirp-rate to zero. Unlike the former method, however, after a clever regrouping of the equations of the inner product between the residual signal and the chirplet, the parameters were refined with conventional curve fitting, reducing the computational cost. Essentially, both methods acquire the Gabor logon (chirp-rate = 0) as the initial guess and then refine the chirp-rate and other parameters of the chirplet with linear operations. However, this Gabor-to-chirplet approach generally lacks robustness to strong noise, which is relevant to biosignal processing, especially when the centers of crossing chirplet components are in close vicinity on the time-frequency plane (Lyu and He 2015). In this article, we present a new approach to multi-component chirplet decomposition for the analysis of signals of biological nature. It inherits the idea of the "coarse-refinement" scheme. Unlike the previous works, however, the initial estimates are obtained with the MP algorithm using a Gaussian chirplet dictionary (Bultan 1999), rather than Gabor logons. Furthermore, the estimates of the chirplets are further refined using the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977; Feder and Weinstein 1988; Mann and Haykin 1992). We refer to this approach as the MPEM algorithm. Our results indicate that this approach is superior especially when the analyzed signal is embedded in strong noise and the signal components cross each other on the time-frequency plane. Although the theories of the chirplet transform and overcomplete representation are well known in the community of signal processing, their application in the field of bio-/biomedical engineering is relatively new.
We thus provide adequate background information and a detailed description of the implementation in the next section. Subsequently, we demonstrate the merits of the proposed method by numerical simulation and several applications to real biological data, followed by the discussion. Finally, we summarize our work in the last section. The code of the algorithm is freely available via GitHub® (https://github.com/jiecui/mpact) under the GNU (GPLv3) public license.

Matching-pursuit based adaptive chirplet transform

The chirplet transform was formulated as a generalization of the Gabor and wavelet transforms in the early 1990s (Mann and Haykin 1991; Baraniuk and Jones 1993; Mann and Haykin 1995). There is particular interest in developing the Gaussian chirplet transform (GCT), because the basis function, the Gaussian chirplet, is implemented as a modification of the original Gabor logon function (Gabor 1946), and thus it inherits some beneficial properties. Particularly, thanks to the Gaussian envelope, the Gaussian chirplet has the highest joint time-frequency resolution and its Wigner-Ville distribution (WVD) is non-negative (Cohen 1995). Moreover, the mathematical manipulation of the GCT is usually tractable. Thus, the GCT plays a unique role in time-frequency analysis.

From "wavelet" to "chirplet"

The wavelet transform has been proposed to partially overcome the problem of the fixed time-frequency resolution of the short-time Fourier transform (STFT). Here we denote an arbitrary piece of sinusoid resulting from a windowing operation as a "wavelet", with the only constraint being that the windowing function is a Gaussian function; hence each "wavelet" is in fact a Gabor logon. A family of basis "wavelet" functions can then be derived from a mother "wavelet" by applying to it two operations: scaling (or time-spread) and time translation. A "wavelet" has good time resolution but poor frequency resolution in the higher frequency bands, and vice versa for the lower frequencies.
This is the reason why the wavelet transform is well suited for analyzing signals with discontinuities or abrupt changes. However, this property also means that the wavelet transform does not provide precise estimates of the time-frequency structures of signal components that do not match the tradeoff characteristics of the wavelet. The wavelet is not efficient in representing chirp-like signals either. In order to overcome these difficulties, the "wavelet" is further modified to allow it to rotate in the time-frequency plane (the "chirping" operation introduced below). This is equivalent to windowing the chirp signal with a Gaussian window, and the resultant function is coined the "chirplet". The chirplet can also be constructed from a unitary Gaussian function $g(t) = \pi^{-1/4}\exp(-t^2/2)$ by applying four mathematical operations to it (Fig. 1; see Eq. (1) for the notation), i.e., (1) scaling, $(\pi\Delta_t^2)^{-1/4}\exp(-t^2/2\Delta_t^2)$; (2) chirping, $\pi^{-1/4}\exp(-t^2/2)\exp(jct^2)$; (3) time-shift, $\pi^{-1/4}\exp[-(t-t_c)^2/2]$; and (4) frequency-shift, $\pi^{-1/4}\exp(-t^2/2)\exp(j\omega_c t)$.

Figure 1: Construction of a Gaussian chirplet. A chirplet may be constructed by applying the four mathematical operations to a unitary Gaussian function $g(t) = \pi^{-1/4}\exp(-t^2/2)$. Panel (A) displays a 3-D visualization of the WVD of the unitary Gaussian function, while Panel (B) depicts the effect on the unit Gaussian (represented as a WVD contour) of the four operations, that is, scaling, chirping, time-shift, and frequency-shift, respectively.
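A minimal Python sketch of this construction applies the scaling, chirping, time-shift, and frequency-shift operations to the unitary Gaussian; the unit-energy amplitude (pi*dt**2)**(-1/4) is assumed, and the illustrative parameter values are not from the text. The result keeps unit energy and has a linearly varying instantaneous frequency, as a chirplet should:

```python
import numpy as np

def gaussian_chirplet(t, tc, wc, c, dt):
    """Gaussian chirplet: scaling (dt) and time-shift (tc) act on the Gaussian
    envelope; chirping (c) and frequency-shift (wc) act on the phase."""
    envelope = (np.pi*dt**2)**(-0.25) * np.exp(-0.5*((t - tc)/dt)**2)
    phase = (c*(t - tc) + wc) * (t - tc)
    return envelope * np.exp(1j*phase)

fs = 1000.0                                   # illustrative sampling rate [Hz]
t = np.arange(0, 1, 1/fs)
g = gaussian_chirplet(t, tc=0.5, wc=2*np.pi*50, c=2*np.pi*100, dt=0.05)

energy = np.sum(np.abs(g)**2) / fs            # Riemann sum of |g|^2, close to 1
# instantaneous frequency [Hz]: linear in t, slope 2c/(2*pi) = 200 Hz/s here
inst_freq = np.diff(np.unwrap(np.angle(g))) * fs / (2*np.pi)
```

The slope of `inst_freq` directly exposes the chirp-rate, which is what the rotational "chirping" operation adds on top of a plain Gabor logon.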
A sequential application of these operations leads to a family of wave packets with four adjustable parameters, called Gaussian chirplets:

g_{t_c, \omega_c, c, \Delta_t}(t) = \frac{1}{\sqrt{\sqrt{\pi}\,\Delta_t}} \exp\left\{ -\frac{1}{2} \left( \frac{t - t_c}{\Delta_t} \right)^2 + j \left[ c\,(t - t_c) + \omega_c \right] (t - t_c) \right\},    (1)

where j = \sqrt{-1}, t_c is the time center, \omega_c the frequency center, \Delta_t > 0 the effective time spread, and c the chirp-rate that characterizes the "quickness" of frequency changes. The effects of the four operations on the WVD of a chirplet are shown in Fig. 1. It can be seen that the chirplet is simply a natural extension of the "wavelet" obtained by applying an additional chirping, or rotational, operation. Indeed, both the Gabor logon and the "wavelet" are just special cases of the chirplet, namely the case where the chirp-rate is zero.

Gaussian chirplet transform and adaptive analysis

The Gaussian chirplet transform (GCT) of a signal f(t) is defined as its inner product with the Gaussian chirplet g_{t_c, \omega_c, c, \Delta_t}(t) of Eq. (1):

a_I = \langle f, g_I \rangle = \int_{-\infty}^{+\infty} f(t)\, g_I^*(t)\, dt,    (2)

where I = (t_c, \omega_c, c, \Delta_t) \in \mathbb{R}^3 \times \mathbb{R}^+ denotes the continuous index set of the chirplet parameters and '*' the complex conjugate. The coefficient a_I is the projection of the signal f(t) onto the time-frequency region specified by the chirplet g_I; the absolute value of the coefficient is the amplitude of the projection. An arbitrary signal can then be represented as a linear combination of Gaussian chirplets:

f(t) = \sum_{n=1}^{P} a_{I_n} g_{I_n}(t) + R^P f(t) = f_P(t) + R^P f(t),    (3)

where P is the number of chirplets, I_n is the parameter set of the n-th chirplet, f_P(t) is the P-th order approximation of the signal, and R^P f(t) denotes the residue. Notice that the coefficient a_{I_n} is complex, so the decomposition at each iteration n is described by six real parameters: two from a_{I_n} and the other four from I_n. The calculation of a_{I_n} involves selecting g_{I_n} from a predefined set of chirplets known as a dictionary.
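Eqs. (1) and (2) translate directly into code. The following Python sketch (the paper's reference implementation is in MATLAB; the function names here are our own) builds a unit-energy Gaussian chirplet on a discrete grid and computes the GCT coefficient as a discretized inner product:

```python
import numpy as np

def gaussian_chirplet(t, tc, wc, c, dt):
    """Unit-energy Gaussian chirplet of Eq. (1):
    envelope (pi*dt^2)^(-1/4) * exp(-0.5*((t-tc)/dt)^2),
    phase (c*(t-tc) + wc)*(t-tc)."""
    env = (np.pi * dt ** 2) ** -0.25 * np.exp(-0.5 * ((t - tc) / dt) ** 2)
    phase = (c * (t - tc) + wc) * (t - tc)
    return env * np.exp(1j * phase)

def gct_coefficient(f, g, dt_sample):
    """Discretized inner product a_I = <f, g_I> of Eq. (2)."""
    return np.sum(f * np.conj(g)) * dt_sample
```

Testing a chirplet against itself returns a coefficient of approximately 1, reflecting the unit-energy normalization of Eq. (1).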
The approach is then to find the optimal subset of P chirplets from the dictionary so as to minimize the residual norm \|f - f_P\|. Unfortunately, the optimal solution for a_{I_n} and I_n is an NP-hard problem (Davis, S. Mallat, and Avellaneda 1997), i.e., there exists no known polynomial-time algorithm to solve it. In practice, suboptimal techniques have been developed (e.g., S. Qian and Chen 1994; Bultan 1999; Gribonval 2001; Yin, S. Qian, and Feng 2002), and we describe one such approach here. The essence is to approximate the signal's energy in the time-frequency plane by straight lines with arbitrary slopes (Jie Cui 2006, p. 40). The general signal of interest may be characterized by the following model, which assumes the signal is a sum of P chirplets in complex, white, Gaussian noise:

f(t) = \sum_{n=1}^{P} a_{I_n} g_{I_n}(t) + w(t),    (4)

where w(t) denotes the additive noise. We assume that any real signal of interest has been converted to a complex (analytic) signal, whose real part is the original signal and whose imaginary part is its Hilbert transform (Cohen 1995; Patrick Flandrin 1999), so as to simplify the theory. To decompose a given signal, two procedures are involved at each iterative step: (1) initial coarse estimates obtained using a chirplet matching pursuit (MP) algorithm, and (2) progressive refinement of the estimates with the expectation-maximization (EM) procedure (Table 2). The initial stage of the algorithm includes the construction of a chirplet "dictionary" and the initialization of the residue, R^0 f = f. A dictionary is a repertoire of chirplet basis functions selected to cover the entire time-frequency plane efficiently. We follow the method proposed by Bultan (Bultan 1999) and summarize the discretization of the parameters in Table 1. For a signal of size N, the number of decomposition levels D is determined from N and the radix a. The first level in the decomposition is denoted as Level Zero.
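The conversion of a real signal to its analytic counterpart mentioned above can be sketched with the standard FFT-based construction (this is generic code, not from the paper; it is equivalent to what, e.g., SciPy's `hilbert` routine does):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real sequence x: zero out the negative
    frequencies and double the positive ones, so that the real part
    equals x and the imaginary part is its Hilbert transform."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0       # Nyquist bin is kept once
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)
```

For a pure cosine at an integer bin frequency this yields exactly the corresponding complex exponential.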
Next, the scale index k and angle index m are calculated, from which the discrete chirp-rate c and time-spread \Delta_t are found. The time-center t_c and frequency-center \omega_c are directly determined by the signal size N. The parameter i_0 indicates the first level at which a logon is rotated. The reason for introducing i_0 may be understood as follows: a chirplet is close to the unitary Gabor logon if its time-spread is close to one, in which case there is little point in rotating it. The parameter i_0 may thus be used to avoid unnecessary rotation of chirplets close to the unitary logon.

Table 1: Discretization of the chirplet dictionary parameters.
\gamma \in [0, N), \gamma \in \mathbb{Z}  --  signal range
T = N  --  normalized time range
F = 2\pi  --  normalized frequency range
D = \frac{1}{2} \log_a N  --  number of levels of decomposition
k \in [0, D - i_0), k \in \mathbb{Z}  --  scale (time-spread) index
m_k = 4 a^{2k}  --  number of chirplets at each scale
M = N^2 (i_0 + \sum_k m_k)  --  total number of chirplets in the dictionary
m \in [0, m_k - 1], m \in \mathbb{Z}  --  chirp-rate/rotational-angle index
\alpha_m = \arctan(m / a^{2k})  --  discretized angle at each scale
t_c = \gamma\, T/N = \gamma  --  discrete time-center of the chirplets
\omega_c = \gamma\, F/N = 2\pi\gamma/N  --  discrete frequency-center of the chirplets
c = (F/T) \tan(\alpha_m) = (F/T)\, m/\Delta_t  --  discrete chirp rate
\Delta_t = a^{2k}  --  discrete time-spread

At each iteration, a single (new) chirplet g_{I_n} and its coefficient a_{I_n} are determined from R^P f(t). This is termed "coarse estimation". The result is then optimized using a Newton-Raphson method to refine the match. The refined result is subtracted from the signal, and the steps are repeated to estimate a new chirplet from the residue R^{P+1} f(t). We emphasize that the adaptive nature of the algorithm comes from the optimal selection of the basis functions for the decomposition. The parameters of these functions are predefined in the dictionary, which differs from the approach of adaptive filtering, where the parameters are varied on a sample-by-sample basis.
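Our reading of the Table 1 discretization can be sketched as follows (Python rather than the paper's MATLAB; the handling of the i_0 unrotated levels is our interpretation, and the function name is our own):

```python
import math

def chirplet_dictionary_params(N, a=2, i0=1):
    """Per-scale dictionary parameters following Table 1: radix a,
    first rotated level i0.  Returns (k, time_spread, n_angles,
    chirp_rates) for each scale and the total dictionary size M."""
    T, F = N, 2 * math.pi                    # normalized time/frequency range
    D = int(round(0.5 * math.log(N, a)))     # number of decomposition levels
    scales, total_angles = [], 0
    for k in range(D - i0):
        dt = a ** (2 * k)                    # discrete time-spread
        mk = 4 * a ** (2 * k)                # chirplets (angles) at this scale
        # c = (F/T) tan(alpha_m) with tan(alpha_m) = m / a^(2k) = m / dt
        chirp_rates = [F / T * m / dt for m in range(mk)]
        scales.append((k, dt, mk, chirp_rates))
        total_angles += mk
    M = N * N * (i0 + total_angles)          # total number of chirplets
    return scales, M
```

For example, with N = 64, a = 2 and i_0 = 1 this gives D = 3 levels, two rotated scales with time-spreads 1 and 4, and a dictionary of M = 64^2 (1 + 4 + 16) entries.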
In the case of estimating multiple chirplets, we have adopted the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) to further refine the estimates of the chirplets, following the framework for estimating superimposed signals with the EM algorithm (Feder and Weinstein 1988). More specifically, assuming the number of chirplets P in the signal is known, the EM algorithm consists of an expectation step (E-step) and a maximization step (M-step). (1) In the E-step, the error between the signal under analysis f and the current estimate is computed as

e = f - \sum_{n=1}^{P} a_{I_n} g_{I_n},    (5)

and the complete data are formed as

y_n = a_{I_n} g_{I_n} + \beta_n e, \quad n = 1, \ldots, P,    (6)

where \sum_{n=1}^{P} \beta_n = 1.

Table 2: The MPEM algorithm.
Step 1. Construct the chirplet dictionary (Table 1)
Step 2. Initialize the residue: P \leftarrow 0, R^P f \leftarrow f
Step 3. Estimate a single (new) chirplet
  3a. Estimate one chirplet from R^P f with the MP algorithm
  3b. Refine with the Newton-Raphson (NR) method
  3c. P \leftarrow P + 1
Step 4. Refine the multiple chirplets with the EM algorithm
  4a. Initialize the iteration counter: i \leftarrow 0
  4b. E-step: e \leftarrow f - \sum_{n=1}^{P} a_{I_n} g_{I_n}; y_n \leftarrow a_{I_n} g_{I_n} + \beta_n^{(i)} e
  4c. M-step: update a_{I_n} and g_{I_n} in y_n with MP + NR
  4d. i \leftarrow i + 1
  4e. Go to Step 4b if the stopping criteria are not met
Step 5. Update R^P f \leftarrow f - \sum_{n=1}^{P} a_{I_n} g_{I_n}
Step 6. Stop MPEM if the criteria are met; otherwise go to Step 3

(2) In the M-step, the same algorithm employed in the estimation of a single chirplet is applied to each of the y_n to refine the estimate of the corresponding chirplet. To alleviate the computational cost, however, we follow the procedure proposed by O'Neill et al. (Jeffrey C. O'Neill, Patrick Flandrin, and Karl 2000) and update only one chirplet at each EM iteration by defining \beta_n for the i-th iteration as \beta_n^{(i)} = \delta(n \bmod i), where \delta(\cdot) designates the Kronecker delta function. The EM algorithm may be repeated until the change of the error in Eq. (5) falls below a threshold, or the number of iterations reaches a predefined level.
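The E-step/M-step loop of Eqs. (5) and (6) can be sketched numerically as follows. This is a deliberate simplification of the paper's MPEM scheme: the atoms are held fixed and only the complex coefficients are re-estimated by projection (the full algorithm also re-fits the chirplet parameters with MP plus Newton-Raphson), and one component receives all of the residual error per pass:

```python
import numpy as np

def em_refine_coeffs(f, atoms, coeffs, n_iter=10):
    """Refine the coefficients of a fixed set of unit-norm atoms.
    E-step: compute the error e = f - sum_n a_n g_n (Eq. (5)) and
    form the complete data y_n = a_n g_n + beta_n e (Eq. (6)),
    assigning all error to one component per pass (beta is a
    Kronecker delta).  M-step: re-estimate that coefficient by
    projecting the complete data onto its atom."""
    coeffs = list(coeffs)
    P = len(atoms)
    for i in range(n_iter):
        n = i % P                                            # one chirplet per pass
        e = f - sum(a * g for a, g in zip(coeffs, atoms))    # E-step, Eq. (5)
        y = coeffs[n] * atoms[n] + e                         # complete data, Eq. (6)
        coeffs[n] = np.vdot(atoms[n], y)                     # M-step: <g_n, y_n>
    return coeffs
```

With orthonormal atoms, the loop recovers the true coefficients of a two-component mixture in two passes.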
We would like to point out that the \beta_n's may take values other than the ones suggested here, as long as they satisfy the constraint stated in Eq. (7). As with all applications of the EM algorithm, the selection of the complete data is crucial to the performance of the specific algorithm. Since the \beta_n's determine the complete data in our algorithm, they may influence its convergence and may possibly be used to avoid convergence to an unwanted local stationary point. Moreover, the re-estimation within each y_n may be computed in parallel whenever a parallel computing architecture is available. These considerations are currently under investigation. Finally, we summarize the MPEM algorithm in Table 2. The results shown in Fig. 4 demonstrate the performance of the technique (cf. MultiDecompChirplet.m in the code). In this analysis, we show the decomposition of a complex signal into a number of Gaussian chirplets with the MPEM algorithm. The adaptive chirplet spectrogram (ACS), which is a direct sum of the Wigner-Ville distributions of the individual chirplets, clearly shows the time-frequency structures of the signal. Note the large error in representing the delta function 'e'. This is because the delta function \delta(t) is not included in the dictionary, so the spike-like structure is approximated by a Gaussian chirplet with a small time-spread. The error in representing the saw-tooth wave structure 'b' is also larger than that of the sinusoid structure 'a', because Gaussian chirplets approximate a sinusoid better than a saw-tooth function. We would like to emphasize that the chirping structure 'g' cannot be represented efficiently using Gabor logons alone (cf. Jie Cui 2006, p. 41). In practice, a critical point of the analysis is determining the number of chirplets required to sufficiently represent the signal. One method is to employ the coherent coefficients (cc) (S. G. Mallat and Z.
Zhang 1993) of the extracted chirplets, defined as the ratio of the energy of the projection to the energy of the residue. The more coherent a signal is with respect to the dictionary, the larger the cc values are; a small cc value therefore indicates low correlation between the signal and the dictionary. A threshold on the cc value can be chosen as a stopping criterion:

cc_n = \frac{|a_{I_n}|^2}{\| R^n f \|^2}, \quad n = 0, \ldots, P - 1,    (8)

where |a_{I_n}|^2 is the energy of the projection and \| R^n f \|^2 is the energy of the residue.

Robustness in low SNR

As discussed above, biological signals are generally collected under low-SNR conditions, so the effectiveness of the proposed method in estimating signals at low SNR is an important practical consideration. In this section, we quantify the robustness of the MPEM algorithm under different levels of SNR by comparing it with an algorithm built on the framework of maximum likelihood estimation (MLE) (J. C. O'Neill and P. Flandrin 1998). The MLE algorithm also adopts the "coarse-refinement" scheme: it first estimates the duration and the frequency-center of a chirplet and then, from these local estimates, refines the estimates of the chirp-rate and the time-center. We chose this algorithm for comparison because it has been widely cited in the literature, has a relatively low computational cost, and possesses apparent robustness in high noise (as the MLE algorithm avoids derivatives). The relative robustness against noise is indicated by the Robustness Index (RI), a function of the squared errors of the two algorithms:

R = \frac{E_l - E_p}{E_l + E_p},    (9)

where E_l and E_p are the squared errors of the MLE and MPEM algorithms, respectively. To calculate the squared error, we synthesized a simulation signal and embedded it in different levels of Gaussian white noise.
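The coherent coefficient of Eq. (8) and the Robustness Index of Eq. (9) are simple ratios; a minimal sketch (function names are our own, not from the paper's code):

```python
import numpy as np

def coherent_coefficient(a_n, residue):
    """cc_n = |a_n|^2 / ||R^n f||^2, Eq. (8): energy of the projection
    relative to the energy of the residue it was extracted from."""
    return abs(a_n) ** 2 / np.sum(np.abs(residue) ** 2)

def robustness_index(err_mle, err_mpem):
    """R = (E_l - E_p) / (E_l + E_p), Eq. (9).  R is confined to
    [-1, 1]; R > 0 means the MPEM squared error is the smaller one,
    i.e. MPEM is the more robust of the two algorithms."""
    return (err_mle - err_mpem) / (err_mle + err_mpem)
```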
The simulation signal consists of an upward chirplet I_u = (N/2 + 1, \pi/2, \pi/N, N/3) and a downward chirplet I_d = (N/2 + 1, \pi/2, -\pi/N, N/3), where N is the signal length and the amplitudes of both chirplets are set to one (Fig. 2; cf. noise_robustness_exp.m in the code). These two chirplet components share the same time and frequency centers, albeit with opposite chirp-rates. Signals with such a "deep crossed" time-frequency structure are reportedly difficult for some coarse-refinement algorithms (Lyu and He 2015). The chirplet components and their corresponding reconstructed signals were estimated from the noisy signal using the two algorithms. The squared error was then obtained from the difference between the reconstructed signal and the clean synthesized signal at each testing SNR level. Fig. 2 presents a typical example of multiple-chirplet decomposition of a noisy signal with the MPEM and MLE algorithms. We can see that MLE incurs larger errors than the MPEM algorithm at most time points. Notice that the RI lies between -1 and 1, i.e., |R| \le 1, by Eq. (9). If the error of the MLE algorithm is higher than that of the MPEM algorithm (i.e., E_l > E_p), the RI will be greater than zero. Thus, the higher the RI, the larger the squared error of the MLE algorithm (E_l) relative to that of the MPEM algorithm (E_p), meaning that MPEM is the more robust of the two. We computed the statistics of the RI using the Bootstrap resampling technique (Efron and Tibshirani 1993) as follows. First, the noisy signal was created by adding Gaussian white noise at six SNR levels, i.e., -30, -20, -10, 0, 10 and 20 dB, as biosignals are usually collected within these SNR ranges. Second, at each testing SNR, 100 RIs were calculated. Finally, the mean and the 95% confidence interval were estimated by resampling the RIs 1000 times with the Bootstrap approach. The results are presented in Fig. 3 (cf. noise_robustness_analysis.m in the code).
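The bootstrap step described above can be sketched as follows (generic percentile-bootstrap code, not from the paper; the test numbers are illustrative):

```python
import numpy as np

def bootstrap_mean_ci(samples, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap mean and (1 - alpha) confidence interval:
    resample with replacement n_boot times, take the mean of each
    resample, and read off the alpha/2 and 1 - alpha/2 percentiles."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    boot_means = np.array([
        rng.choice(samples, size=len(samples), replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return samples.mean(), (lo, hi)
```

Applied to the 100 RI values at one SNR level, this yields the dot (mean) and error bar (95% interval) plotted in Fig. 3.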
The dots indicate the mean values of the RIs, and the error bars the confidence intervals. The RIs between the testing SNR levels are indicated by a smoothed curve obtained with shape-preserving piecewise cubic interpolation, assuming a smooth change of the RI as a function of SNR. We can see that, as the SNR decreases, the RIs increase significantly above zero, indicating that the robustness of MPEM exceeds that of MLE in stronger noise. However, the change of the RI is not monotonic but peaks around SNR = 0 dB. That is, the relative performance gain of the MPEM algorithm is strongest when the energies of signal and noise are roughly equal. At the higher SNR levels, i.e., 10 and 20 dB, the RIs are close to zero, indicating that when the noise is low, the performance of the two algorithms is approximately equal. We also performed the Wilcoxon rank-sum test with the null hypothesis that the RI is not significantly higher than zero (a right-tailed hypothesis test). The null hypothesis was rejected at all testing SNR levels (p < 10^{-4}), which indicates that the robustness of the MPEM algorithm is consistently higher than that of the MLE algorithm. We should point out, however, that the superior performance of MPEM against noise comes at a cost: the average running time of the MPEM algorithm is significantly higher than that of the MLE algorithm. To extract the two chirplets of the simulation signal, MPEM requires ~5.46 times as long as the MLE method (tested on an iMac (Late 2014) equipped with a 3.5 GHz Intel Core i5 processor, 24 GB 1600 MHz DDR3 memory and an AMD Radeon R9 M290X graphics card).

Applications to biosignal analysis

In this section we demonstrate the capability of our method by applying it to a set of selected biosignals. Most biological systems are accompanied by, or manifest themselves as, signals that reflect the nature of their normal or abnormal processes.
However, biosignals are notorious for their variability, or non-stationarity, and thus joint time-frequency analysis plays a major role in biosignal processing. Given the abundance of chirping phenomena in biological systems, the adaptive chirplet transform (ACT), as a newly emerged tool of time-frequency analysis, has potential applications in the analysis of signals that involve frequency-changing components. Here we show the efficiency of the MPEM algorithm in the analysis of some representative data from two broad categories of biosignals: bioelectrical signals (e.g., visual evoked potentials) and bioacoustical signals (e.g., heart sounds, bat echolocation signals, bird songs and human speech).

Apply ACT to visual evoked potentials

Visual evoked potentials (VEPs) are scalp electrical signals generated in response to rapid and repetitive visual stimuli. VEPs have prominent clinical significance and can help diagnose sensory dysfunctions. They are traditionally employed in testing the integrity of the visual pathway and used as a supplement to other techniques in research into specific clinical conditions.
A variety of clinical applications require the analysis of steady-state visual evoked potentials (ssVEPs), which are elicited when the repetition rate of the visual stimuli is sufficiently high (Regan 1989). We describe an application of the ACT to the analysis of ssVEP signals. The goal of this approach is to characterize the time-dependent behavior of the VEP, from its initial transient portion to its steady-state portion, by a series of time-frequency atoms, or chirplet basis functions. The ACT technique allows us to clearly visualize, perhaps for the first time, the early moments of a VEP response. The VEPs were obtained through experiments involving a matrix of moving bars aligned to visual fixation crosses; the details are described elsewhere (J. Cui and Wong 2006b). Briefly, an averaged signal was obtained from 50 trials of a single subject. All data were collected in accordance with the protocol of human experimentation of the University of Toronto, Ontario, Canada. Ten Gaussian chirplets were estimated with the algorithm described in Section 2.2. We believe that this number of chirplets is sufficient for representing the VEP of interest, as the residue after ten iterations was virtually indistinguishable from white noise. In general, the chirplets extracted first have higher amplitudes and higher cc values. In particular, we found that the amplitudes of the first three chirplets were significantly higher than those of the remaining chirplets. As shown in Fig. 5, the first chirplet, c_1, represents the steady-state component of the VEP signal, as it has a long time-spread (\Delta_t) and a near-zero chirp rate. The other two chirplets, c_2 and c_3, have negative chirp rates, indicating that the instantaneous frequency of the prominent early components decreased with time. Fig. 5 also shows the visualization of the results using the ACS and compares it to the conventional spectrogram calculated with the STFT.
Panel (B) shows the ACS of the ten chirplets, accompanied by the reconstructed signal directly below and the spectrum on the left. It can be seen that the reconstructed signal provides a less noisy waveform. Chirplets c_1 - c_3 are shown separately in Panel (C), a typical representation of ssVEPs. In Panel (D) the signal was approximated with traditional Gabor logons. The transient portion was represented by four Gabor logons (g_2 - g_5) instead of two chirplets (c_2 and c_3) as in Panel (C), which demonstrates the efficiency of the chirplet representation. By adopting the ACT approach, we can thus achieve a very sparse representation of the original VEP signal. Furthermore, the conventional information of EP analysis, such as amplitude and latency, has been retained and can be readily retrieved from the reconstructed signal. The ACS allows us to visualize the time-frequency structure of the VEP response at a higher resolution than previously possible. Spectrograms constructed with the STFT invariably involve smoothing of some sort, yielding an overall lower-resolution picture. Although the spectrogram can show some of the salient time-frequency structures of the VEP response, most of the detail is lost due to smearing. As we have shown with the ACS, e.g., in Panel (C) of Fig. 5, the resulting time-frequency decomposition provides a clear picture of the underlying process. The estimated parameters obtained from the decomposition provide detailed information about the local time-frequency structures of the signal that is not easily obtainable from the standard spectrogram alone.

Apply ACT to heart sounds

The phonocardiogram (PCG), or heart sounds, is perhaps the most traditional biomedical signal, as indicated by the fact that the stethoscope is the primary instrument carried by physicians.
The normal heart sounds provide an indication of the general state of the heart, while cardiovascular disease and defects cause changes or additional sounds and murmurs that can be useful in diagnosis. In a normal cardiac cycle one may hear two distinctive sounds: the first (S1) and the second (S2) heart sound. The epoch of S1 is directly related to the event of ventricular contraction, reflecting a sequence of events related to the closure of the cardiac valves and the ejection of blood from the ventricles. The epoch of S2 reflects a series of events related to the end of ventricular contraction, signified by the closure of the aortic and pulmonary valves. The frequency content of heart sounds has a long history in diagnosis and is believed to be of significant value in the evaluation of the heart's condition. As heart sounds are non-stationary in nature, more recent research has adopted time-frequency analysis in order to capture the temporal variation in the heart sound signals. The analysis of heart sounds by the MP approach has been proposed before (X. Zhang et al. 1998). However, the dictionary employed in that study was composed of Gabor logons, so the frequency-changing components could not be represented efficiently. Here, we demonstrate the capability of the ACT in phonocardiogram analysis. The heart sound signal [2] consists of one cycle of heart beat with a duration of 700 ms. The original signal was sampled at 8 kHz. Since most of the diagnostically relevant components of heart sounds are below 300 Hz, however, we further downsampled the signal to 800 Hz, resulting in a signal of 700 sample points. The results are shown in Fig. 6. Panel (A) shows the spectrogram (estimated by STFT) of the heart sound signal. The waveform of the signal in the time domain is shown immediately below, and its spectrum in the frequency domain is shown on the left.
The first (S1) and second (S2) heart sounds are represented by two distinctive energy blobs in the spectrogram, but their detailed time-frequency structures are not readily appreciated. Panel (B), on the other hand, displays the ACS of the four major chirplets extracted from the signal. We can see that the dominant components, i.e., c_1 of S1 and c_2 of S2, show positive chirp-rates, indicating an increase of frequency within the beat. This information was clearly available with neither the conventional spectrogram nor the previous approach using Gabor logons (X. Zhang et al. 1998).

Apply ACT to bio-acoustical signals

As mentioned in Section 1 (Introduction), bio-acoustical signals are full of chirping components. Two examples of chirplet representations are shown in Fig. 7. The signals are a bat echo signal and a bird whistle of an American Robin [3], which are in the ultrasonic range and the audible frequency range, respectively. The bat signal was sampled at 7 µs intervals for a duration of 2.8 ms, and the bird song was sampled at 8 kHz for a duration of 3 s.

Apply ACT to speech signal

As a novel tool of time-frequency analysis, the ACT has potential applications in the analysis of human speech. Fig. 8 shows a chirplet analysis of a female utterance of the word "Matlab" [4]. The signal length is 2000 points sampled at 3.71 kHz. 60 chirplets were extracted, and a reconstructed speech signal was obtained from these chirplets. There is minimal perceptual difference between the original and the reconstructed signals. The waveforms of the original and the reconstructed signals are shown in Panel (A) of Fig. 8. Panel (B) displays the STFT of the original signal. Panel (C) shows the ACS of the extracted chirplet components, and the STFT of the reconstructed signal is shown in Panel (D). Recall that one chirplet can be described by six real-valued parameters, i.e., two for the complex coefficient in Eq. (2) and four for the chirplet parameters I = (t_c, \omega_c, c, \Delta_t) defined in Eq. (1).
Therefore, the 60 chirplets required only 360 real values for the entire data recording, an approximately 80% reduction in data size compared with the 2000 real values of the original signal.

[2] The original signal is available through the Department of Medicine at the University of Washington (https://depts.washington.edu/physdx/heart/demo.html).
[3] Courtesy of C. Condon, K. White, and A. Feng of the Beckman Center at the University of Illinois.
[4] The sound file was included in MATLAB 7 R14.

Discussion

Biosignals are non-stationary in nature, and the phenomenon of varying frequency exists abundantly in biological systems. The ACT, which in essence approximates the energy of a signal on the time-frequency plane by straight lines, is considerably promising for compactly representing biosignals. We would like to emphasize that despite the fact that the chirplet transform was proposed more than 25 years ago and that the ACT is well accepted in the signal processing community (Mann and Haykin 1992; Baraniuk and Jones 1993), the application of the ACT to the field of biomedical engineering is still relatively limited. A search with Google Scholar so far yields only a double-digit number of studies related to the application of chirplet analysis to biosignals. One possible reason is that researchers in the field have not fully appreciated the merits of the ACT for the analysis and representation of non-stationary signals of biological origin. In this article, we have introduced a new approach to the ACT, namely the MPEM algorithm, in which the coarse estimates of the chirplet parameters are obtained with the chirplet MP algorithm and, subsequently, the parameters are refined with the EM algorithm. We have demonstrated the capacity of our algorithm by applying it to four representative, highly non-stationary biosignals: visual evoked potentials, heart sounds, bio-acoustic signals (an ultrasonic bat echolocation signal and audible bird chirps), and human speech. These results demonstrate the value of sparse representation with a chirplet basis.
Signal information is diluted less and is packed into a few coefficients of high energy, which makes the ACT an appealing option for data compression of long-term signals (e.g., monitoring signals generated by devices in the ICU or an ambulance). Moreover, the ACS provides a clear picture of a signal's energy content, and thus captures the "signature" of the signal in the time-frequency plane, which should be especially useful in pattern recognition problems such as computer-aided diagnosis. One unique feature of our method is that, unlike other chirplet-based MP algorithms that employ a dictionary of Gabor logons for their coarse estimation, our algorithm uses a chirplet dictionary directly. This approach avoids the difficulty of decomposing multiple chirplets whose time-frequency centers cluster closely (the so-called "deep crossed" situation), which is a typical defect of the Gabor-to-chirplet approach (see Section 1; also cf. DecompDeepCrossChirplet.m in the code). Furthermore, the adoption of a chirplet dictionary for the MP algorithm leads to increased robustness against noise, which is relevant to biomedical applications, as biosignals usually have low SNR. This advantage may be understood from the point of view of matched filters. It is known that the best detection of signals in the presence of noise is based on the matched filter output, which is essentially an inner product between the noisy measurements and the target signal (Van Trees 2001). From the discussion in Section 2.2 we know that the Gaussian chirplet transform (GCT) defined in Eq. (2) is the inner product between the signal and the elements of the dictionary. Thus, we can think of the elements of the chirplet dictionary as templates used in the procedure of matched filtering. Since Gabor logons are a subset of chirplets, the chirplet-based MP has more templates with which to match the time-frequency structure of the noisy signal and is thus more robust against noise.
An important issue for future research on the MP approach in general, and on the chirplet-based MP approach in particular, for biomedical applications is the construction of the dictionary. Previous studies have indicated that the choice of elements, or templates, forming an overcomplete set that covers the signal space is crucial for greater robustness in the face of noise and for efficient coding of the biosignals. For example, in an EEG study a generally structured, signal-independent dictionary could lead to biased representations (i.e., artifacts) of sleep spindles (P. J. Durka, Ircha, and Blinowska 2001). Interestingly, humans and animals may adopt similar mechanisms to detect and perceive signals by matching them with biological "templates" (Wong and Barlow 2000), which, acquired through experience, are behaviorally relevant and environmentally adapted. An efficient code is intrinsically entangled with the class of signals being encoded (David J. Field 1987). Existing evidence suggests that the somatosensory (Macfarlane and Graziano 2009), visual (Olshausen and D. J. Field 1996) and auditory (Lewicki 2002) cortices may employ overcomplete basis sets to sparsely code the natural signals impinging on our sensory systems. Inspired by these scientific discoveries, recent work in engineering has explored ways to find a specific dictionary for a given class of signals (Michal Aharon, Michael Elad, and A. M. Bruckstein 2006; Donoho, M. Elad, and Temlyakov 2006), aimed at seeking a unique and stable representation of signals of a specific class in the presence of noise. In particular, the techniques of artificial intelligence and machine learning have increasingly shown promise in this quest (M. Qiu et al. 2010). These techniques are expected to refine the dictionary relevant to the class of signals under investigation, and thus enhance the efficiency of sparse representation (Bengio, Courville, and Vincent 2013).
Summary

In this article, we have introduced a new approach to the adaptive chirplet transform (ACT) and emphasized its merits in the analysis of signals produced by biological systems. Biosignals are non-stationary in nature and are usually measured at low signal-to-noise ratio (SNR). Our method employs a coarse-refinement strategy to alleviate the high computational cost of the chirplet transform. In particular, we adopt the chirplet dictionary directly in the coarse estimation step and the expectation-maximization (EM) algorithm to refine the parameters. The matching-pursuit (MP) algorithm has been used to implement the multi-component extraction of chirplet atoms. We have demonstrated the capability of our approach by applying it to representative biosignals, including bioelectrical signals (visual evoked potentials), bio-acoustical signals (heart sounds, echolocation sounds) and human speech. Our techniques result in a more compact representation of these signals, a clearer visualization of their time-frequency structures, and increased estimation robustness in the face of strong noise, which shows the considerable promise of chirplet analysis for biosignal processing. Finally, we point out in the discussion that the technology developed in the field of machine learning will be crucial in the future for constructing the "dictionary", or "code book", for efficient coding of the biosignals under investigation.

Figure 4: The synthetic signal consists of seven components, namely a = one period of a sinusoid, b = one period of a saw-tooth wave, c and d = sinusoids modulated by a Gaussian, e = a delta function, f = a sinusoid, and g = a Gaussian chirplet (cf. the code provided online for a detailed description of the parameters). Panel (B) shows the original synthetic signal (top), the reconstructed signal obtained from the seven estimated chirplets (middle), and the error, i.e., the difference between the original signal and the reconstructed one (bottom).
Figure 5 caption (continued): Panel (A) shows the spectrogram (short-time Fourier transform with an 11-point Gaussian window) of the VEPs. The waveform of the original VEP signal is presented immediately below the time-frequency representation, while the corresponding spectrum of the signal is presented on the left side. Panel (B) shows the ACS of the 10 Gaussian chirplets of the VEPs estimated with the MP-based adaptive chirplet transform (MPEM algorithm). The signal reconstructed from these 10 chirplets is shown below, with its spectrum on the left. Note the transient phase of the response evoked immediately after stimulus onset, in which the instantaneous frequency decreases from high to low in less than one second, and the steady-state phase in the later portion of the response. This characteristic transition from initial to steady response can be represented concisely by as few as three chirplets, as shown in Panel (C); these three chirplets have the highest correlation coefficients among the estimated ones. As a comparison, Panel (D) displays the time-frequency distribution of Gabor logons of the VEPs. Notice that at least five logons are needed to adequately characterize the transition and steady-state phases of the evoked potentials. The pink vertical line indicates the onset of the visual stimulus.

Figure 6 caption (continued): Panel (A) shows the spectrogram of recorded heart sounds within one cardiac cycle. The first (S1) and second (S2) heart sounds can be clearly identified. A four-chirplet representation of the heart sounds is shown in Panel (B), where each major component is represented by two Gaussian chirplets (c1 and c4 for S1, c2 and c3 for S2). Importantly, since the chirp rates of these chirplets deviate significantly from zero, the chirplet analysis reveals the frequency-changing character of the sound components.

Figure 2: Simulation signal and an example of decomposition with the MPEM and MLE algorithms.
Figure 2 caption (continued): Panel (A) shows the waveforms of the simulated signal, consisting of an upward and a downward chirplet, embedded in strong noise. The top plot is the upward chirplet, with the chirp rate changing from zero to π; the second plot is the downward chirplet, with the chirp rate changing from π to zero; the signal length is 100 points. The third plot is the synthetic, clean signal consisting of the two chirplets. The bottom plot is an instance of the synthetic signal embedded in noise at SNR = 0 dB. Panel (B) shows a typical example of the robustness of the MPEM and MLE algorithms against noise. The top plot displays the original clean signal ("Clean") superimposed with the reconstructed signal ("Recon") estimated by the MPEM algorithm from the noisy signal. The middle plot is the same as the top plot, except that the reconstructed signal is estimated by the MLE algorithm. The bottom plot compares the point-wise squared errors produced by the two algorithms. Note that the MLE algorithm typically induces larger errors than the MPEM algorithm at most time points.

Figure 3: Testing the robustness of the algorithms against different levels of noise using the Robustness Index (RI). The relative robustness of the two algorithms against noise of different levels is measured by the RI (ordinate), Eq. (9), at each tested SNR point (abscissa). The error bars indicate the 95% confidence intervals (the intervals are smaller than the dot sizes at 10 dB and 20 dB). The higher the index, the more robust the MPEM algorithm is relative to the MLE algorithm. The RIs between the tested SNR points are indicated by the smooth curve, obtained with a shape-preserving piecewise interpolation method. Notice that at all test points the robustness of the MPEM algorithm is statistically significantly higher than that of the MLE algorithm (see the text).

Panel (A) shows the results of the analysis of the bat echolocation signal.
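Under assumed conventions for the discrete-time phase (the exact amplitudes and envelopes used in the simulation are not restated here, so this is a sketch rather than a reproduction), the two-chirplet test signal described in the Figure 2 caption, with chirp rates sweeping 0 to π and π to 0 over 100 samples and additive white Gaussian noise at SNR = 0 dB, can be generated as follows:

```python
import numpy as np

n = 100
t = np.arange(n) / n  # normalized time in [0, 1)

# instantaneous frequency in radians/sample:
# upward chirplet sweeps 0 -> pi, downward chirplet sweeps pi -> 0
up = np.cos(0.5 * np.pi * n * t ** 2)            # d(phase)/d(sample) = pi * t
down = np.cos(np.pi * n * t - 0.5 * np.pi * n * t ** 2)
clean = up + down                                # synthetic clean signal

def add_noise(x, snr_db, rng):
    """Add white Gaussian noise so the signal-to-noise ratio is snr_db (dB)."""
    p_sig = np.mean(x ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(p_noise), x.size)

rng = np.random.default_rng(0)
noisy = add_noise(clean, snr_db=0.0, rng=rng)    # SNR = 0 dB instance
```

Repeating the noise draw gives the ensemble of noisy instances over which a robustness index such as the RI of Figure 3 can be averaged.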
The top panel shows the waveform of the bat signal in the time domain, and the middle panel the corresponding spectrogram. The bottom panel illustrates the ACS of the five chirplet atoms estimated with the proposed MPEM algorithm. We can see that the chirplet representation clearly reveals the major time-frequency structures of the ultrasonic signal. Panel (B) demonstrates the capability of the ACT in the audible frequency range. The chirp signal (top plot) of an American Robin was represented by 18 chirplets. Not only can the ACS (bottom plot) provide a clearer visualization of the energy content of the bird song, but the reconstructed signal also produces a perceptually cleaner sound. These examples demonstrate the value of compact representations using chirplets.

Figure 4: Decomposition of the simulated signal. Panel (A) illustrates the simulated signal (waveforms at the lower portion of the panel) and the ACS of the estimated chirplets (upper portion of the panel).

Figure 5: Time-frequency analysis of visual evoked potentials (VEPs).

Figure 6: Time-frequency analysis of heart sounds.

Figure 7: Chirplet representation of bio-acoustical signals. Panel (A) demonstrates the application of the MPEM ACT to the analysis of an ultrasonic biosignal. The top plot shows the time-domain waveform of an echolocation signal of a large brown bat (sampling frequency ≈ 140 kHz). The middle image is the spectrogram of the signal (calculated with a 0.45 ms Gaussian window), and the bottom image the adaptive chirplet spectrum (ACS) representation (consisting of five chirplets). Panel (B) shows another demonstration by analyzing an audible sound, the chirping song of an American Robin. The top plot is the waveform of the bird song in the time domain (sampled at 8 kHz), the middle one the spectrogram (with an 8 ms Gaussian window), and the bottom one the ACS represented by 20 chirplets.

Figure 8: Chirplet representation and compression of a speech signal.
Panel (A) displays the waveform of an acoustic speech signal, the spoken word "Matlab"; the top plot shows the original signal and the bottom one the signal reconstructed from the 60 estimated chirplet components. The spectrogram (STFT) of the original signal is shown in Panel (B), and the ACS of the speech signal, represented by 60 chirplets, in Panel (C). As a comparison, Panel (D) shows the spectrogram of the reconstructed signal.

Table 1: Discretization of chirplet parameters in dictionary construction.

Symbol  Value         Description
N       -             Signal size (number of samples)
i0      1 (default)   The first level to chirp/rotate logons
a       2 (default)   Radix of scales

Table 2: The MPEM algorithm.

For the latest details of the GPLv3 license, please refer to http://www.gnu.org.

References

Aharon, M., M. Elad, and A. Bruckstein (2006). "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation". In: IEEE Transactions on Signal Processing 54.11, pp. 4311-4322. DOI: 10.1109/TSP.2006.881199.

Aharon, Michal, Michael Elad, and Alfred M. Bruckstein (2006). "On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them". In: Linear Algebra and its Applications 416.1, pp. 48-67. DOI: 10.1016/j.laa.2005.06.035.

Bahoura, M. and Y. Simard (2008). "Chirplet transform applied to simulated and real blue whale (Balaenoptera musculus) calls". In: Image and Signal Processing 5099, pp. 296-303.
Bahoura, Mohammed and Yvan Simard (2012). "Serial combination of multiple classifiers for automatic blue whale calls recognition". In: Expert Systems with Applications 39.11, pp. 9986-9993. DOI: 10.1016/j.eswa.2012.01.156.

Baraniuk, R. G. and D. L. Jones (1993). "Warped wavelet bases: unitary equivalence and signal processing". In: IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 3, pp. 320-323. DOI: 10.1109/ICASSP.1993.319500.

Bengio, Y., A. Courville, and P. Vincent (2013). "Representation Learning: A Review and New Perspectives". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 35.8, pp. 1798-1828. DOI: 10.1109/TPAMI.2013.50.

Boashash, Boualem and H. J. Whitehouse (1986). "Seismic applications of the Wigner-Ville distribution". In: Proceedings of the International Conference on Systems and Circuits. IEEE Circuits and Systems Society, pp. 34-37.

Bultan, A. (1999). "A four-parameter atomic decomposition of chirplets".
In: IEEE Transactions on Signal Processing 47.3, pp. 731-745.

Candes, E. J., P. R. Charlton, and H. Helgason (2008). "Gravitational wave detection using multiscale chirplets". In: Classical and Quantum Gravity 25.18, pp. 1-12. DOI: 10.1088/0264-9381/25/18/184020.

Carvalho, Diego Z. et al. (2014). "Loss of sleep spindle frequency deceleration in Obstructive Sleep Apnea". In: Clinical Neurophysiology 125.2, pp. 306-312. DOI: 10.1016/j.clinph.2013.07.005.

Cohen, Leon (1995). Time-Frequency Analysis. Prentice-Hall Signal Processing Series. Englewood Cliffs, NJ: Prentice Hall PTR.

Cui, J. and W. Wong (2006a). "Optimal window length in the windowed adaptive chirplet analysis of visual evoked potentials". In: The 28th IEEE EMBS Annual International Conference. IEEE, pp. 4580-4583.

Cui, J. and W. Wong (2006b). "The adaptive chirplet transform and visual evoked potentials". In: IEEE Transactions on Biomedical Engineering 53.7, pp. 1378-1384.

Cui, J. and W. Wong (2008).
"Investigation of short-term changes in visual evoked potentials with windowed adaptive chirplet transform". In: IEEE Transactions on Biomedical Engineering 55.4, pp. 1449-1454. DOI: 10.1109/TBME.2008.918439.

Cui, Jie (2006). "Adaptive chirplet transform for the analysis of visual evoked potentials". PhD dissertation. Toronto, Ontario, Canada: Institute of Biomaterials and Biomedical Engineering, University of Toronto.

Davis, G., S. Mallat, and M. Avellaneda (1997). "Adaptive greedy approximations". In: Constructive Approximation 13.1, pp. 57-98.

Dempster, A. P., N. M. Laird, and D. B. Rubin (1977). "Maximum likelihood from incomplete data via the EM algorithm". In: Journal of the Royal Statistical Society, Series B 39.1, pp. 1-38.

Donoho, D. L., M. Elad, and V. N. Temlyakov (2006). "Stable recovery of sparse overcomplete representations in the presence of noise". In: IEEE Transactions on Information Theory 52.1, pp. 6-18. DOI: 10.1109/TIT.2005.860430.

Doser, A. B. and M. E. Dunham (1997). "Transionospheric signal detection with chirped wavelets".
In: Conference Record of the Thirty-First Asilomar Conference on Signals, Systems and Computers. Vol. 2, pp. 1499-1503. DOI: 10.1109/ACSSC.1997.679154.

Dugnol, B. et al. (2008). "On a chirplet transform-based method applied to separating and counting wolf howls". In: Signal Processing 88.7, pp. 1817-1826. DOI: 10.1016/j.sigpro.2008.01.018.

Durka, P. J., D. Ircha, and K. J. Blinowska (2001). "Stochastic time-frequency dictionaries for matching pursuit". In: IEEE Transactions on Signal Processing 49.3, pp. 507-510. DOI: 10.1109/78.905866.

Efron, Bradley and Robert Tibshirani (1993). An Introduction to the Bootstrap. Monographs on Statistics and Applied Probability. New York: Chapman & Hall.

Feder, M. and E. Weinstein (1988). "Parameter estimation of superimposed signals using the EM algorithm". In: IEEE Transactions on Acoustics, Speech, and Signal Processing 36.4, pp. 477-489. DOI: 10.1109/29.1552.

Field, David J. (1987). "Relations between the statistics of natural images and the response properties of cortical cells". In: JOSA A 4.12, pp. 2379-2394.
DOI: 10.1364/JOSAA.4.002379.

Flandrin, Patrick (1999). Time-Frequency/Time-Scale Analysis. Wavelet Analysis and Its Applications. San Diego, CA: Academic Press.

Gabor, D. (1946). "Theory of communication". In: Journal of the IEE 93.26, pp. 429-457.

Glotin, Herve, Julien Ricard, and Randall Balestriero (2016). "Fast Chirplet Transform feeding CNN, application to orca and bird bioacoustics". In: 29th Conference on Neural Information Processing Systems (NIPS 2016). Barcelona, Spain.

Gribonval, R. (2001). "Fast matching pursuit with a multiscale dictionary of Gaussian chirps". In: IEEE Transactions on Signal Processing 49.5, pp. 994-1001.

Guo, Q. J., H. B. Yu, and J. T. Hu (2006). "Fault feature extraction by using adaptive chirplet transform". In: WCICA 2006: Sixth World Congress on Intelligent Control and Automation. Vol. 1, pp. 5643-5647.

Jenet, F. A. and T. A. Prince (2000). "Detection of variable frequency signals using a fast chirp transform". In: Physical Review D 62.12, pp. 120001-122010. DOI: 10.1103/PhysRevD.62.122001.
Kepesi, M. and L. Weruaga (2006). "Adaptive chirp-based time-frequency analysis of speech signals". In: Speech Communication 48.5, pp. 474-492. DOI: 10.1016/j.specom.2005.08.004.

Kuś, Rafał, Piotr Tadeusz Różański, and Piotr Jerzy Durka (2013). "Multivariate matching pursuit in optimal Gabor dictionaries: theory and software with interface for EEG/MEG via Svarog". In: BioMedical Engineering OnLine 12, p. 94. DOI: 10.1186/1475-925X-12-94.

Lewicki, Michael S. (2002). "Efficient coding of natural sounds". In: Nature Neuroscience 5.4, pp. 356-363. DOI: 10.1038/nn831.

Lu, Y. et al. (2005). "Chirplet transform for ultrasonic signal analysis and NDE applications". In: IEEE Ultrasonics Symposium, pp. 536-539. DOI: 10.1109/ULTSYM.2005.1602909.

Lyu, Guizhou and Qiang He (2015). "Maximum matching initial selection for adaptive Gaussian chirplet decomposition". In: Seventh International Conference on Digital Image Processing (ICDIP 2015). Vol. 9631. SPIE, pp. 1-5. DOI: 10.1117/12.2197095.
Macfarlane, Nicholas B. W. and Michael S. A. Graziano (2009). "Diversity of grip in Macaca mulatta". In: Experimental Brain Research 197.3, pp. 255-268. DOI: 10.1007/s00221-009-1909-z.

Mallat, S. G. and Z. Zhang (1993). "Matching pursuit with time-frequency dictionaries". In: IEEE Transactions on Signal Processing 41.12, pp. 3397-3415.

Mann, S. and S. Haykin (1991). "The chirplet transform: A generalization of Gabor's logon transform". In: Vision Interface '91. Calgary, Canada: Canadian Image Processing and Pattern Recognition Society, pp. 205-212.

Mann, S. and S. Haykin (1992). "Adaptive chirplet transform: An adaptive generalization of the wavelet transform". In: Optical Engineering 31.6, pp. 1243-1256.

Mann, S. and S. Haykin (1995). "The chirplet transform: physical considerations". In: IEEE Transactions on Signal Processing 43.11, pp. 2745-2761.

Mercado, E., C. E. Myers, and M. A. Gluck (2000). "Modeling auditory cortical processing as an adaptive chirplet transform". In: Neurocomputing 32, pp. 913-919.

Olshausen, B. A. and D. J. Field (1996). "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". In: Nature 381.6583, pp. 607-609. DOI: 10.1038/381607a0.
O'Neill, J. C. and P. Flandrin (1998). "Chirp hunting". In: Proceedings of the IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, pp. 425-428. DOI: 10.1109/TFSA.1998.721452.

O'Neill, Jeffrey C., Patrick Flandrin, and William C. Karl (2000). Sparse Representations with Chirplets via Maximum Likelihood Estimation. Report. URL: http://tfd.sourceforge.net/.

Qian, S. and D. P. Chen (1994). "Signal Representation Using Adaptive Normalized Gaussian Functions". In: Signal Processing 36.1, pp. 1-11.

Qian, Shie, Mark E. Dunham, and Matthew J. Freeman (1995). "Transionospheric signal recognition by joint time-frequency representation". In: Radio Science 30.6, pp. 1817-1829. DOI: 10.1029/95RS01527.

Qiu, J. W. et al. (2010).
"Consistent sparse representations of EEG ERP and ICA components based on wavelet and chirplet dictionaries". In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4014-4019. DOI: 10.1109/IEMBS.2010.5627995.

Regan, D. (1989). Human Brain Electrophysiology: Evoked Potentials and Evoked Magnetic Fields in Science and Medicine. New York: Elsevier Science Publishers.

Sanei, Saeid and Jonathon Chambers (2007). EEG Signal Processing. Chichester; Hoboken, NJ: John Wiley & Sons.

Schönwald, Suzana V. et al. (2011). "Quantifying chirp in sleep spindles". In: Journal of Neuroscience Methods 197.1, pp. 158-164. DOI: 10.1016/j.jneumeth.2011.01.025.

Shaik, B. S., G. V. S. S. K. R. Naganjaneyulu, and A. V. Narasimhadhan (2015). "A novel approach for QRS delineation in ECG signal based on chirplet transform". In: IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), pp. 1-5. DOI: 10.1109/CONECCT.2015.7383914.

Van Trees, Harry L. (2001). Detection, Estimation, and Modulation Theory. New York: Wiley.
Wang, G. et al. (2003). "Manoeuvring target detection in over-the-horizon radar using adaptive clutter rejection and adaptive chirplet transform". In: IEE Proceedings - Radar, Sonar and Navigation 150.4, pp. 292-298. DOI: 10.1049/ip-rsn:20030700.

Wong, W. and H. Barlow (2000). "Pattern recognition: Tunes and templates". In: Nature 404.6781, pp. 952-953.

Yin, Q. Y., S. Qian, and A. G. Feng (2002). "A fast refinement for adaptive Gaussian chirplet decomposition". In: IEEE Transactions on Signal Processing 50.6, pp. 1298-1306.

Zhang, Xuan et al. (1998). "Time-frequency scaling transformation of the phonocardiogram based on the matching pursuit method". In: IEEE Transactions on Biomedical Engineering 45.8, pp. 972-979. DOI: 10.1109/10.704866.
Code available at: https://github.com/jiecui/mpact
Antiferromagnetic noise correlations in optical lattices

G. M. Bruun (Niels Bohr International Academy, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark, and Mathematical Physics, Lund Institute of Technology, P.O. Box 118, SE-22100 Lund, Sweden), O. F. Syljuåsen (Department of Physics, University of Oslo, P.O. Box 1048 Blindern, N-0316 Oslo, Norway), K. G. L. Pedersen (Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark), B. M. Andersen (Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark), E. Demler (Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA), and A. S. Sørensen (Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark)

(Dated: July 3, 2009)
arXiv:0907.0652v1 [cond-mat.quant-gas]. DOI: 10.1103/PhysRevA.80.033622. PACS numbers: 05.40.Ca, 37.10.Jk, 75.40.Cx, 75.50.Ee

Abstract: We analyze how noise correlations probed by time-of-flight (TOF) experiments reveal antiferromagnetic (AF) correlations of fermionic atoms in two-dimensional (2D) and three-dimensional (3D) optical lattices. Combining analytical and quantum Monte Carlo (QMC) calculations using experimentally realistic parameters, we show that AF correlations can be detected for temperatures above and below the critical temperature for AF ordering. It is demonstrated that spin-resolved noise correlations yield important information about the spin ordering. Finally, we show how to extract the spin correlation length and the related critical exponent of the AF transition from the noise.

Atoms in optical lattices hold the potential to unravel the fundamental physics of phenomena related to quantum systems in periodic potentials, including spin phases and high-Tc superconductors [1]. One has already observed the Mott insulator transition for bosons [2], the emergence of superexchange interactions [3], the transition between metallic, band-insulator and Mott phases for fermions [4, 5], and fermionic pairing [6].
In addition to creating these lattice systems at sufficiently low temperatures T, a major challenge is how to detect the various phases predicted theoretically. These phases can be investigated via higher-order correlation functions, which only show up as quantum noise in most experiments. Quantum spin noise spectroscopy [7, 8] and the measurement of correlations of the momentum distribution of the atoms after release from the lattice (TOF experiments) are two ways to probe these correlation functions [9]. TOF experiments have already been used to detect pairing correlations in a Fermi gas [10], bosonic bunching and fermionic anti-bunching of atoms in optical lattices [11], the Mott-superfluid transition for bosons [12], and the effects of disorder in the Mott phase [13]. Here, we show how AF correlations of fermionic atoms in optical lattices give rise to distinct measurable signals in TOF experiments, even above the critical temperature for magnetic ordering. Spin-resolved experiments are demonstrated to yield additional information which can be used to identify the magnetic ordering and the broken symmetry axis. The main results are illustrated in Figs. 1-2, which show noise correlations in the momentum distributions after expansion as a function of temperature and momentum. We finally discuss how the spin correlation length and the related critical exponent ν can be extracted experimentally from the noise.

We consider a two-component Fermi gas in an optical lattice of size N = N_x N_y N_z. In the limit of strong repulsion, the gas is in the Mott phase at half-filling for low T and can be described by the Heisenberg model

H = J \sum_{\langle l,m \rangle} [\hat{s}^x_l \hat{s}^x_m + \hat{s}^y_l \hat{s}^y_m + (1+\Delta)\,\hat{s}^z_l \hat{s}^z_m].   (1)

Here \hat{s}_l is the spin-1/2 operator for the atom at site r_l, and \langle l,m \rangle denotes neighboring pairs. The interaction is J = 4 t_\uparrow t_\downarrow / U, with U > 0 the on-site repulsion between atoms and t_\sigma the spin-dependent tunneling matrix element.
The anisotropy parameter is ∆ = −(t ↑ − t ↓ ) 2 /(t 2 ↑ + t 2 ↓ ) . Below we consider both cubic (3D) and square (2D) lattices with lattice constant a of unity. We do not include any effects of a trapping potential. A major experimental goal is to detect the onset of AF correlations with decreasing T .TOF experiments probe correlation functions of the form [11] C AB (r − r ′ ) = Â (r)B(r ′ ) − Â (r) B (r ′ ) Â (r) B (r ′ ) ,(2) whereÂ(r) andB(r) are atomic observables measured at r after expansion. In Refs. 10, 11, the atomic density correlations C nn (d) were measured withÂ(r) = B(r) = σn σ (r) the density operator of the atoms. Heren σ (r) =ψ † σ (r)ψ σ (r) withψ σ (r) the field operator for atoms in spin state σ. We also consider the correlations between images of each of the spin components C (d) corresponding toÂ(r) =n ↑ (r) andB(r) = n ↓ (r). This may be achieved by, e.g., spatially separating the two atomic spin states using a Stern-Gerlach technique [14]. By applying a π/2 pulse before the expansion one can also gain access to the spin noise perpendicular to the z component C ⊥ (d) corresponding toÂ(r) = exp(iŝ y π/2)n ↑ (r) exp(−iŝ y π/2) andB(r) = exp(iŝ y π/2)n ↓ (r) exp(−iŝ y π/2). After normal ordering and expansion of the field oper-ators in the lowest band Wannier states, we obtain C nn (k) = 1 N − 1 2 δ k,K − 2 ŝ k ·ŝ −k ,(3)C (k) = 1 2N − ŝ z 0ŝ z 0 − ŝ x kŝ x −k − ŝ y kŝ y −k , (4) C ⊥ (k) = 1 2N − ŝ x 0ŝ x 0 − ŝ y kŝ y −k − ŝ z kŝ z −k ,(5) where K is a reciprocal lattice vector, andŝ k = N −1 lŝ l e −ik·r l . We assume free expansion of the atoms for a duration t, neglect autocorrelation terms ∝ δ(d) in (3)- (5), and express C in terms of the momentum k = md/t ( = 1 throughout). These correlation functions have contributions with different scalings. For 3D systems, the spins order below the Néel temperaure T N . 
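As a quick illustration of how the structure factor entering Eqs. (3)-(5) signals AF order, the following sketch evaluates |s^z_k|² (which equals ⟨ŝ_k·ŝ_−k⟩ for a state polarized along z) for a classical Néel configuration standing in for the true ground state; the lattice size and units are illustrative, not taken from the paper:

```python
import numpy as np

# Toy check: for a classical Neel configuration on an L^3 lattice, the
# structure factor with s_k = N^{-1} sum_l s_l exp(-i k.r_l) is O(1) at
# k = (pi, pi, pi) and O(1/N) (here exactly zero) elsewhere.
L = 8
N = L**3
x, y, z = np.meshgrid(np.arange(L), np.arange(L), np.arange(L), indexing="ij")
sz = 0.5 * (-1.0) ** (x + y + z)          # staggered s^z = +/- 1/2

def s_k(k):
    phase = np.exp(-1j * (k[0] * x + k[1] * y + k[2] * z))
    return np.sum(sz * phase) / N

S_af = abs(s_k((np.pi, np.pi, np.pi))) ** 2   # Bragg contribution, ~0.25
S_0  = abs(s_k((0.0, 0.0, 0.0))) ** 2         # ~0 for this staggered state
```

The O(1) value at k = (π, π, π) is the Bragg contribution discussed next; away from that point the correlations are of order 1/N.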
Assuming the broken symmetry axis along the z axis, C nn and C ⊥ have a contribution from ŝ z kŝ z k ∼ O(1) at k = (π, π, π) for T < T N . At other momenta or for T > T N the correlation functions scale as 1/N . Note that we will perform calculations where the total spin ∝ŝ z 0 of the lattice is allowed to fluctuate, which is different from the experimental situation where the number of particles in each spin state is fixed. This may give rise to different momentum independent terms in (3)-(5) for the experiment, but the k dependent part should be accurately captured by our calculations. Experiments take 2D column density images of the expanding cloud corresponding to integrating over z and z ′ in both the numerator and denominator in (2). The cameras also introduce a smoothening in the xy-plane, which can be modeled by convolution with a Gaussian [11]. In total, the experimental procedure corresponds to measuring the averaged correlation function C exp (k ⊥ ) = 1 4πN κ 2 k ′ e − 1 4κ 2 1 (2π) 2 (k ′ ⊥ −k ⊥ ) 2 C(k ′ ),(6) where k ⊥ = (k x , k y ) and κ = w/l with l = 2πt/ma (keeping a for clarity) and w a width depending on the pixel resolution of the CCD camera. The averaging in the z-direction reduces the contributions to C nn and C ⊥ at k = (π, π, π) from O(1) for T < T N to 1/N z and the Gaussian smoothening further reduces it to O(1/κ 2 N ). This reduction happens because the correlations are restricted to a single point k = (π, π, π) and we are averaging over N z · N x N y /κ 2 points. Away from k ⊥ = (π, π) or for T > T N the correlations have a wider distribution and they are less affected by the averaging. We plot in Fig. 1 C exp nn (k, k) as a function of k = k x = k y and T for a 3D lattice of size N = 32 3 . The spin correlation functions ŝ k ·ŝ −k were calculated using QMC simulations using the Stochastic Series Expansion method [15] with directed-loop updates [16]. 
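The averaging in Eq. (6) is, up to the overall normalization, a Gaussian smearing of C(k') over the Brillouin-zone grid. A minimal sketch, written with explicitly normalized weights instead of the 1/(4πNκ²) prefactor, and with the k_z sum assumed already absorbed into C; grid size and κ are illustrative:

```python
import numpy as np

# Normalized Gaussian smearing over the grid k' = 2*pi*n/L, mimicking the
# finite camera resolution in Eq. (6). kappa = w/l as defined in the text.
def smear(C, k_perp, kappa, L):
    ks = 2.0 * np.pi * np.arange(L) / L
    KX, KY = np.meshgrid(ks, ks, indexing="ij")
    w = np.exp(-((KX - k_perp[0]) ** 2 + (KY - k_perp[1]) ** 2)
               / (4.0 * kappa ** 2 * (2.0 * np.pi) ** 2))
    return np.sum(w * C) / np.sum(w)      # normalized weights

L = 32
C = np.full((L, L), -0.01)                # flat test signal
val = smear(C, (np.pi, np.pi), kappa=1.0 / 40.0, L=L)
# a constant signal is unchanged by the smearing; a narrow dip is broadened
```

With the smearing in hand, the remaining ingredient is the QMC evaluation of the spin correlators ⟨ŝ_k·ŝ_−k⟩.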
This method is very efficient for Heisenberg models and gives accurate results for a wide range of T for large systems. was then calculated from (3) with the k z average in (6) included to simulate the experimentally relevant situation [11]. The main feature of the plot is the dip at k = π coming from AF ordering. For T < T N , this gives rise to a large Bragg dip at k x = k y = π [9,17]. We see that the Bragg dip remains also above T N but has a larger width due to AF correlations without long range order. The spin correlation length ξ can be extracted directly from the width of the dip which decreases with decreasing T − T N reflecting that ξ increases as ξ ∼ 1/|T − T N | ν for T → T + N . We show below how the noise indeed can be used to extract the critical exponent ν. At high T the AF correlations lead to singlet formation of neighboring spins. This gives rise to a precursor of the Bragg peak at k x = k y = π and an equal signal of opposite sign at k ⊥ = 0 [see (7)]. These momentum correlations can be understood by noting that two fermions in a singlet are more (less) likely to have the same (opposite) momentum due to the Pauli exclusion principle. The uniform spin noise case k = 0 was also considered in [8]. In the high-T limit J/T = 1/T ≪ 1 (k B = 1), controlled analytical results for the correlation functions (3)-(5) may be obtained by expanding inT −1 . We obtain C nn (k) = − 1 2 δ k,K − 1 2N + 3Zγ k 8NT + O(T −2 ),(7)C (k) = C ⊥ (k) = − 1 4N + Z 8NT ( 1 2 + γ k ) + O(T −2 ) (8) with Z = 4(6) for 2D(3D) lattices. For simplicity we have taken ∆ = 0 and γ k = Z −1 a 2 cos(k · a) where a sums over the unit vectors spanning the lattice. The average of (7) and (8) over z can be obtained by simply omitting the z direction in the sum. To obtain analytic expressions for low T , we perform a spin-wave calculation, which should be fairly accurate for a 3D system at T ≪ T N . 
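In contrast to the QMC data, the high-temperature result (7) can be evaluated directly. A sketch for the 3D cubic lattice (J = 1 so that T̃ = T; the δ_{k,K} term is switched on only at reciprocal lattice vectors; the lattice size is illustrative):

```python
import numpy as np

# High-T expansion, Eq. (7): gamma_k = Z^{-1} sum_a 2 cos(k.a), Z = 2*dim.
def gamma_k(k, dim=3):
    Z = 2 * dim
    return sum(2.0 * np.cos(ki) for ki in k) / Z

def C_nn_highT(k, Ttil, N, dim=3, is_recip_vec=False):
    Z = 2 * dim
    delta = 1.0 if is_recip_vec else 0.0
    return -0.5 * delta - 1.0 / (2 * N) + 3 * Z * gamma_k(k, dim) / (8 * N * Ttil)

N = 32 ** 3
c_af = C_nn_highT((np.pi, np.pi, np.pi), Ttil=4.0, N=N)  # gamma = -1: AF dip
c_0  = C_nn_highT((0.0, 0.0, 0.0), Ttil=4.0, N=N, is_recip_vec=True)
```

Since γ_k = −1 at k = (π, π, π) and +1 at k = 0, the 1/T̃ term deepens the dip at the AF wave vector and produces the equal and opposite signal at k⊥ = 0 described above. The low-temperature spin-wave expressions below require more algebra.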
This yields after some algebra where tanh 2Θ k = γ k /(1 + ∆) and f k = [exp ω k /T − 1] −1 is the Bose distribution function for the spin-waves with energy ω k = 3J (1 + ∆) 2 − γ 2 k . We also have ŝ z 0ŝ z 0 = k ′ sinh −2 (βω k /2)/2N 2 where the sum extends over the AF reduced Brillouin zone [8]. A similar but somewhat more lengthy expression can be derived for C ⊥ (k). We plot in Fig. 2 C exp nn (k ⊥ ), C exp (k ⊥ ), and C exp ⊥ (k ⊥ ) as a function of T for k ⊥ = (π, π) (a) and k ⊥ = (π/4, π/4) (b) for the same parameters as above. The solid curves include the average over k z only and the dotted curves include a Gaussian smoothing in the (k x , k y ) plane as well. The values for T → 0 with Gaussian smearing are consistent with the results of Refs. [11] given the different system size and slightly different measured quantities. For simplicity we have excluded the Gaussian averaging for k ⊥ = (π/4, π/4) since it [contrary to the k ⊥ = (π, π) case] leads to only negligible changes. We trigger the AF ordering along the z-direction for T < T N by including a small anisotropy ∆ = 0.01 for C exp and C exp ⊥ . Figure 2(a) shows the Bragg dip for T < T N coming from ŝ z k s z −k ≈ | s z i | 2 for k = (π, π, π) with | s z i | > 0. Extrapolating to T = 0 we have | s z i | ≃ 0.43 ± 0.01, a value reduced from 1/2 by quantum fluctuations. On the other hand the spin noise parallel to the broken symmetry axis C is reduced with the onset of magnetic ordering for T < T N as can be seen from the inset in Fig. 2(a). (The minimal value of C at T N is dependent on the anisotropy ∆.) Note that even though the density correlations are larger and hence more easily measurable, the spin resolved measurements are crucial in verifying e.g. whether the correlation dip is indeed due to magnetic ordering and not caused, for example, by formation of a perioddoubled charge-density wave. The difference between C and C ⊥ for ∆ > 0 can furthermore be used to identify the broken symmetry axis. 
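The spin-wave ingredients quoted above (the dispersion ω_k, the Bogoliubov angle Θ_k with tanh 2Θ_k = γ_k/(1 + Δ), and the Bose factor f_k entering Eq. (9)) can be sketched as follows, with J = 1 and an illustrative anisotropy; the closed form e^{−2Θ_k} = √((1 − γ̃)/(1 + γ̃)) with γ̃ = γ_k/(1 + Δ) follows from the tanh relation:

```python
import numpy as np

# Spin-wave quantities on the 3D cubic lattice (Z = 6, J = 1):
# omega_k = 3J sqrt((1+Delta)^2 - gamma_k^2), f_k = 1/(exp(omega_k/T) - 1).
def gamma_k(k):
    return sum(2.0 * np.cos(ki) for ki in k) / 6.0

def spin_wave(k, T, Delta=0.01, J=1.0):
    g = gamma_k(k)
    omega = 3.0 * J * np.sqrt((1.0 + Delta) ** 2 - g ** 2)
    gt = g / (1.0 + Delta)
    e2theta = np.sqrt((1.0 - gt) / (1.0 + gt))   # exp(-2 Theta_k)
    f = 1.0 / np.expm1(omega / T)                # Bose occupation of the magnon
    return omega, e2theta, f

omega, e2t, f = spin_wave((np.pi / 4, np.pi / 4, np.pi / 4), T=0.5)
```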
In the isotropic case, there is no broken symmetry axis and C exp = C exp ⊥ = C exp iso for ∆ → 0, as can be seen from the green curve in Fig. 2(a). C (k) = 1 2N − ŝ z 0ŝ z 0 − 1 2N (1 + 2fk)e −2Θ k ,(9) The calculated correlations are rather small even for the quantities including the macroscopic contribution ŝ z k s z −k ≈ | s z i | 2 at k = (π, π, π), e.g., for low temperatures T J/2 C exp nn ≈ 10 −2 (10 −3 ) without (with) Gaussian smoothening and scales like 1/N z (1/N ). This is, however, still significantly larger than the correlations of order C nn ∼ 10 −4 which were recently measured with slightly bigger system sizes [11] . The correlations are also close to the measured experimental values even for temperatures above T N , i.e., for T = 2J we have C exp nn ≈ 6 · 10 −5 (scaling as 1/N ), which is comparable to the recent experiments [11]. The measurement of noise correlations can thus be used to show the onset of AF order even above the critical temperature. Furthermore, for current experiments the spin temperature is uncertain because there are no sensitive probes. If the noise correlations are measured, the detailed theoretical curves presented here would provide a means of assessing the spin temperature in the experiments. The density and spin noise at k ⊥ = (π/4, π/4) depicted in Fig. 2(b) exhibit a different behavior from that at k ⊥ = (π, π): it now decreases in numerical value for high T and even changes sign above T N . This unusual behavior is a geometric effect of the lattice. Note that since the noise scales as 1/N away from the Bragg point, a measurement requires higher experimental resolution than what is presently available or more sensitive detection methods such as spin noise spectroscopy [8]. Atomic gases are well suited to study fundamental problems in 2D physics as the observation of the Berezinskii-Kosterlitz-Thouless transition illustrates [19]. 
Recently, single layer 2D atomic gases have been produced which avoids the averaging over z discussed above [20]. We now study the Mott phase at half-filling for a 2D square lattice. In 2D there is no ordered phase for T > 0 due to fluctuations, but there are still significant AF correlations [21]. This is illustrated in Fig. 3(a), which shows C exp iso (k ⊥ ) and C exp nn (k ⊥ ) as a function of T . The AF correlations give rise to a large dip for k ⊥ = (π, π) both in the density and spin noise which is a precursor of the AF ordered state at T = 0. Since there is no averaging over the z-direction, the correlations are much stronger than for a 3D system above T N . Figure 3 illustrates that the noise exhibits the same non-trivial features as a function of k and T as the 3D case. Critical exponents characterizing continuous phase transitions are often difficult to measure. We now demon- strate how noise measurements can be used to obtain the critical exponent ν. The correlation length ξ can be extracted from the width of the AF dip in the 3D resolved function C nn (k) at k = (π, π, π). We find that this yields a critical exponent ν ≈ 0.70 as expected for a 3D Heisenberg model [22]. Importantly, ν can also be extracted from the experimentally relevant k zsummed correlation function C exp nn . Figure 4(a-c) show C exp nn (k ⊥ ) at three fixed temperatures above T N . To obtain ν, we fit C exp nn (k ⊥ ) to a summed lattice propagator of the form kz [2(3 − α=x,y,z cos(k α − π))ξ 2 + 1] −1 with k z = 2πn z /N z . Figure 4(d) shows the extracted correlation length ξ for various system sizes. One clearly sees the finite-size effects setting in at decreasing T −T N . The extracted power-law yields ν ≈ 0.70. To obtain a robust value for ν one needs to probe the noise for temperatures where it is somewhat smaller than what has been measured to date. This would however enable one to probe the critical properties of the AF transition. 
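The last step, extracting ν from the fitted correlation lengths, is a log-log linear fit. A sketch with synthetic ξ(T) data generated from the expected power law (in a real analysis the ξ values would come from the lattice-propagator fits to C^exp_nn described above):

```python
import numpy as np

# Extract nu from xi ~ |T - T_N|^(-nu) via a straight-line fit in log-log space.
T_N = 0.945                       # QMC value for the isotropic model (units of J)
T = np.array([1.0, 1.1, 1.2, 1.4, 1.6])
nu_true = 0.70
xi = 2.0 * np.abs(T - T_N) ** (-nu_true)      # synthetic correlation lengths

slope, _ = np.polyfit(np.log(np.abs(T - T_N)), np.log(xi), 1)
nu_fit = -slope                   # recovers 0.70 on this synthetic input
```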
In summary, we performed analytic and numerical calculations modeling TOF experiments for repulsive fermionic atoms in optical lattices using experimentally realistic parameters. This demonstrated that such experiments are well suited to detect AF correlations both below and above the critical temperature. Spin-resolved measurements were shown to yield valuable additional information, and we finally discussed how to extract the critical exponent governing the correlation length close to the AF transition from the noise. For the isotropic Heisenberg model, the QMC calculations yield T_N ≃ 0.945J, in agreement with Ref. [18].

FIG. 1: (Color online) Plot of C^exp_nn(k_⊥) versus temperature T and momentum along a diagonal cut k = k_x = k_y.

FIG. 2: (Color online) Noise correlation functions C^exp_⊥(k_⊥) (red), C^exp_∥(k_⊥) (black), C^exp_iso(k_⊥) (green), and C^exp_nn(k_⊥) (blue) versus T at k_⊥ = (π, π) (a) and k_⊥ = (π/4, π/4) (b). The dotted (solid) lines are with (without) Gaussian smearing (κ = 1/40). Dashed lines show the analytical results (7)-(9). Inset (a): same plot but zoomed in near T_N.

FIG. 3: (Color online) (a) Noise correlation functions C^exp_iso(k_⊥) (red) and C^exp_nn(k_⊥) (blue) versus T at k_⊥ = (π, π) for a 2D lattice. The curves are obtained from QMC simulations on 64 × 64 lattices. The dotted (solid) lines are with (without) Gaussian smearing using κ = 1/40. Inset: same as solid lines in main panel except at k_⊥ = (π/4, π/4). (b) Same as Fig. 1 but for a 2D system without the k_z summation.

FIG. 4: (Color online) Panels (a-c) show (−1)C^exp_nn(k_⊥) at T/J = 1.0 (a), T/J = 1.2 (b), and T/J = 1.6 (c). (d) Log-log plot of extracted correlation length ξ versus |T − T_N| displaying the power-law behavior ξ ∼ |T − T_N|^{−0.70}.

[1] W. Hofstetter et al., Phys. Rev. Lett. 89, 220407 (2002); L.-M. Duan, E. Demler, and M. D. Lukin, ibid. 91, 090402 (2003).
[2] M. Greiner et al., Nature 415, 39 (2002).
[3] S. Trotzky et al., Science 319, 295 (2008).
[4] R. Jördens et al., Nature 455, 204 (2008).
[5] U. Schneider et al., Science 322, 1520 (2008).
[6] J. K. Chin et al., Nature 443, 961 (2006).
[7] K. Eckert et al., Nat. Phys. 4, 50 (2007).
[8] G. M. Bruun et al., Phys. Rev. Lett. 102, 030401 (2009).
[9] E. Altman, E. Demler, and M. D. Lukin, Phys. Rev. A 70, 013603 (2004).
[10] M. Greiner et al., Phys. Rev. Lett. 94, 110401 (2005).
[11] S. Fölling et al., Nature 434, 481 (2005); T. Rom et al., ibid. 444, 733 (2006).
[12] I. B. Spielman, W. D. Phillips, and J. V. Porto, Phys. Rev. Lett. 98, 080404 (2007).
[13] V. Guarrera et al., Phys. Rev. Lett. 100, 250403 (2008).
[14] T. Esslinger, private communication.
[15] A. W. Sandvik and J. Kurkijärvi, Phys. Rev. B 43, 5950 (1991).
[16] O. F. Syljuåsen and A. W. Sandvik, Phys. Rev. E 66, 046701 (2002).
[17] B. M. Andersen and G. M. Bruun, Phys. Rev. A 76, 041602 (2007).
[18] A. W. Sandvik, Phys. Rev. Lett. 80, 5196 (1998).
[19] Z. Hadzibabic et al., Nature 441, 1118 (2006); V. Schweikhard, S. Tung, and E. A. Cornell, Phys. Rev. Lett. 99, 030401 (2007); P. Cladé et al., arXiv:0805.3519.
[20] N. Gemelke et al., arXiv:0904.1532; J. I. Gillen et al., arXiv:0812.3630.
[21] A. Auerbach, Interacting Electrons and Quantum Magnetism (Springer Verlag, 1998).
[22] A. Pelissetto and E. Vicari, Phys. Rep. 368, 549 (2002).
[ "Black holes in an ultraviolet complete quantum gravity" ]
[ "Leonardo Modesto", "John W Moffat", "Piero Nicolini" ]
[ "Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada", "Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L 3G1, Canada", "Frankfurt Institute for Advanced Studies (FIAS) and Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität, Ruth-Moufang-Strasse 1, D-60438 Frankfurt am Main, Germany" ]
In this Letter we derive the gravity field equations by varying the action for an ultraviolet complete quantum gravity. Then we consider the case of a static source term and we determine an exact black hole solution. As a result we find a regular spacetime geometry: in place of the conventional curvature singularity extreme energy fluctuations of the gravitational field at small length scales provide an effective cosmological constant in a region locally described in terms of a deSitter space. We show that the new metric coincides with the noncommutative geometry inspired Schwarzschild black hole. Indeed we show that the ultraviolet complete quantum gravity, generated by ordinary matter is the dual theory of ordinary Einstein gravity coupled to a noncommutative smeared matter. In other words we obtain further insights about that quantum gravity mechanism which improves Einstein gravity in the vicinity of curvature singularities. This corroborates all the existing literature in the physics and phenomenology of noncommutative black holes.
10.1016/j.physletb.2010.11.046
[ "https://arxiv.org/pdf/1010.0680v3.pdf" ]
119,206,380
1010.0680
bd1b5adce3034fbe6ff5d05f019273185ccafd91
Black holes in an ultraviolet complete quantum gravity
20 Dec 2010 (Dated: December 21, 2010)
Leonardo Modesto (Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada)
John W. Moffat (Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada, and Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L 3G1, Canada)
Piero Nicolini (Frankfurt Institute for Advanced Studies (FIAS) and Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität, Ruth-Moufang-Strasse 1, D-60438 Frankfurt am Main, Germany)
Keywords: quantum gravity, black holes
An ultraviolet (UV) complete quantum gravity theory has been formulated using a diffeomorphism invariant action in which the gravitational strength is G(x) = G N F (x)/Λ 2 G ,(1) where G N is Newton's constant, = g µν ∇ µ ∇ ν is the generally covariant D'Alembertian operator, and F is an entire function [1]. Moreover, Λ G is a constant gravitational energy scale and the entire function F has no poles in the finite complex plane. The quantum gravity perturbation theory expanded against a fixed Minkowski background spacetime is locally gauge invariant and unitary to all orders. The graviton-graviton and graviton-matter loops in Euclidean momentum space are finite to all orders. The graviton tree graphs are point-like and local maintaining the macroscopic local and causal property of gravity. The attempts to use noncommutative geometry to deduce phenomenological results from a perturbative expansion in the noncommutative parameter θ, run into the difficulty that a truncation of the Moyal ⋆-product makes the theory local and leads to a lack of renormalizabilty in the quantum gravity version of the theory [2] (for a general review on the topic see [3]). In contrast, we find that the nonlocal nature of the vertex function in the perturbative UV complete quantum gravity theory does not require a truncation, retaining its full nonlocality while the graviton is described by a local, microcausal field and propagator. The same holds true for the UV complete standard model Feynman rules in which the interaction of particles is nonlocal but the physical fields and propagators are local and causal [4]. In the following, we investigate the consequences of the UV complete quantum gravity for black holes. Along the lines in [1] we start with the four-dimensional action for gravity: S grav = 1 16π d 4 x √ −g G −1 (x) (R − 2λ) ,(2) where we use the signature (− + ++) and λ is the cosmological constant. The action (2) has a nonlocal character because of the presence of the term F −2 . 
In the Euclidean momentum space representation: G(p 2 ) = √ G N F p 2 /Λ 2 G and in addition we require the on shell condition G(0) = G N . The field equations are obtained by varying the action (2) with respect to the metric g µν . By neglecting surface terms coming from the variation of the generally-covariant D'Alembertian [5], we find F −2 (x)/Λ 2 G R µν − 1 2 g µν R = 8πG N T µν ,(3) where we have set λ = 0. We notice that (3) can be cast in a different form by "shifting" the the operator F −2 to the r.h.s. leaving the l.h.s. in the canonical form, i.e., R µν − 1 2 g µν R = 8πG N S µν ,(4) where the tensor S µν ≡ F 2 (x)/Λ 2 G T µν .(5) We notice that the new source term is conserved, i.e., ∇ µ S µν = 0. As a matter of fact, (4) describes Einstein gravity coupled to a generalized matter source term, while (3) describes the UV complete quantum gravity produced by ordinary matter. The two interpretations are physically equivalent. Our main purpose is to solve the field equations by assuming a static source, i.e., the four-velocity field u µ has only a non-vanishing time-like component u µ ≡ (u 0 , 0) u 0 = (−g 00 ) −1/2 . The component T 0 0 of the energymomentum tensor for a static source is given by [6] T 0 0 = − M 4π r 2 δ(r).(6) The metric of our spacetime is assumed to be given by the usual static, spherically symmetric form ds 2 = −f (r)dt 2 + dr 2 f (r) + r 2 Ω 2 ,(7) where f (r) = 1 − 2G(r)M r .(8) In Einstein gravity G(r) = G N and one obtains the Schwarzschild geometry. To solve field equations we follow the form (4), by determining the generalized matter source term S µν . The metric component can be written as f (r) = 1 − 2G N m(r) r ,(9) where m(r) = −4π drr 2 S 0 0 . For later convenience we temporarily adopt free falling Cartesian-like coordinates and we calculate S 0 0 = −M F 2 (x)/Λ 2 G δ( x) ≡ −ρ ΛG ( x).(11) The covariant conservation and the additional condition, g 00 = −g −1 rr , completely specify the form of S µ ν . 
Before proceeding further, we need to specify the form of F within the class of entire functions. We do not know the unique choice. However, a simple form of F fulfilling the properties we require is

F(p²) = exp(−p²/2Λ_G²)  (12)

in Euclidean momentum space [1]. As a check of consistency we can see that all Feynman graviton loops containing at least one vertex function F are ultraviolet finite. As a consequence we have

F²(□/Λ_G²) δ(x) = e^{∇²/Λ_G²} δ(x) = (1/(2π)³) ∫ d³p e^{−p²/Λ_G²} e^{i x·p} .  (13)

By calculating the above integral, one gets

ρ_ΛG(x) = M (Λ_G/(2√π))³ e^{−x²Λ_G²/4} .  (14)

We notice that the generalized matter energy density profile is a Gaussian whose width is 1/Λ_G. This means that for energies smaller than Λ_G the function ρ_ΛG(x) approaches the Dirac delta distribution δ(x). This is equivalent to saying that the function m(r) becomes the total mass M in Newtonian gravity, since we are probing the system at asymptotic length scales where the UV complete quantum gravity is nothing but Einstein gravity. The final step is to obtain the mass function of the matter. From (10) one finds

m(r) = M [1 − Γ(3/2; r²Λ_G²/4)/Γ(3/2)] ,  (15)

where

Γ(3/2; r²Λ_G²/4) = ∫_{r²Λ_G²/4}^{∞} dt t^{1/2} e^{−t}  (16)

and Γ(3/2) = √π/2 is Euler's gamma function. By expanding (15) for r ≫ 1/Λ_G we have

m(r) ≈ M [1 − (Λ_G r/√π) e^{−r²Λ_G²/4}] ,  (17)

which matches the required value M up to exponentially suppressed corrections. Such corrections are important since the UV complete quantum gravity can lead to experimentally testable deviations from Newton's law [7]:

φ_N(r) = (G_N M/r) [1 − (Λ_G r/√π) e^{−r²Λ_G²/4}] .  (18)

On the other hand we can observe the UV completeness of the theory at work in the high energy regime.
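The mass function (15) and its two limits are easy to check numerically. A sketch in units G_N = M = Λ_G = 1, using the closed form γ(3/2, x) = (√π/2) erf(√x) − √x e^{−x} for the lower incomplete gamma function:

```python
import math

# m(r) = M * gamma(3/2, x)/Gamma(3/2), with x = r^2 Lam^2 / 4 (Eq. (15)),
# and the metric function f(r) = 1 - 2 m(r)/r in units G_N = M = Lam_G = 1.
def m(r, M=1.0, Lam=1.0):
    x = (r * Lam) ** 2 / 4.0
    P = math.erf(math.sqrt(x)) - 2.0 / math.sqrt(math.pi) * math.sqrt(x) * math.exp(-x)
    return M * P

def f(r, M=1.0, Lam=1.0):
    return 1.0 - 2.0 * m(r, M, Lam) / r

# Large r: the full Schwarzschild mass M is recovered.
m_far = m(50.0)                                  # ~ 1.0
# Small r: m(r) -> M Lam^3 r^3 / (6 sqrt(pi)), the de Sitter-core behavior.
small = m(1e-2) / ((1e-2) ** 3 / (6.0 * math.sqrt(math.pi)))   # ratio ~ 1
```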
Indeed if we expand (15) for r ≪ 1/Λ_G we get

m(r) ≈ (1/6) (M/√π) r³ Λ_G³ .  (19)

At this point we can substitute this value into (9) to get

ds² ≈ −(1 − Λ_eff r²/3) dt² + dr²/(1 − Λ_eff r²/3) + r² dΩ² .  (20)

This is a deSitter line element whose effective cosmological constant, Λ_eff = M G_N Λ_G³/√π, accounts for the "vacuum energy" of the "field" g_µν. In other words we show that in the UV complete quantum gravity the gravitational field acquires a repulsive character as far as one probes the seething fabric of spacetime. Again we can say that the intrinsic nonlocality of the action (2) is able to tame the curvature singularity of the Schwarzschild solution (see Fig. 1). By calculating curvature tensors at the origin one finds that they are finite. For instance the Ricci scalar reads

R(0) = 4 M G_N Λ_G³/√π .  (21)

To get more insights about the nature of the generalized matter that generates the above regular geometry, it is worthwhile to analyse the energy conditions. First one has to determine the nonvanishing pressure terms coming from the conservation of the stress tensor S_µν. For instance the strong energy condition reads

ρ_ΛG + p_r + 2p_⊥ ∼ e^{−r²Λ_G²/4} (r²Λ_G²/2 − 2) ≥ 0 ,  (22)

where p_r is the radial pressure and p_⊥ is the angular one. From Fig. 2 we can see that in the vicinity of the origin the matter has an exotic character, i.e., strong, dominant and weak energy conditions are violated. At this point we could proceed further by studying the horizon equation, the thermodynamic properties and the global structure of the solution. However we prefer to stop here, since the line element we have found,

f(r) = 1 − (2G_N M/r) γ(3/2; r²Λ_G²/4)/Γ(3/2) ,  (23)
The above metric was derived by one of us and his coworkers Smailagic, Spallucci after a long path. At the time there were already several attempts of incorporating noncommutative effects in black hole physics. All such attempts were based on expansions of the ⋆-product among vielbein fields entering gravity Lagrangians [2]. The problem is that any truncation at a desired order in the noncommutative parameter basically destroys the non-locality encoded in the ⋆-product and gives rise to a local theory, plagued by spurious momentum-dependent terms. As a result, in spite of the mathematical exactitude, all the proposed corrections coming from this kind of approach failed in curing the bad short distance behavior of black hole solutions in Einstein gravity [9]. Against this background, the noncommutative geometry inspired Schwarzschild solution was derived in an effective way. Instead of embarking on the interesting but difficult problem of formulating a computationally viable noncommutative gravity, it is worthwhile to study the average effect of manifold noncommutative fluctuations on point like sources. In a series of papers based on the use of coordinate coherent states [10][11][12][13] (and recently confirmed by means of another approach based on Voros products [14]), it has been shown that the mean position of a pointlike object in noncommutative geometry is no longer governed by a Dirac delta function, but by a Gaussian distribution. As a second step toward the solution (23), it has been shown that primary corrections to any field equation in the presence of a noncommutative smearing can be obtained by replacing the source term (matter sector) with a Gaussian distribution, while keeping formally unchanged differential operators (geometry sector) [15]. In the specific case of the gravity field equations this is equivalent to saying that the only modification occurs at the level of the energy-momentum tensor, while G µν is formally left unchanged. 
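As a numerical cross-check of the strong energy condition (22) discussed above: assuming p_r = −ρ (from the condition g_00 = −1/g_rr) and p_⊥ = −ρ − (r/2)∂_r ρ (from covariant conservation), the combination ρ + p_r + 2p_⊥ reduces to ρ(r)(r²Λ_G²/2 − 2). A sketch in units M = Λ_G = 1:

```python
import math

# Gaussian profile of Eq. (14) and the strong-energy combination of Eq. (22),
# built from p_r = -rho and p_perp = -rho - (r/2) d(rho)/dr.
def rho(r, Lam=1.0):
    return (Lam / (2.0 * math.sqrt(math.pi))) ** 3 * math.exp(-(r * Lam) ** 2 / 4.0)

def strong_energy(r, Lam=1.0):
    drho = -(r * Lam ** 2 / 2.0) * rho(r, Lam)
    p_r = -rho(r, Lam)
    p_perp = -rho(r, Lam) - 0.5 * r * drho
    return rho(r, Lam) + p_r + 2.0 * p_perp     # = rho * (r^2 Lam^2 / 2 - 2)

inside  = strong_energy(1.0)    # negative: condition violated in the core
outside = strong_energy(3.0)    # positive: condition satisfied for r Lam > 2
```

The sign change at rΛ_G = 2 marks where the violation of the strong energy condition in the core region ends, consistent with the exotic character of the matter near the origin.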
In this spirit further solutions have been derived corresponding to the case of dirty [16], charged [17], spinning [18] black holes (for a review see [19]). Another important feature concerns the new thermodynamics of these black holes. Indeed, even for the neutral solution, the Hawking temperature reaches a maximum before running a positive heat capacity, cooling down phase towards a zero temperature remnant configuration [20]. As a consequence, according to this scenario quantum back reaction is strongly suppressed in contrast to conventional limits of validity of the semiclassical approximation in the terminal phase of the evaporation. Furthermore the higher-dimensional solutions [21,22], due to their attractive properties have been recently taken into account in Monte Carlo simulations as reliable candidate models to describe the conjectured production of microscopic black holes in particle accelerators [23]. Let us consider further our solution and the relation between UV complete quantum gravity and noncommutative geometry. First, the form of equations (3) tells us that we are working in the framework of UV complete quantum gravity [1]. Indeed the matter sector is unchanged, while nonlocal modifications enter the geometry. The dual theory is governed by equation (4), that is based on a generalized matter energy-momentum tensor, keeping the Einstein tensor in the canonical form. It is now clear that the exotic nature of matter is nothing but a "seething" noncommutative character of the source term. More specifically, the duality between the two descriptions holds also at the level of the specific choice of the operator F . Indeed the natural choice (12), i.e., the simplest form within the class of entire functions, corresponds to the case of primary noncommutative geometry corrections to the manifold, as often advocated in [10][11][12][13]. 
The virtue of these primary corrections is that they are not the result of a truncation of a perturbative expansion, but are intrinsically nonlocal. The natural duality link between UV complete quantum gravity and the Einstein field equations with a generalized energy-momentum tensor sheds light on the interpretation of this key point.

FIG. 1: The solution admits one, two or no horizons depending on M. In the case of two horizons, f(r±) = 0, the Penrose diagram resembles the Reissner-Nordström geometry, except at the origin, where a regular de Sitter core lies in place of the curvature singularity.

FIG. 2: The dashed curve is the function (ρ_{Λ_G} + p_r + 2p_⊥)/Λ_G^4 vs r Λ_G (strong energy condition); the solid curve is (ρ_{Λ_G} − |p_⊥|)/Λ_G^4 (dominant energy condition); the dotted curve is (ρ_{Λ_G} + p_⊥)/Λ_G^4 (weak energy condition). In a region within r = 6/Λ_G all conditions are violated.

Acknowledgments. P.N. is supported by the Helmholtz International Center for FAIR within the framework of the LOEWE program (Landesoffensive zur Entwicklung Wissenschaftlich-Ökonomischer Exzellenz) launched by the State of Hesse. P.N. would like to thank the Perimeter Institute for Theoretical Physics, Waterloo, ON, Canada for the kind hospitality during the period of work on this project. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. The authors thank E. Spallucci for valuable comments about the manuscript.

* Electronic address: [email protected]
† Electronic address: [email protected]
‡ Electronic address: [email protected]

References:
J. W. Moffat, arXiv:1008.2482 [gr-qc].
J. W. Moffat, Phys. Lett. B 491, 345 (2000).
R. J. Szabo, Phys. Rept. 378, 207 (2003).
J. W. Moffat, arXiv:1006.1859 [hep-ph].
P. Gaete, J. A. Helayel-Neto and E. Spallucci, Phys. Lett. B 693, 155 (2010).
J. R. Mureika and E. Spallucci, Phys. Lett. B 693, 129 (2010).
H. Balasin and H. Nachbagauer, Class. Quant. Grav. 10, 2271 (1993).
P. Nicolini, Phys. Rev. D 82, 044030 (2010).
P. Nicolini, A. Smailagic and E. Spallucci, Phys. Lett. B 632, 547 (2006).
F. Nasseri, Gen. Rel. Grav. 37, 2223 (2005).
F. Nasseri, Int. J. Mod. Phys. D 15, 1113 (2006).
M. Chaichian, A. Tureanu and G. Zet, Phys. Lett. B 660, 573 (2008).
M. Chaichian, M. R. Setare, A. Tureanu and G. Zet, JHEP 0804, 064 (2008).
P. Mukherjee and A. Saha, Phys. Rev. D 77, 064014 (2008).
A. Kobakhidze, Phys. Rev. D 79, 047701 (2009).
A. Smailagic and E. Spallucci, J. Phys. A 36, L467 (2003).
A. Smailagic and E. Spallucci, J. Phys. A 36, L517 (2003).
A. Smailagic and E. Spallucci, J. Phys. A 37, 1 (2004) [Erratum-ibid. A 37, 7169 (2004)].
E. Spallucci, A. Smailagic and P. Nicolini, Phys. Rev. D 73, 084004 (2006).
R. Banerjee, S. Gangopadhyay and S. K. Modak, Phys. Lett. B 686, 181 (2010).
P. Nicolini, J. Phys. A 38, L631 (2005).
P. Nicolini and E. Spallucci, Class. Quant. Grav. 27, 015010 (2010).
S. Ansoldi, P. Nicolini, A. Smailagic and E. Spallucci, Phys. Lett. B 645, 261 (2007).
A. Smailagic and E. Spallucci, Phys. Lett. B 688, 82 (2010).
L. Modesto and P. Nicolini, arXiv:1005.5605 [gr-qc].
P. Nicolini, Int. J. Mod. Phys. A 24, 1229 (2009).
R. Banerjee, B. R. Majhi and S. Samanta, Phys. Rev. D 77, 124035 (2008).
W. H. Huang and K. W. Huang, Phys. Lett. B 670, 416 (2009).
R. Casadio and P. Nicolini, JHEP 0811, 072 (2008).
T. G. Rizzo, JHEP 0609, 021 (2006).
E. Spallucci, A. Smailagic and P. Nicolini, Phys. Lett. B 670, 449 (2009).
D. M. Gingrich, JHEP 1005, 022 (2010).
Polaritonic spectroscopy of intersubband transitions

Y. Todorov, L. Tosetto, A. Delteil, A. Vasanelli, C. Sirtori
Laboratoire Matériaux et Phénomènes Quantiques, UMR 7162, Univ. Paris Diderot, Sorbonne Paris Cité, 75013 Paris, France

A. M. Andrews, G. Strasser
Solid State Electronics Institute, TU Wien, Floragasse 7, A-1040 Vienna, Austria

Phys. Rev. B 86, 125314 (2012); DOI: 10.1103/PhysRevB.86.125314; arXiv:1212.5134

Abstract: We report on an extensive experimental study of intersubband excitations in the THz range arising from the coupling between a quantum well and zero-dimensional metal-metal microcavities. Because of the conceptual simplicity of the resonators, we obtain an extremely predictable and controllable system in which to investigate light-matter interaction. The experimental data are modelled by combining a quantum mechanical approach with an effective medium electromagnetic simulation that allows us to take into account the losses of the system. By comparing our modelling with the data we are able to retrieve microscopic information, such as the electronic populations of the different subbands as a function of temperature. Our modelling approach sets the base of a designer tool for intersubband light-matter coupled systems.
I. INTRODUCTION

The ability to engineer semiconductor devices at the nanometer scale, modifying their quantum properties, has been an essential ingredient of the continuous and impressive technological development of electronics and optoelectronics. A prominent example is provided by the emergence of new laser sources in the mid-IR and THz regions, the so-called Quantum Cascade Lasers [1]. In these structures the electronic transport under an applied electrical bias is carefully designed in order to obtain a redistribution of the carriers over the different energy levels and, finally, population inversion between subbands of multiple coupled semiconductor quantum wells.

The engineering of the light-matter interaction in an intersubband (ISB) system resides somehow at the opposite limit, as the interaction can be enhanced by achieving a large electronic population on the fundamental subband of a quantum well inserted in a microcavity with a very small volume. In this case the electromagnetic response of the system is dominated by collective plasmonic excitations of the 2D electron gas [2]. When combined with an optical microcavity, these systems enter the strong coupling regime, in which the interaction between the electromagnetic and the electronic collective modes yields new mixed intersubband polariton states [3]. The splitting between the mixed states, also called the Rabi splitting, is a direct measure of the light-matter coupling strength.
The latter can be increased through doping, by increasing the number of electrons participating in the interaction. This has enabled reaching the ultra-strong coupling regime, where the Rabi splitting becomes comparable with the energy of the ISB plasmon mode [4]. This regime has been intensively studied and reported by several groups, even at THz frequencies and up to room temperature [5][6][7][8].

* Electronic address: [email protected]

The aim of this work is to interpret the spectroscopic features of the polariton states as a function of temperature in order to extract microscopic information on intersubband transitions. In this respect we will show that measurements of the Rabi splitting, 2Ω_R, permit us to follow the electronic distribution over the different subbands. Indeed, 2Ω_R has a straightforward dependence on the subband populations and the corresponding transition oscillator strengths. This is clearly illustrated by the formula of the Rabi splitting for a single ISB excitation between the fundamental and second subband, coupled with the lowest order TM_0 mode of a double-metal waveguide [9,10]:

$$2\Omega_R = \sqrt{\frac{f_{12}\, e^2 N_{QW}\,(N_1 - N_2)}{\varepsilon\,\varepsilon_0\, m^* L_{cav}}} \qquad (1)$$

Here e is the electron charge and ε_0 is the vacuum permittivity. The effective electron mass m* and the dielectric constant ε of the semiconductor core are well known material parameters. The parameter L_cav is the cavity thickness, N_QW is the number of quantum wells contained in the cavity, N_1/N_2 is the electronic population of the fundamental/second subband, and f_12 is the transition oscillator strength, which depends on the overlap between wave functions. Eq. (1) indicates that the populations N_1 and N_2 can be inferred from the study of the splitting 2Ω_R as a function of temperature, by applying Fermi-Dirac statistics. Our study is conducted with 0D double-metal dispersionless square-patch resonators, as described in Fig. 1, which operate on standing-wave-like TM_0 modes [11].
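As a rough order-of-magnitude check, Eq. (1) can be evaluated numerically. The sketch below uses hypothetical GaAs-like inputs only (f_12 close to the deep-well value of 0.96, 25 wells, N_1 − N_2 = 2 × 10^11 cm^-2, ε = 12.9, m* = 0.067 m_e, L_cav = 1.5 µm); these are illustrative assumptions, not figures extracted from the measurements reported here.

```python
import math

# Physical constants (CODATA values).
EPS0 = 8.8541878128e-12    # vacuum permittivity (F/m)
QE = 1.602176634e-19       # elementary charge (C)
ME = 9.1093837015e-31      # electron mass (kg)

def rabi_splitting_hz(f12, n_qw, delta_n, eps_r, m_star, l_cav):
    """Evaluate Eq. (1): the splitting 2*Omega_R, returned as an ordinary
    frequency (Hz).  delta_n is the sheet-density difference N1 - N2 in m^-2."""
    omega = math.sqrt(f12 * QE**2 * n_qw * delta_n
                      / (eps_r * EPS0 * m_star * l_cav))
    return omega / (2.0 * math.pi)

# Hypothetical GaAs-like inputs (assumptions, see lead-in):
rabi_thz = rabi_splitting_hz(f12=0.96, n_qw=25, delta_n=2e15,
                             eps_r=12.9, m_star=0.067 * ME,
                             l_cav=1.5e-6) / 1e12
```

With these inputs the splitting comes out in the low-THz range, i.e. comparable to the bare cavity and ISB frequencies, which is the regime discussed in the text.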
The conceptual simplicity of this geometry reduces considerably the photonic degrees of freedom, allowing a direct study of the electronic component of the polariton states. Indeed, these cavities feature a flat photonic dispersion, and the resonant frequency is independent of the incident angle of the probing beam. Moreover, the electric field that couples to the intersubband polarization is homogeneous along the growth axis of the quantum wells. The only parameter that governs the coupling is the cavity mode detuning from the ISB resonance, which in turn is deterministically set by the size s of the patch [11]. Although requiring a more complex fabrication procedure, the metallic patch cavities have several advantages with respect to the multi-pass absorption geometry that is commonly used to study intersubband transitions [12]. First of all, they allow performing intersubband transition spectroscopy with light propagating normal to the wafer surface, and therefore with a much larger surface exposed to the impinging beam. Moreover, in this geometry the light does not traverse the semiconductor substrate, which avoids all the spurious absorptions in the bulk material.

We should nevertheless bear in mind that Eq. (1) describes an ideal system, which does not take into account all the details of a typical experimental sample. To correctly deduce the electronic distribution from the experimentally measured Rabi splitting one needs to consider several other complications, which are typically not included in a purely energy-preserving Hamiltonian description:

i) An important aspect is the dissipation in the system, which causes finite linewidths both for the cavity modes and for the intersubband resonances. The measured Rabi splitting is therefore reduced with respect to the values predicted by Eq. (1) [13].

ii) In highly doped samples, several ISB resonances can be active at the same time. Their interaction causes a redistribution of the oscillator strength [14].
Therefore we need a multi-transition generalization of Eq. (1).

iii) In the THz region, where large quantum wells are employed, the Hartree potential induces important corrections to the heterostructure potential. Energy levels and wave functions should therefore be obtained by a self-consistent Schrödinger-Poisson calculation. This must be taken into account for the correct application of Fermi-Dirac statistics.

iv) The presence of metal-dielectric junctions gives rise to a charge transfer, which changes the carrier concentration of the quantum wells close to the junction [15]. This effect is important for a correct estimate of the number of charged quantum wells N_QW.

The structures that we investigate, already reported in Ref. [16], are detailed in the subsequent sections. Our study is based on experimental results that were not presented in that reference, and on an analysis that takes into account the aforementioned points i)-iv). The paper is organized as follows. In Part II we present the experimental system and briefly review the photonic confinement in our structures. Part III deals with an extensive study of the intersubband polaritons at different temperatures and for different cavity detunings with respect to the ISB resonances. We show how the information collected in these experiments, in combination with a self-consistent simulation of the heterostructure potential, allows us to infer the electronic population of the different subbands as a function of temperature. In particular, we interpret our absorption data at THz frequencies as a function of temperature [16], and show that they actually display the transfer of oscillator strength [14].

II. EXPERIMENTAL SYSTEM

A. Microcavity

The photonic structure employed in our experiments is summarized in Fig. 1, which sketches a single square-patch double-metal microcavity containing 25 GaAs/AlGaAs modulation-doped quantum wells.
The characterization of the quantum wells will be presented in detail in the next section. The sample contains a dense array of such microcavities, as illustrated in Fig. 1(b), in which the period p of the array is kept close to the square size s, so as to optimize the coupling efficiency with the incident radiation [11]. The overall thickness of the core region between the two metals is L_cav = 1.5 µm. The structure is tested by varying the incident angle θ (as defined in Fig. 1(a)) and the electric field polarization of the incident beam.

Characteristic reflectivity spectra for a cavity made of an undoped thick GaAs layer (dotted line) and for a cavity with quantum wells (solid line) are shown in Fig. 2(a). Data are taken at room temperature (300 K) and for an angle of incidence of θ = 45°. For both samples the patch sizes are equal to s = 12 µm. These spectra display dips at the energies of the resonant cavity modes of the structure. For both structures we observe two resonances, around 3 THz and 6 THz, which correspond to the excitation of the first two TM_0-like standing-wave modes [11]. In the case of the cavity with quantum wells, at room temperature, there is no clear signature of the ISB resonances. However, the effect of the electron gas is still visible in the data of Fig. 2(a) through the considerable enlargement of the cavity peaks for the doped sample. Indeed, at high temperature, the electrons provided by the Si donors can scatter into the confined and continuum states of the heterostructure, giving rise to sizeable free-carrier losses. In Fig. 2(b) we summarize the resonant frequencies of the measured doped structure at T = 300 K as a function of the inverse square size multiplied by the speed of light, c/s. Measurements for incident angles θ = 45° (circles) and θ = 10° (triangles) are both reported.
At low frequencies we recover the typical linear dispersion of the TM_0 guided mode, ν_K = Kc/(2 n_eff s), with K the order of the lateral standing wave and n_eff the effective index. At higher frequencies, the dispersion becomes nonlinear due to the coupling with the optical phonon of the semiconductor, which results in a strong increase of the effective index. The dispersion is independent of the incident angle; however, close to normal incidence (θ = 10°) only the resonances with odd K are excited, due to the selection rule of the grating [11]. The position of the resonances is well described by a modal-method model for a linear grating [17], based on the scattering matrix formalism [18]. Indeed, the modes of the square patches that couple best to the incident beam have a dipolar charge distribution [11], and these modes are identical to the resonances of a double-metal lamellar grating structure. We will therefore use this formalism for the data modelling described in Part III.

B. Quantum well slab

The semiconductor region is a GaAs/Al_0.15Ga_0.85As heterostructure made of 32 nm thick quantum wells (QWs) and 20 nm thick barriers. X-ray analyses indicate a slightly lower average Al content in the barriers (12%) than the nominal 15%. The structure is δ-doped in each barrier, 5 nm away from the adjacent quantum well, with a nominal sheet carrier concentration of 2 × 10^11 cm^-2. In Fig. 3 we show a typical conduction band profile obtained with a Poisson-Schrödinger solver with periodic boundary conditions, at low temperature (T = 7 K), including the Hartree corrections to the quantum well potential. The envelope wave functions of the confined subbands of the quantum well are also indicated. Details on these simulations are given in Appendix A and in Part III. The absorption of the quantum well is measured in a multi-pass configuration with a 45° polished facet and a gold layer deposited on top, as indicated in Fig. 4 (inset).
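The linear part of the patch-mode dispersion can be sketched numerically, assuming the usual half-wavelength patch condition ν_K = Kc/(2 n_eff s) and a bulk-GaAs-like effective index n_eff ≈ 3.6 (an assumption; near the phonon resonance the actual effective index is larger, as noted above):

```python
C0 = 299792458.0  # speed of light in vacuum (m/s)

def patch_mode_thz(k, s, n_eff):
    """K-th TM0-like standing-wave resonance of a square patch of side s (m),
    assuming the half-wavelength condition nu_K = K c / (2 n_eff s)."""
    return k * C0 / (2.0 * n_eff * s) / 1e12

# 12 um patch with a bulk-GaAs-like index (hypothetical value):
f1 = patch_mode_thz(1, 12e-6, 3.6)
```

For s = 12 µm this places the K = 1 mode near 3.5 THz, in line with the roughly 3 THz resonance quoted for the measured structures.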
The results of the experiment are summarized in Fig. 4, where two absorption peaks are clearly visible from 7 K to 150 K. These peaks correspond to the 1→2 and 2→3 transitions of the quantum well of Fig. 3. The contour plot of Fig. 4(a) is obtained by merging multi-pass transmission spectra recorded at different temperatures. In the contour plot the red color scale corresponds to a low transmission signal and therefore to strong ISB absorption. In the lower part of the figure we report the spectrum at T = 7 K, where the 3.5 THz peak corresponds to the 1→2 transition and the 4.8 THz peak to the 2→3 transition. As expected from Fermi-Dirac statistics, the intensities of the two peaks have opposite dependences on temperature: the 1→2 transition is activated at low temperature, as electrons accumulate in the ground state of the quantum well and the population difference N_1 − N_2 increases. Moreover, a strong blue shift is visible for the 1→2 transition, due to the depolarization effect [2]. The 2→3 transition follows the opposite behaviour, since subband 2 is progressively populated as the temperature is raised. The fact that the 2→3 transition is visible even at low temperature indicates that the Fermi level lies above the second subband edge, meaning that the doping is higher than the nominal value. Further, we shall see that the ISB absorption of the 2→3 transition is increased by the oscillator strength transfer between the first and the second ISB excitation of the quantum well [14]. This phenomenon will be elucidated and quantified from the study of the splitting induced by the light-matter interaction on each individual transition.

C. Polariton dot measurements

When the QW medium is inserted in a 0D microcavity, the strong interaction between the ISB excitation and the cavity mode gives rise to two polariton states.
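The thermal redistribution invoked above (electrons accumulating on the ground subband as the temperature drops) can be sketched with textbook 2D Fermi-Dirac statistics. The snippet below is not the authors' self-consistent Schrödinger-Poisson calculation; the level spacings (E2 − E1 = 12 meV, E3 − E1 = 30 meV) and the total sheet density (4 × 10^11 cm^-2) are hypothetical numbers chosen so that, as in the text, the Fermi level lies slightly above the second subband at low temperature.

```python
import math

KB = 1.380649e-23                    # Boltzmann constant (J/K)
HBAR = 1.054571817e-34               # reduced Planck constant (J s)
MEV = 1.602176634e-22                # 1 meV in joules
M_STAR = 0.067 * 9.1093837015e-31    # GaAs effective mass (kg)
RHO2D = M_STAR / (math.pi * HBAR**2) # 2D density of states (J^-1 m^-2)

def populations(n_tot, e_sub, t):
    """Sheet density of each subband (m^-2) from 2D Fermi-Dirac statistics.
    The Fermi level is located by bisection on the total density n_tot."""
    kt = KB * t

    def n_of(ef):
        return [RHO2D * kt * math.log1p(math.exp((ef - e) / kt)) for e in e_sub]

    lo, hi = min(e_sub) - 200 * MEV, max(e_sub) + 200 * MEV
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sum(n_of(mid)) < n_tot:
            lo = mid
        else:
            hi = mid
    return n_of(0.5 * (lo + hi))

# Hypothetical level scheme and density (assumptions, see lead-in):
levels = [0.0, 12 * MEV, 30 * MEV]
n_cold = populations(4e15, levels, 10.0)    # T = 10 K
n_warm = populations(4e15, levels, 300.0)   # T = 300 K
```

Cooling from 300 K to 10 K strongly increases N_1 − N_2, which is what activates the 1→2 absorption, while N_2 (and hence the 2→3 transition) is depleted at the lowest temperatures.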
In our system the spectroscopic features are a function of the size s of the resonator, which sets the detuning between the cavity mode and the ISB excitations. The polariton splitting due to the coupling between the matter excitations and the first-order mode of our 0-dimensional cavities is very insensitive to the experimental conditions. This can be seen in the series of different experimental conditions reported in the data of Fig. 5 and Fig. 6. These experiments are performed at low temperature, T = 7 K, where the 1→2 ISB excitation is coupled with the fundamental K = 1 cavity mode. This mode is twofold degenerate into TM_100 and TM_010 modes, each oscillating along one of the two sides of the square patch (along the x- and y-directions, respectively). Since the two modes have identical coupling with the ISB transition, the polariton splitting is independent of the particular polarization state of the incident beam, as shown in Fig. 5, where we compare two spectra obtained with unpolarized (dotted line) and polarized (solid line) light. If the incident beam is y-polarized, then only the TM_010 mode is excited. The resulting polariton reflectance spectrum, obtained for an angle of incidence θ = 45°, is shown as a solid line in Fig. 5. The size of the patch in this experiment is s = 11.1 µm. The two polariton peaks are indicated as K = 1′ and K = 1″. The same experiment repeated with unpolarized light (dashed line) gives practically an identical spectrum (dashed curve in Fig. 5). As a matter of fact, at θ = 45° both TM_100 and TM_010 are excited, and therefore the polariton states K = 1′ and K = 1″ can be similarly excited for polarization in the x- and y-directions. On the contrary, one can see that the second-order TM_200 mode (indicated as K = 2) has a different dependence and appears only if a component of the incident electric field along the x-direction is present. This is due to a symmetry selection rule of the even resonances [11].
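The anticrossing behind the K = 1′/K = 1″ doublet can be pictured with a minimal two-coupled-oscillator model (a rotating-wave sketch on our part, not the full treatment used later in the paper): the polariton frequencies are the eigenvalues of a 2 × 2 matrix coupling the cavity mode and the ISB excitation.

```python
import math

def polariton_branches(nu_cav, nu_isb, omega_r):
    """Lower/upper polariton frequencies of a two-coupled-oscillator model:
    eigenvalues of the matrix [[nu_cav, omega_r], [omega_r, nu_isb]]."""
    mean = 0.5 * (nu_cav + nu_isb)
    half = math.hypot(0.5 * (nu_cav - nu_isb), omega_r)
    return mean - half, mean + half

# At zero detuning the branches are split by exactly 2*omega_r
# (hypothetical numbers, in THz):
low, up = polariton_branches(3.5, 3.5, 0.5)
```

At resonance the two branches are separated by exactly 2Ω_R, which is how the splitting is read off the reflectivity spectra; off resonance the branches repel and tend towards the bare cavity and ISB frequencies.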
In Fig. 6 we report measurements performed with unpolarized light but at different incident angles: θ = 10° (solid curve) and θ = 45° (dashed curve). The patch size is s = 12.5 µm in this case. Once more we can see that the polaritons (K = 1′ and K = 1″) are unaffected by the angle of incidence, since they arise from the coupling with the first-order mode. The K = 2 resonance disappears at normal incidence due to its symmetry selection rule [11].

The above properties of the polariton states, which attest to their 0D nature, stem directly from the properties of the K = 1 resonance, which is dispersionless (independent of the angle of incidence) and insensitive to the polarization because of the twofold degeneracy arising from the symmetry of the structure. These properties are imprinted on the ISB polarization through the strong interaction with light. We therefore refer to these structures as "polariton dots" [16]. Having reduced the light degrees of freedom as much as possible, we can now concentrate on the study of the polariton characteristics as a function of their matter part, i.e. of the ISB excitation. To this end we have measured the temperature evolution of the polaritons for different cavity resonances, as illustrated in Fig. 7. The four panels in Fig. 7 represent the reflectivity contour plots as a function of temperature for cavities in which the first-order (K = 1) resonance is varied between the 1→2 and 2→3 ISB transitions. As an indication, the dashed lines in Fig. 7 report the values of the ISB resonances measured at high temperature, T > 140 K (for graphical simplicity we do not consider here the temperature dependence of the ISB resonances). Note that these values are closest to the ones expected for the same QW without charge effects. In the color code, red corresponds to low reflectance, and blue corresponds to close-to-unity reflection. The cavity resonance is indicated with an arrow.
The first panel (cavity with s = 12.5 µm) is resonant with the 1→2 transition, while the fourth (s = 7.8 µm) is resonant with the 2→3 transition. The measurement of the first panel is performed at a 45° angle of incidence, while the others are performed at 10°; this is why the second-order resonance (K = 2) at 6 THz is visible only in the first panel. One can see that as the temperature is lowered below 150 K, electrons accumulate preferentially in the first subband, making the ratio N_1/N_2 >> 1. The 1→2 transition is thus activated, resulting in a progressively increasing polariton splitting for the s = 12.5 µm cavity or in a growing absorption dip for the off-resonant cases. The 2→3 transition is most active in the temperature range 50 K < T < 150 K, as qualitatively expected from Fermi-Dirac statistics.

The main advantage of the microcavity measurements with respect to multi-pass measurements lies in the type of spectroscopic data that is related to the population differences between the QW subbands. In a multi-pass experiment, such as the one depicted in Fig. 4, if a linear absorption regime is assumed, the population difference N_1 − N_2 is related to the integral of the absorption peak in the spectra [12]. The determination of the peak area requires a careful baseline calibration, which can become difficult in the THz region owing to diffraction effects caused by the large wavelengths, as well as increased free-carrier and phonon absorption. In a microcavity, the population difference N_1 − N_2 can be inferred precisely from the polariton splitting (the difference between the two polariton frequencies) observed in the reflectivity experiment. Moreover, both methods require a careful determination of the linewidths of the ISB resonances. The impact of dissipation effects on the spectra, such as free-carrier absorption and ISB linewidth enlargement, is visible in the data of Fig. 7.
At high temperatures the cavity resonances appear to have rather low quality factors, whereas the quality factors improve at low temperature, becoming comparable with the values measured for an undoped structure. This is clearly visible, e.g., in the case of the s = 7.8 µm cavity, where a much narrower cavity resonance is recovered once the 2→3 transition is depleted at very low temperature: the FWHM is 1.2 THz at 200 K and 0.79 THz at 7 K. This behavior confirms the comparison between the doped and undoped references (Fig. 2(a)). We attribute it mainly to the enlargement of the ISB resonances that couple to the cavity mode, which is also visible in the multi-pass absorption spectra (Fig. 4). Moreover, at high temperature the free-carrier absorption in the heterostructure also increases, as part of the electronic population can be thermally excited into the higher levels of the well and the 3D continuum of the heterostructure.

III. DATA EXPLOITATION

To substantiate the above qualitative discussion, we have modelled the experimental data for cavities with different dimensions at different temperatures. The data modelling is based on an effective medium approach, combined with numerical simulations of the optical response of the double-metal structures. This approach allows us, on the one hand, to consider the situation of several occupied subbands and, on the other, to take into account the finite linewidths of both the microcavity mode and the ISB resonances. The ohmic losses in the metallic layers are taken into account by considering a complex Drude constant for the metal [5,19]. The effective medium constant can be obtained from the quantum Hamiltonian describing the light-matter interaction [10]. In the following we start by recalling the quantum approach.

A. Plasma Hamiltonian and effective medium approach

The description of the light-matter interaction using a Hamiltonian approach allows us to take into account all the microscopic details of the electronic confinement.
Moreover, the effective dielectric function of the quantum well medium can be obtained from the characteristic polynomial of the quantum Hamiltonian. The quantum model therefore provides a dielectric function that is both directly applicable to electromagnetic simulations and encapsulates the microscopic details of the system. To carry out this approach, we implement self-consistent coupled Poisson-Schrödinger equations to calculate the potential changes associated with the field produced by the static charges. As a result, we obtain the envelope wave functions φ_i(z) and energy levels ω_i produced by the electronic confinement of the quantum wells, as well as the Hartree correction. If the total sheet concentration N_tot is known, we can then obtain the population N_i of each subband through Fermi-Dirac statistics. We can then use this information to formulate the light-matter coupling Hamiltonian [10].

The interaction of the transition i → j with light can be fully described by introducing the function ξ_ij(z), which represents the oscillating electronic current between levels i and j:

$$\xi_{ij}(z) = \phi_i(z)\,\frac{d\phi_j(z)}{dz} - \phi_j(z)\,\frac{d\phi_i(z)}{dz} \qquad (2)$$

In the following, to simplify the notation, we use a single Greek index α to indicate the transition i → j. For instance, the frequencies of the ISB transitions are ω_α = ω_j − ω_i. The current functions of Eq. (2) contain all the relevant quantities of the problem. Namely, the oscillator strength of the transition α is

$$f_\alpha^{o} = \frac{\hbar}{2 m^* \omega_\alpha}\left(\int \xi_\alpha(z)\,dz\right)^{2} \qquad (3)$$

and the effective thickness L_α of the transition is defined by

$$\frac{1}{L_\alpha} = \frac{\hbar}{2 m^* \omega_\alpha}\int \xi_\alpha^{2}(z)\,dz \qquad (4)$$

Further, the effective length of Eq. (4) allows us to define the plasma frequency ω_Pα of the transition:

$$\omega_{P\alpha}^{2} = \frac{e^{2}\,\Delta N_\alpha}{m^* \varepsilon\,\varepsilon_0\, L_\alpha} \qquad (5)$$

where ΔN_α = N_i − N_j is the population difference (surface density) between levels i and j. With the above definitions, Eq. (5) coincides with the plasma frequency introduced by Ando, Fowler and Stern [2].
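As a sanity check of the definitions in Eqs. (2)-(3), one can evaluate them for an infinite square well, a toy model (the actual wells are finite and include the Hartree correction), for which the textbook oscillator strength f_12 = 256/(27π²) ≈ 0.96 should be recovered:

```python
import numpy as np

HBAR = 1.054571817e-34               # reduced Planck constant (J s)
M_STAR = 0.067 * 9.1093837015e-31    # GaAs effective mass (kg)
L = 32e-9                            # well width quoted in the text (m)

# Infinite-well wave functions and their derivatives (analytic).
z = np.linspace(0.0, L, 20001)
phi1 = np.sqrt(2.0 / L) * np.sin(np.pi * z / L)
phi2 = np.sqrt(2.0 / L) * np.sin(2.0 * np.pi * z / L)
dphi1 = np.sqrt(2.0 / L) * (np.pi / L) * np.cos(np.pi * z / L)
dphi2 = np.sqrt(2.0 / L) * (2.0 * np.pi / L) * np.cos(2.0 * np.pi * z / L)

xi12 = phi1 * dphi2 - phi2 * dphi1                       # Eq. (2)
omega12 = 3.0 * np.pi**2 * HBAR / (2.0 * M_STAR * L**2)  # bare 1->2 spacing

def trapz(f, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

f12 = HBAR / (2.0 * M_STAR * omega12) * trapz(xi12, z) ** 2  # Eq. (3)
```

The numerical value is independent of L and m*, as it must be for a dimensionless oscillator strength.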
Finally, we introduce the frequency \tilde\omega_\alpha of the intersubband plasmon, which takes into account the depolarization effect:

\tilde\omega_\alpha = \sqrt{\omega^2_\alpha + \omega^2_{P\alpha}}   (6)

These quantities permit us to write the bosonized Hamiltonian describing the interaction of the intersubband plasmons with the cavity mode, as well as the coupling between the plasmons associated with different transitions [10]:

H = \sum_\alpha \tilde\omega_\alpha p^\dagger_\alpha p_\alpha + \omega_{cav}(a^\dagger a + 1/2) + i\sum_\alpha \Omega_\alpha (a^\dagger - a)(p^\dagger_\alpha + p_\alpha) + \sum_{\alpha\neq\beta} \frac{1}{2}\Xi_{\alpha\beta}(p^\dagger_\alpha + p_\alpha)(p^\dagger_\beta + p_\beta)   (7)

Here p_α and p†_α are bosonic operators describing the intersubband plasmons, a and a† are creation/annihilation operators of the cavity mode, and ω_cav is the mode frequency. The strength of the light-matter coupling is provided by the coefficient Ω_α:

\Omega_\alpha = \frac{\omega_{P\alpha}}{2}\sqrt{\frac{\omega_{cav}}{\tilde\omega_\alpha} f^o_\alpha f^w_\alpha}   (8)

where f^w_α = L_α/L_cav is the overlap between the microcavity and the intersubband plasmon. The strength of the inter-plasmon coupling is provided by the coefficient Ξ_αβ:

\Xi_{\alpha\beta} = \frac{\omega_{P\alpha}\omega_{P\beta}}{2\sqrt{\tilde\omega_\alpha\tilde\omega_\beta}} \frac{\int \xi_\alpha(z)\xi_\beta(z)\,dz}{\sqrt{\int \xi^2_\alpha(z)\,dz \int \xi^2_\beta(z)\,dz}}   (9)

The Hamiltonian (7) describes the coupling between a set of quantum oscillators, one of which is the electromagnetic mode. The plasmonic oscillators are also coupled to each other through Coulomb dipole-dipole interactions, which are expressed in the last term of the Hamiltonian (7). As described in Appendix B, the matter part of the Hamiltonian (7) can be diagonalised numerically, thus obtaining a set of independent plasmon modes, each separately coupled with the light field:

H = \sum_J W_J P^\dagger_J P_J + \omega_{cav}(a^\dagger a + 1/2) + i\sum_J \frac{R_J}{2}\sqrt{\frac{\omega_{cav}}{W_J}}(a^\dagger - a)(P^\dagger_J + P_J)   (10)

Here P†_J and P_J are the new plasmonic operators with frequencies W_J. The coefficients R_J play the role of effective plasma frequencies and characterize completely the coupling of the J-th mode to the electromagnetic field. As an illustration, let us consider the case of a single intersubband transition, J = α.
Then R_J = R^0_α becomes:

R^0_\alpha = \omega_{P\alpha}\sqrt{f^o_\alpha f^w_\alpha}   (11)

This equation shows that the coefficient R^0_α contains altogether the information on the population difference ∆N_α between the subband levels, through the plasma frequency of Eq. (5), as well as the overlap between the cavity modes and the intersubband polarization. When several intersubband transitions are present, they interact through the last term of the Hamiltonian (7). After the Hamiltonian is diagonalized into its form (10), this interaction is also contained in the coefficients R_J. Moreover, owing to the plasmon-plasmon coupling, the coefficients R_J are different from the ones that one would obtain for a single uncoupled plasmon (Eq. (11)). This is precisely the phenomenon of oscillator strength transfer, as explained further. As shown in detail in Appendix B, the characteristic polynomial of the Hamiltonian (10) allows us to define the effective medium dielectric constant of the system:

\frac{\varepsilon}{\varepsilon_{eff}(\omega)} = 1 + \sum_J \frac{R^2_J}{\omega^2 - W^2_J + i\omega\Gamma_J}   (12)

where we have introduced the phenomenological linewidths Γ_J of the plasmon modes. This expression is similar to the plasmon pole approximation for 3D plasmons [20]. Note that, when the rotating-wave approximation is assumed, Eq. (12) leads to a Lorentzian lineshape with full width at half maximum equal to Γ_J. Once more, Eq. (12) underlines the importance of the effective plasma frequencies R_J. Indeed, from the semi-classical point of view, R²_J is the weight of the plasmon pole W_J of the inverse dielectric function, quantifying its oscillator strength. In this respect, the quantity R²_J can be considered as a generalization of the concept of oscillator strength to the case of ISB plasmons. Note that, in the case of a multiple quantum well system as considered here, the Hamiltonian (7) contains the couplings between plasmons from spatially different quantum wells, as well as between plasmons from the same quantum well.
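A minimal numerical sketch of the plasmon-pole dielectric function of Eq. (12), with illustrative (not fitted) values of W_J, R_J and Γ_J, shows how each resonance contributes R²_J/(W_J Γ_J) to the imaginary part exactly at its pole:

```python
import numpy as np

# Sketch of the plasmon-pole function of Eq. (12):
# eps / eps_eff(w) = 1 + sum_J R_J^2 / (w^2 - W_J^2 + i w Gamma_J)
# All numbers below are illustrative (THz units), not the paper's fitted values.
def eps_over_eps_eff(w, W, R, Gamma):
    w = np.asarray(w, dtype=complex)
    out = np.ones_like(w)
    for WJ, RJ, GJ in zip(W, R, Gamma):
        out += RJ**2 / (w**2 - WJ**2 + 1j * w * GJ)
    return out

W = [3.1, 5.0]   # plasmon frequencies W_J (THz)
R = [1.6, 1.0]   # effective plasma frequencies R_J (THz)
G = [0.2, 0.5]   # linewidths Gamma_J (THz)

# Exactly at a pole (w = W_J) the J-th term reduces to R_J^2/(i W_J Gamma_J),
# so Im(eps/eps_eff) picks up -R_J^2/(W_J Gamma_J) from that resonance.
val = eps_over_eps_eff(W[0], W[:1], R[:1], G[:1])
print(np.isclose(val.imag, -R[0]**2 / (W[0] * G[0])))  # True
```

This makes explicit why R²_J plays the role of an oscillator strength: it sets the pole weight of the inverse dielectric function.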
However, in the case where we can neglect the overlap between wavefunctions from different quantum wells, the plasmon coupling coefficients (9) vanish, and the remaining non-zero couplings are only those between plasmons from the same well. If the N_QW quantum wells are identical, we can consider a Hamiltonian of the form of Eq. (10), where the operators P†_J describe the normal plasmon modes of a single quantum well, but the plasma oscillator strengths are replaced by R²_J × N_QW. This fact is briefly discussed in Appendix B. To take into account the intrinsic anisotropy of the intersubband system and the in-plane free-carrier absorption, the semiconductor slab is modeled by a diagonal dielectric tensor with components ε_z = ε_eff(ω) and ε_x = ε_y = ε_||(ω) [21]. The in-plane dielectric function ε_||(ω) describes the Drude absorption arising from the free movement of the carriers in the plane (x, y) [22]. We have also considered the contribution of the LO phonons in the dielectric function, as in Ref. [17]. The knowledge of the effective medium dielectric constant expressed in Eq. (12) gives the electromagnetic response of the ISB system, and it is therefore a central quantity for the interpretation of our data. In particular, Eq. (12) involves the quantities W_J, Γ_J and R_J describing the collective plasmon modes. As is often the case, we do not know all the exact details of the system, and we therefore regard the quantities W_J, Γ_J and R_J as free parameters that can be determined by fitting the experimental spectra. Indeed, the positions of the intersubband peaks W_J and their linewidths Γ_J can be directly derived from the multi-pass absorption experiment (Fig.4). Moreover, the quantities R_J can be deduced from the polariton splitting in measurements such as those presented in Fig.7.
The values of W_J and R_J can also be calculated from the microscopic model described above, based on the Poisson-Schrödinger modelling of the heterostructure and the diagonalization of the Hamiltonian of Eq. (7). In this approach, the critical input parameter is the number of ionized impurities that gives rise to the electronic charge (sheet) density N_tot. For each temperature T, the sheet density N_tot is adjusted so that the calculated values of W_J and R_J coincide with the values obtained from the fit of the experimental spectra, W^exp_J and R^exp_J. The scheme relating the experimental spectra to the microscopic computation is summarized in Fig.8. Following this approach we can connect the results from the spectroscopic measurements with the microscopic information on the heterostructure as a function of the temperature T. In particular, we can deduce the populations of the different subbands N_i, as well as the bare intersubband energies ω_i and the corresponding single-particle wavefunctions φ_i(z). FIG. 9: Conduction band profile and charge density simulations of the system, taking into account the metal-semiconductor interfaces [23]. We should bear in mind several underlying approximations. First, the quantum Hamiltonians of Eq. (7) and Eq. (10) describe a perfect electronic system, as they do not include any dissipative effects that may decrease or destroy the coherence of the electronic wavefunctions, such as disorder or scattering from phonons or interface roughness. We approximate these effects by the inclusion of the phenomenological homogeneous broadening parameters Γ_J in Eq. (12). Moreover, the plasma Hamiltonians of Eq. (7) and Eq. (10) are valid in the limit of high electronic densities, where the exchange-correlation effects can be neglected. B. Modelling of the experimental data In our structure the crucial quantity for the evaluation of the light-matter interaction is the total number of electrons participating in the process.
In a metal-dielectric-metal cavity the Fermi level pins at the metal-GaAs interface [15], and therefore some of the quantum wells in the immediate vicinity of the metallic layers are depleted. In order to estimate the number of quantum wells that are still charged, N_QW, we have used a freeware Poisson band-structure solver [23]. The conduction band profile simulations are presented in Fig.9. The simulation shows that, below 150 K, 2 QWs are depleted at each interface. Both ends of the structure are therefore modelled as undoped spacers. The electronic distribution in the charged QWs is practically uniform, as can be seen from Fig.9. We thus model this region with an effective dielectric constant of the form expressed in Eq. (12), which assumes an identical average doping for all the remaining 21 doped QWs. To take into account the spacer regions, according to Eq. (11) the values for R_J have been multiplied by √(21/25) ≈ 0.92. The total population N_tot(T) and its redistribution over the different subbands as a function of the temperature, N_i(T), can be extracted by injecting N_tot into the Hamiltonian (10), calculating W_J and R_J (for J = 1 → 2 and J = 2 → 3) and finally comparing these values with those obtained from the fit of the experimental spectra. Examples of the experimental fit are provided in Fig.10. In Fig.10(a) the transmission experiment without cavity (dotted curve) is compared with transfer matrix simulations based on the effective dielectric constant (12) in order to extract the values of W_J and Γ_J. These values are then inserted in the simulations of the reflectivity spectra presented in Figures 10(b) and 10(c), where we consider the particular case of cavities in resonance with the 1 → 2 and 2 → 3 ISB excitations, respectively. From these simulations one can extract the effective plasma frequencies R_{1→2} and R_{2→3}.
Note that the reflectivity spectra from the cavities have intrinsically better quality than the multipass transmission data, especially at high temperature. Indeed, in the multi-pass absorption the light passes through a thick substrate (350µm) to probe a very thin active region (1.5µm). The substrate transparency is strongly affected by the increased optical phonon absorption as the temperature is raised. This is clearly visible in the data of Fig.10(a), where the transmission signal drops considerably around 7 THz. On the contrary, in the case of the cavity there is no substrate and the light field crosses exclusively the QW region. This configuration is therefore more appropriate for testing ISB electronic absorption and avoiding spurious effects from the substrate. The parameters W_J and R_J obtained from this procedure have been plotted as a function of the temperature in Fig.11(a) and Fig.11(b) (full circles). In Fig.11(b) the full widths at half maximum of the absorption peaks, Γ_J, are also indicated as error bars. The continuous lines are the results of our simulation using the Poisson-Schrödinger solver described in Appendix A together with the diagonalization of the plasmon Hamiltonian (7). As we have already mentioned, we have left, for each temperature T, the total sheet electronic density per well as a free fitting parameter. In Fig.12(a) we present N_tot as a function of the temperature, as well as the population differences (N_i − N_j)(T) between the occupied subbands. For the fit we have used the value of 31.8 nm for the quantum well thickness (instead of the nominal value of 32 nm), which gives us a good fit over the whole temperature range. According to these results, the number of electrons N_tot(T) captured by the quantum wells is constant below 60 K, and then progressively decreases as the temperature is raised. This behaviour can be explained by assuming that some of the electrons are recaptured in the delta-doped donor region.
In Fig.11(b) we also report the frequencies ω_{1→2} and ω_{2→3} (dashed lines) calculated as the difference between the single-particle energy states. Due to the Hartree potential, in a modulation-doped structure ω_{1→2} decreases as the electronic population is increased. Therefore, as the temperature is lowered between 7 K and 60 K, the energy spacing ω_{1→2} is reduced and the subband 2 becomes populated. Note also that the collective plasma frequency W_{1→2}, which fits the experimental data very well, has exactly the opposite behavior with respect to ω_{1→2}. In the case of the 2 → 3 transition the depolarization effect is much smaller and the collective mode frequency W_{2→3} is close to the single-particle frequency ω_{2→3}. The observed slight depolarization shift for this transition is even lower than that predicted by the quantum Hamiltonian (7), which describes a perfectly coherent electronic system (Γ_J = 0). The small discrepancy arises from disorder effects [24]. As a matter of fact, it can be noticed that W_{2→3} − ω_{2→3} < Γ_{2→3}, which implies that the correlations introduced by the depolarization effects are not enough to overcome the disorder, and the system stays closer to its single-particle energy. In this case the effects of the inhomogeneous broadening become important, and models like those exposed in Ref. [25] are more pertinent for describing the system. The microscopic model used to fit the experimental data also makes it possible to evidence the interaction between ISB plasmons from different transitions, which leads to a transfer of oscillator strength between the first and the excited ISB transitions [14]. Let us recall how this phenomenon is contained in the Hamiltonians (7) and (10). The Hamiltonian (7) describes a set of collective modes at frequencies ω̃_α. Their plasma frequencies ω_Pα (Eq. (5)) not only quantify their coupling strength with the cavity mode, but also describe their interaction through the dipole-dipole Coulomb coupling, Eq. (9).
By diagonalizing the matter part of the Hamiltonian (7), we obtain the Hamiltonian (10), which contains the new independent normal collective modes at frequencies W_J. The light-matter interaction therefore occurs between plasmons originating from individual ISB plasmons coupled by the Coulomb interaction. A side effect of the coupling is that, when two ISB excitations are present, as in our system, the first excitation transfers part of its oscillator strength to the second one. For our system this effect is not visible in the excitation frequencies, as the normal mode frequencies W_{1→2} and W_{2→3} remain close to the original collective mode frequencies ω̃_{1→2} and ω̃_{2→3} (not shown). However, the transfer of oscillator strength becomes apparent by considering the plasma energies. This is illustrated in Fig.12(b), where we compare R_{1→2} and R_{2→3} (continuous line) with the values R^0_{1→2} and R^0_{2→3} that would be obtained for two completely uncoupled plasmons (dashed line). The latter can be obtained analytically from Eq. (11). We observe that always R_{2→3} > R^0_{2→3} and R_{1→2} < R^0_{1→2}. Notice that at low temperature R_{2→3} is almost twice R^0_{2→3}. This means that the effective oscillator strength of the plasmon resonance is doubled when we consider the Coulomb dipole-dipole interaction. This explains why the 2 → 3 absorption peak in the spectra of Fig.10(a) is quite high in spite of the relatively low population of subband 2. C. Photonic dispersion and finite linewidth effects To prove the consistency of the values of W_J and R_J extracted from the fits in Fig.10, we have simulated the spectra for all the measured cavities at four different temperatures: 7 K, 30 K, 50 K and 100 K. The cavities used in the experiments had a patch width ranging from 6 to 21µm. This permits us to explore a frequency range from 1.8 THz to 6 THz for the fundamental mode.
To summarize the comparison between measurements and simulations, in Fig.14 we have plotted the polariton dispersion curves for the four different temperatures mentioned above. Again, very good agreement is obtained between the experiment (triangles) and the effective medium model based upon the dielectric constant of Eq. (12) (continuous lines) for every temperature. These dispersion curves indicate clearly how the splitting of the high-energy excitation 2 → 3 increases while that of the 1 → 2 excitation decreases as the temperature is raised. Accordingly, the polariton gap associated with the 1 → 2 excitation progressively decreases with the temperature, and at T = 100 K no gap can be seen. The "temperature knob" therefore allows us to tune the system from the ultra-strong coupling to the "ordinary" strong-coupling regime [7]. The study of the polaritonic dispersion brings insight into the impact of the dissipation (linewidth enlargement) on the measured splitting. In a system with loss the observable Rabi splitting is in general smaller than the actual strength of the light-matter interaction [13]. For an ideal lossless system, the polariton dispersion can be obtained analytically from the eigenvalue equation of the quantum Hamiltonian [10]. For our particular system, the dispersion equation for two plasmon modes coupled with a lossless TM_0 mode is:

1 + \frac{R^2_{1\to2}}{\omega^2 - W^2_{1\to2}} + \frac{R^2_{2\to3}}{\omega^2 - W^2_{2\to3}} = \frac{\omega^2}{\omega^2_{cav}}   (13)

The real roots of Eq. (13) are plotted as a dashed line in Fig.15 for T = 50 K, where both ISB excitations are clearly present. The results from the dielectric constant model, which fits exactly the measured data, are indicated as a solid line. Not surprisingly, the predicted Rabi splittings for the ideal system described by Eq. (13) are greater than those obtained experimentally.
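The real roots of Eq. (13) can be obtained numerically by clearing the denominators, which turns the dispersion relation into a cubic in x = ω². The sketch below (illustrative numbers in THz, not the paper's fitted values) returns the three polariton branches:

```python
import numpy as np

# Polariton branches from the lossless dispersion relation, Eq. (13):
# 1 + R12^2/(w^2 - W12^2) + R23^2/(w^2 - W23^2) = w^2 / wcav^2
def polariton_branches(wcav, W12, R12, W23, R23):
    a, b = W12**2, W23**2
    # cubic in x = w^2 after clearing denominators (highest power first):
    # x(x-a)(x-b)/wcav^2 - (x-a)(x-b) - R12^2 (x-b) - R23^2 (x-a) = 0
    c3 = 1.0 / wcav**2
    c2 = -(a + b) / wcav**2 - 1.0
    c1 = a * b / wcav**2 + (a + b) - R12**2 - R23**2
    c0 = -a * b + R12**2 * b + R23**2 * a
    x = np.roots([c3, c2, c1, c0])
    return np.sort(np.sqrt(x.real[x.real > 0]))

# Sanity check: with R12 = R23 = 0 the branches are just W12, wcav, W23.
print(polariton_branches(4.0, 3.1, 0.0, 5.0, 0.0))  # ~[3.1, 4.0, 5.0]
```

At resonance (ω_cav = W_{1→2}) with a single transition, the two coupled roots are ω² = W² ± WR, so the splitting approaches R when R ≪ W, consistent with the statement that the splittings of an ideal system equal the coupling constants.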
This is the result of the reduced coherence in the real system due to the losses introduced by the metallic layers of the cavity, the finite width of the ISB resonance, as well as the decreased transparency of the semiconductor close to the reststrahlen band. For an infinitely coherent system, according to Eq. (13) the polariton splittings 2Ω^exp_12 and 2Ω^exp_23 would be exactly R_{1→2} and R_{2→3}. Interestingly, a very good fit can also be obtained by replacing R_{1→2} and R_{2→3} in Eq. (13) with the experimental values for the splittings (2Ω^exp_12 and 2Ω^exp_23), as shown in the dotted curve of Fig.15. This implies that the polaritonic dispersion can be very well reproduced from the perfect quantum model, as long as the measured values of the splittings 2Ω^exp_i are used for the light-matter coupling constants in the Hamiltonian. Moreover, it justifies the approach of several experimental groups that fit the measured dispersion curves with fully coherent equations, like Eq. (13), by using the experimentally derived Rabi frequency at resonance [5-8]. The comparison between the ideal system and our real system is summarized in Table I and Table II, for the 1 → 2 transition and the 2 → 3 transition, respectively. A sizeable discrepancy can be measured between the modelled coupling constant R_{2→3} and the observable splitting 2Ω^exp_23, particularly when the linewidth is an important fraction of these quantities. To quantify this effect we define a cooperativity parameter, as in atomic physics, that sets the onset for the strong coupling in our system. In atomic physics, where an atom interacts with a microcavity mode, the cooperativity parameter is defined as the ratio between the rate of photon scattering into the microcavity mode and the rate of free-space scattering (radiative enlargement of the atomic transition) [26,27].
This definition can be adapted to our system by taking into account the fact that the dominant contribution to the linewidth enlargement is not of radiative origin. Indeed, the decay rate of the ISB plasmon is Γ_J/2 and is determined mainly by non-radiative mechanisms, such as phonon relaxation and interface scattering. We can thus define the cooperativity parameter C_J as:

C_J = \frac{R^2_J}{2\Gamma_{cav}\Gamma_J}   (14)

where R²_J/(4Γ_cav) is the rate of photon emission into the cavity mode as provided by the Fermi golden rule [28], since the coupling constant of the Hamiltonian of Eq. (10) is R_J/2 and Γ_cav is the full width at half maximum of the cavity mode. With this definition, the criterion for the intersubband system to be in the strong coupling regime is C_J > 1. The values of C_J are reported in the rightmost column of Tables I and II, and show that this definition is in accordance with our data. In particular we can observe that the coupling between the 2 → 3 transition and the fundamental mode has in general a lower cooperativity, and therefore a much higher discrepancy between the values of the coupling constant R_J and the measured splittings 2Ω^exp_23 is observed. This is because the dissipation effects of the 2 → 3 transition are more important than those of the 1 → 2 transition, probably due to the proximity of the GaAs phonon absorption band. According to the criterion C_J > 1, the 2 → 3 transition is not in strong coupling at T = 7 K, as can also be seen in Fig.10(c) and Fig.14(a). More generally, the importance of the dissipation effects on the onset of the strong coupling could explain the discrepancies between the light-matter coupling estimated from simple analytical formulas like Eq. (1) and experimental observations, especially in the case of large ISB linewidths or of metallic cavities with high losses such as those reported in Ref. [8]. IV. CONCLUSION In conclusion, we have presented a study of intersubband systems strongly coupled with 0D micro-resonators.
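Eq. (14) is straightforward to evaluate from the tabulated quantities; the snippet below reproduces two entries of Tables I and II to within rounding of the tabulated R_J and Γ_J:

```python
# Cooperativity parameter of Eq. (14): C_J = R_J^2 / (2 * Gamma_cav * Gamma_J).
# The input values below are taken from Tables I and II (all in THz).
def cooperativity(R, gamma_cav, gamma_J):
    return R**2 / (2.0 * gamma_cav * gamma_J)

# 1 -> 2 transition at T = 30 K (Table I): R = 1.41, Gamma = 0.53, Gamma_cav = 0.55
print(round(cooperativity(1.41, 0.55, 0.53), 1))  # 3.4: strong coupling (C > 1)

# 2 -> 3 transition at T = 7 K (Table II): R = 0.72, Gamma = 0.48, Gamma_cav = 0.77
print(round(cooperativity(0.72, 0.77, 0.48), 1))  # 0.7: below the C > 1 threshold
```

Small deviations from the tabulated C_J at other temperatures come from the rounding of R_J and Γ_J in the tables.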
Our analysis permits us to relate the polariton spectroscopic features to microscopic details of the heterostructure, such as the thermal distribution of electrons over the different subbands and the coupling between different collective oscillations in the quantum well. We interpret our experimental results using an effective medium approach, which connects the quantum simulation of the heterostructure with the electromagnetic modelling of the microresonator containing the quantum well slab. We also discuss some of the important aspects of the experimental system, such as finite linewidths or the charge transfer occurring at the metal-semiconductor interfaces. Here M is the 2N × 2N Hopfield-Bogoliubov matrix of the problem, and V_J is the eigenvector of rank 2N which corresponds to the eigenvalue W_J:

M = \begin{pmatrix} I_1 & C_{12} & \cdots & C_{1N} \\ C_{12} & I_2 & \cdots & C_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1N} & C_{2N} & \cdots & I_N \end{pmatrix}   (B6)

I_\alpha = \begin{pmatrix} -\tilde\omega_\alpha & 0 \\ 0 & \tilde\omega_\alpha \end{pmatrix}, \qquad C_{\alpha\beta} = \begin{pmatrix} -\Xi_{\alpha\beta} & -\Xi_{\alpha\beta} \\ \Xi_{\alpha\beta} & \Xi_{\alpha\beta} \end{pmatrix}   (B7)

V_J = {}^T(Y_{J1}, T_{J1}, \cdots, Y_{JN}, T_{JN})   (B8)

Since the matrix M is real and block-symmetric, it admits real eigenvalues with real eigenvectors. Moreover, if W_J is an eigenvalue, then −W_J is also an eigenvalue of M. Therefore the matrix M has N distinct positive eigenvalues W_J, which are the frequencies of the normal plasmon modes. The latter can be easily inferred from the numerical solution of the linear problem described by Eq. (B5). Let us now make explicit the coupling of the normal modes with the light mode. Since the eigenvectors V_J are real, it is easy to show that we have:

P_J + P^\dagger_J = \sum_\alpha (Y_{J\alpha} + T_{J\alpha})(p_\alpha + p^\dagger_\alpha)   (B9)

This relation, seen as a matrix-vector product, allows us to determine p_α + p†_α as a function of P_J + P†_J.
Let us define the N × N inverse matrix:

X_{\alpha J} = \left[(Y_{J\alpha} + T_{J\alpha})\right]^{-1}   (B10)

This leads to the light-matter coupling Hamiltonian of Eq. (10), where the effective plasma frequencies R_J of the normal modes are defined as:

R_J = \sum_\alpha \omega_{P\alpha} X_{\alpha J} \sqrt{\frac{W_J}{\tilde\omega_\alpha} f^o_\alpha f^w_\alpha}   (B11)

At this point, the Hamiltonian (10) describes N independent quantum oscillators coupled with a single cavity mode. Its Hopfield matrix is very similar to Eq. (B6), except that the coupling terms are non-zero only on the first two columns and first two rows. The characteristic polynomial of this problem can then be computed explicitly, and the result is:

\prod_J (\omega^2 - W^2_J)\left(\omega^2 - \omega^2_{cav} - \sum_J \frac{R^2_J \omega^2_{cav}}{\omega^2 - W^2_J}\right)   (B12)

Therefore the equation providing the light-coupled polariton states is:

\omega^2 - \omega^2_{cav} - \sum_J \frac{R^2_J \omega^2_{cav}}{\omega^2 - W^2_J} = 0   (B13)

In the case where the N_QW initial plasmons are uncoupled but oscillate at identical frequencies, the above formalism predicts that there is one bright excitation, with a collective oscillator strength that is N_QW times higher. This is easily seen from the above equation by setting W_J = const. The effective dielectric constant of the system is then defined through the relation ε_eff(ω)ω² = εω²_cav:

\frac{\varepsilon}{\varepsilon_{eff}(\omega)} = 1 + \sum_J \frac{R^2_J}{\omega^2 - W^2_J}   (B14)

The treatment of the problem is so far energy conserving. When introducing a Markovian relaxation mechanism with a rate Γ_J/2 and considering a basis of coherent (semiclassical) states, the above equation is transformed into Eq. (12) [30]. Let us discuss briefly the case of a heterostructure consisting of N_QW identical quantum wells. For simplicity, we consider only two plasmons from each quantum well, coupled by a constant Ξ_ab, and we consider that the plasmons from different wells are uncoupled, owing to a vanishing overlap of the wavefunctions between different wells.
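The diagonalization procedure can be illustrated on a toy two-plasmon version of the Hopfield-Bogoliubov problem. The 4×4 matrix below is an illustration in the spirit of Eqs. (B6)-(B7) (with the coupling written as (Ξ/2)(p_a + p_a†)(p_b + p_b†) per pair), not the paper's full N-mode matrix; its eigenvalues indeed come in pairs ±W_J:

```python
import numpy as np

# Toy Hopfield-Bogoliubov problem: two plasmons (frequencies wa, wb) coupled
# by (Xi/2)(pa + pa^+)(pb + pb^+). The Heisenberg equations of motion give a
# 4x4 non-Hermitian matrix in the basis (pa, pa^+, pb, pb^+).
wa, wb, Xi = 3.0, 5.0, 0.8

M = np.array([
    [ wa,    0.0,   Xi/2,  Xi/2],
    [ 0.0,  -wa,   -Xi/2, -Xi/2],
    [ Xi/2,  Xi/2,  wb,    0.0 ],
    [-Xi/2, -Xi/2,  0.0,  -wb  ],
])
eig = np.linalg.eigvals(M).real
WJ = np.sort(eig[eig > 0])  # keep the two positive normal-mode frequencies

# For this toy coupling the secular equation of the normal modes is
# (W^2 - wa^2)(W^2 - wb^2) = Xi^2 * wa * wb
s = wa**2 + wb**2
d = np.sqrt((wa**2 - wb**2) ** 2 + 4 * Xi**2 * wa * wb)
W_analytic = np.sqrt(np.array([(s - d) / 2, (s + d) / 2]))
print(np.allclose(WJ, W_analytic))  # True
```

The same structure scales to N modes: build the 2N×2N matrix from the blocks of Eq. (B7), keep the N positive eigenvalues W_J, and read the coefficients Y_Jα, T_Jα off the eigenvectors.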
In this case the matter Hamiltonian reads:

\hat H_{mat} = \sum_{i=1}^{N_{QW}} \tilde\omega_a p^\dagger_{a,i} p_{a,i} + \sum_{i=1}^{N_{QW}} \tilde\omega_b p^\dagger_{b,i} p_{b,i} + \frac{1}{2}\Xi_{ab}\sum_{i=1}^{N_{QW}} (p^\dagger_{a,i} + p_{a,i})(p^\dagger_{b,i} + p_{b,i})   (B15)

The light-matter interaction Hamiltonian is respectively:

\hat H_{int} = i\Omega_a \sum_{i=1}^{N_{QW}} (a^\dagger - a)(p^\dagger_{a,i} + p_{a,i}) + i\Omega_b \sum_{i=1}^{N_{QW}} (a^\dagger - a)(p^\dagger_{b,i} + p_{b,i})   (B16)

To simplify these Hamiltonians, we introduce the following new bosonic operators [4]:

\pi^\dagger_{a,b} = \frac{1}{\sqrt{N_{QW}}} \sum_{i=1}^{N_{QW}} p^\dagger_{(a,b),i}   (B17)

Here the normalization factor 1/√N_QW arises from the requirement that [π_{a,b}, π†_{a,b}] = 1. Let us then compute the commutator of π†_a with the Hamiltonian (B15):

[\hat H_{mat}, \pi^\dagger_a] = \tilde\omega_a \frac{1}{\sqrt{N_{QW}}}\sum_{i=1}^{N_{QW}} p^\dagger_{a,i} + \frac{1}{2}\Xi_{ab}\frac{1}{\sqrt{N_{QW}}}\sum_{i=1}^{N_{QW}} (p^\dagger_{b,i} + p_{b,i}) = \tilde\omega_a \pi^\dagger_a + \frac{1}{2}\Xi_{ab}(\pi_b + \pi^\dagger_b)   (B18)

A similar commutation relation holds for π†_b. These commutation relations are unchanged if we replace Eq. (B15) with the effective Hamiltonian:

\hat H_{mat} = \tilde\omega_a \pi^\dagger_a \pi_a + \tilde\omega_b \pi^\dagger_b \pi_b + \frac{1}{2}\Xi_{ab}(\pi^\dagger_a + \pi_a)(\pi^\dagger_b + \pi_b)   (B19)

Expressing the interaction Hamiltonian with the new operators, we obtain:

\hat H_{int} = i\Omega_a \sqrt{N_{QW}} (a^\dagger - a)(\pi^\dagger_a + \pi_a) + i\Omega_b \sqrt{N_{QW}} (a^\dagger - a)(\pi^\dagger_b + \pi_b)   (B20)

Formally, the resulting Hamiltonian is equivalent to Eq. (7), except that the light-matter coupling constants for a single quantum well are multiplied by √N_QW. This justifies our approach in Part III. FIG. 1: (a) Schematic of an individual metal-metal patch cavity with thickness L_cav, containing 25 GaAs/AlGaAs quantum wells. The structure is probed in reflectivity measurements, with an incident angle θ, in both TM (p-) and TE (s-) polarization. (b) Microscope picture of the top surface of the sample, containing an array of patch cavities. The size of the patches is s, and the period of the array is p. FIG. 2: (a) Typical room temperature reflectivity spectra for two samples with an array of patches with s = 12µm. Dashed line: undoped reference sample.
Solid line: sample with doped quantum wells, but with the same overall thickness (L_cav = 1.5µm) as the reference. (b) Resonance frequency of the doped sample, measured at 300 K, as a function of the inverse patch size multiplied by the speed of light (c/s). K denotes the order of the resonant mode, and the grey area corresponds to the optical phonon band. FIG. 3: Typical conduction band profile of the periodic quantum well stack, including self-consistent Hartree corrections. We have highlighted a single quantum well together with the moduli squared of the first 4 wavefunctions, as well as the Fermi-level energy, E_F. FIG. 4: (a) Multipass absorption of the quantum well medium, measured as a function of the temperature and frequency. The two features around 3 THz and 5 THz correspond respectively to the 1→2 and 2→3 transitions. (b) The absorption spectrum at T = 7 K. The inset illustrates the experimental configuration. FIG. 5: (a) Experimental configuration for reflectivity measurements with variable polarization of the incident beam. (b) Low temperature (T = 7 K) reflectivity spectra performed at an incident angle θ = 45°. Solid curve: TE linear polarization. Dashed curve: unpolarized light. The size of the patches is s = 11.1µm. FIG. 6: (a) Experimental configuration for reflectivity measurements with different incident angles. (b) Low temperature (T = 7 K) reflectivity spectra performed with unpolarized light, for an angle θ = 10° (solid line) and θ = 45° (dashed line). The size of the patches is s = 12.5µm. FIG. 7: Frequency-temperature contour plots of the reflectivity for cavities with different sizes s. The resonant frequencies of the cavities (indicated by arrows) have been chosen so that the microcavity modes are tuned between the 1→2 and 2→3 ISB transitions. The dashed lines indicate the values of the ISB resonances measured at high temperature, T > 140 K, according to the data in Fig.4.
FIG. 8: Block diagram of the fitting procedure used to exploit the experimental data. Top: fitting the experimental spectra (reflectivity and absorption) with the electromagnetic model, based on the dielectric function of Eq. (12), provides a phenomenological estimate of the quantities R^exp_J, W^exp_J and Γ_J for each temperature. Bottom: the quantities R_J, W_J are computed from the microscopic quantum model, using the total number of electrons N_tot in the well. For each temperature T we retain the value of N_tot which best reproduces the experimental values R^exp_J and W^exp_J. FIG. 10: Examples of electromagnetic simulations (solid curves) of the experimental spectra (dotted lines) for different temperatures. (a) Multipass transmission experiment. (b), (c) Reflectivity measurements at θ = 45° for arrays of patches with dimensions s = 12.5µm and s = 7.8µm, respectively. FIG. 11: (a) The effective plasma frequencies R_{1→2} and R_{2→3} for the two plasmon resonances, as extracted from experiments (dots) or computed from the quantum approach (solid lines). (b) Measured plasmon resonances W_{1→2}, W_{2→3} and their linewidths Γ_{1→2}, Γ_{2→3} (dots and error bars), as compared to the results from the quantum approach. In dashed lines: the subband separations ω_{1→2} and ω_{2→3} of the bare electronic states. FIG. 12: (a) Population differences between different subband levels, as well as the total electronic sheet density as a function of the temperature, as computed from the quantum model. (b) Comparison between the effective plasma frequencies of the two resonances in the case without coupling (dashed lines) and when the plasmon-plasmon coupling is taken into account (solid lines). The comparison between the experimental spectra (dotted lines) and the effective medium electromagnetic simulations (continuous lines) is presented in Fig.13 at T = 7 K. For simplicity, we consider only measurements with an incident angle of θ = 10°, where only the odd resonances are excited.
The agreement between the experiments and the electromagnetic simulations is excellent. The simulations reproduce correctly both the positions of the different resonances and the evolution of their reflectivity dips. FIG. 13: (a) and (b) Experimental reflectivity spectra at θ = 10° (dotted curves) and simulations (solid lines) for all measured cavities, at low temperature T = 7 K. The number next to each spectrum indicates the size of the patch s. FIG. 14: Dispersion curves for the coupling between the two ISB plasmons and the fundamental cavity mode. The solid lines are electromagnetic simulations, and the triangles are measured polariton frequencies. (a) T = 7 K, (b) T = 30 K, (c) T = 50 K, (d) T = 100 K. The grey area indicates the polariton gap for the 1 → 2 ISB plasmon. FIG. 15: Dispersion curves computed from the different models mentioned in the text.

TABLE I: 1 → 2 transition.
T (K) | R_{1→2}^a | 2Ω^exp_12^b | Γ_{1→2}^a | Γ_cav^c | C_{1→2}
7     | 1.64 | 1.40 | 0.20 | 0.55 | 12.77
30    | 1.41 | 1.20 | 0.53 | 0.55 | 3.4
50    | 1.23 | 1.04 | 0.60 | 0.55 | 2.3
100   | 0.90 | 0.70 | 0.65 | 0.55 | 1.1
a In THz, from the effective medium model. b In THz, from the dispersion curves. c In THz, simulation of a s = 12.5µm cavity without carriers.

TABLE II: 2 → 3 transition.
T (K) | R_{2→3}^a | 2Ω^exp_23^b | Γ_{2→3}^a | Γ_cav^c | C_{2→3}
7     | 0.72 | 0.24 | 0.48 | 0.77 | 0.7
30    | 1.18 | 0.61 | 0.58 | 0.77 | 1.6
50    | 1.46 | 0.80 | 0.68 | 0.77 | 2.0
100   | 1.35 | 0.82 | 0.70 | 0.77 | 1.7
a In THz, from the effective medium model. b In THz, from the dispersion curves. c In THz, simulation of a s = 7.8µm cavity without carriers.

Such modelling can be further refined by including, for instance, inhomogeneous broadening effects due to disorder, as in the numerical approach proposed in Ref. [25].
Since we consider samples with high electronic densities, we expect other fundamental effects, such as exchange-correlation and excitonic-like shifts [29], to be negligible. We therefore believe that the approach described here, which is a combination of microscopic self-consistent modelling of the heterostructure and electromagnetic simulations through the effective medium constant (Eq. (12)), could be applied as a designer tool for ISB light-matter coupled quantum devices.

We gratefully acknowledge support from the French National Research Agency in the frame of its Nanotechnology and Nanosystems program P2N, Project No. ANR-09-NANO-007. We acknowledge financial support from the ERC grant "ADEQUATE", and from the Austrian Science Fund (FWF).

Appendix A: Periodic Poisson-Schrödinger solver

In a typical self-consistent Poisson-Schrödinger problem one determines the Hartree correction of the potential V_H(z), which includes the contribution of the static charges through the Poisson equation [2]. This contribution is added to the heterostructure potential, and the Schrödinger equation is solved numerically, typically through the shooting method. As seen from Fig. 9, the doped quantum wells create a periodic potential. Hence the Hartree correction V_H(z) to the heterostructure potential, which depends on the wavefunctions, is also periodic. However, such a self-consistent Poisson-Schrödinger problem is inconvenient to treat with the shooting method, since the energy levels of the quantum wells are heavily degenerate. We therefore treat only a single period with extension d (such that V_H(z + d) = V_H(z)). The most general solution of the Poisson equation for a 2D charge density ρ(z) is then of the form of Eq. (A1) [2]. Here ρ(z) contains both the contribution from the wavefunctions and the donor impurities. We choose B = 0 and we require that the charge density is also periodic: ρ(z + d) = ρ(z).
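The constant-fixing argument of Appendix A can be mimicked numerically. The sketch below is not the paper's solver: it works in normalized units (e²/ε = 1), and the test density ρ(z) = sin(2πz/d), globally neutral over one period, is a hypothetical choice. It only illustrates that fixing the two integration constants as described makes both V_H(z) and dV_H(z)/dz come out d-periodic.

```python
import math

def poisson_periodic(rho, d, n=2000):
    """Solve V'' = -rho on [0, d] for a globally neutral, d-periodic rho,
    choosing the integration constants so that V and V' are d-periodic."""
    h = d / n
    z = [i * h for i in range(n + 1)]
    r = [rho(zi) for zi in z]
    # F(z) = int_0^z rho dz' (trapezoid); neutrality makes F(d) ~ 0, so F is periodic
    F = [0.0]
    for i in range(n):
        F.append(F[-1] + 0.5 * h * (r[i] + r[i + 1]))
    # choose V'(z) = -F(z) + C with C = <F> over one period, so that V is also periodic
    C = sum(F[:-1]) / n
    Vp = [-Fi + C for Fi in F]
    V = [0.0]
    for i in range(n):
        V.append(V[-1] + 0.5 * h * (Vp[i] + Vp[i + 1]))
    return z, V, Vp

d = 3.0
z, V, Vp = poisson_periodic(lambda x: math.sin(2 * math.pi * x / d), d)
# periodicity of both the potential and its derivative
print(abs(V[-1] - V[0]), abs(Vp[-1] - Vp[0]))
# compare with the analytic solution (d/2pi)^2 sin(2 pi z / d) for this test density
amp = (d / (2 * math.pi)) ** 2
err = max(abs(V[i] - amp * math.sin(2 * math.pi * z[i] / d)) for i in range(len(z)))
print(err)
```

For this choice of constants the potential returned to its starting value after one period, which is the numerical analogue of the "globally neutral cylindrical space" statement below.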
Moreover, we require that the starting point z = 0 of the period is chosen so that the period is globally neutral (Eq. (A2)). Hence the constant A is uniquely determined and Eq. (A1) becomes Eq. (A3). It is easy to verify with this formula that both the potential V_H(z) and its derivative dV_H(z)/dz are d-periodic. Hence Eq. (A3) describes the unique solution of the Poisson equation on a globally neutral cylindrical space with a circumference d.

Appendix B: Multiband Hamiltonian

The diagonalization of the Hamiltonian (7) is performed in two steps. First, we diagonalize the matter part of the Hamiltonian, which includes the inter-plasmon interactions. The result is a set of new independent normal collective plasmon modes, which are coherent superpositions of plasmons from different transitions. We then explicit the interaction of the normal modes with the light mode. The new coupling constants of the light-matter interaction then describe the collective oscillator strength of the normal modes. We therefore start with the matter Hamiltonian (B1). If there are N optically active plasmons in the system, then, after the diagonalization is performed, there will be N independent normal modes. The bosonic operators P_J of the normal modes are linear combinations of the initial ones (Eq. (B2)). The latter are eigenmodes of the Hamiltonian (B1) with eigenfrequencies W_J only if Eq. (B3) holds. Using the symmetry property Ξ_αβ = Ξ_βα and the relation (B4), as well as its hermitian conjugate, Eq. (B3) leads to the following eigenvalue problem.

J. Faist, F. Capasso, D. L. Sivco, C. Sirtori, A. L. Hutchinson, and A. Y. Cho, Science 264, 553-556 (1994).
T. Ando, A. B. Fowler, and F. Stern, Rev. Mod. Phys. 52, 437 (1982).
D. Dini, R. Kohler, A. Tredicucci, G. Biasiol, and L. Sorba, Phys. Rev. Lett. 90, 116401 (2003).
C. Ciuti, G. Bastard, and I. Carusotto, Phys. Rev. B 72, 115303 (2005).
Y. Todorov, A. M. Andrews, I. Sagnes, R. Colombelli, P. Klang, G. Strasser, and C. Sirtori, Phys. Rev. Lett. 102, 186402 (2009).
A. A. Anappara, S. De Liberato, A. Tredicucci, C. Ciuti, G. Biasiol, L. Sorba, and F. Beltram, Phys. Rev. B 79, 201303(R) (2009).
P. Jouy, A. Vasanelli, Y. Todorov, A. Delteil, G. Biasiol, L. Sorba, and C. Sirtori, Appl. Phys. Lett. 98, 231114 (2011).
M. Geiser, F. Castellano, G. Scalari, M. Beck, L. Nevou, and J. Faist, Phys. Rev. Lett. 108, 106402 (2012).
L. C. Andreani, in Proceedings of the International School of Physics Enrico Fermi, Course CL, B. Deveaud, A. Quattropani, and P. Schwendimann (eds.) (IOS Press, Amsterdam, 2003).
Y. Todorov and C. Sirtori, Phys. Rev. B 85, 045304 (2012).
Y. Todorov, L. Tosetto, J. Teissier, A. M. Andrews, P. Klang, R. Colombelli, I. Sagnes, G. Strasser, and C. Sirtori, Opt. Express 18, 13886-13907 (2010).
M. Helm, in Intersubband Transitions in Quantum Wells: Physics and Device Applications I, H. C. Liu and F. Capasso, eds. (Academic Press, San Diego, 2000).
V. Savona, L. C. Andreani, P. Schwendimann, and A. Quattropani, Solid State Commun. 93, 733-739 (1995).
R. J. Warburton, K. Weilhammer, J. P. Kotthaus, M. Thomas, and H. Kroemer, Phys. Rev. Lett. 80, 2185 (1998).
S. M. Sze, Semiconductor Devices, Physics and Technology (John Wiley & Sons, 1985).
Y. Todorov, A. M. Andrews, R. Colombelli, S. De Liberato, C. Ciuti, P. Klang, G. Strasser, and C. Sirtori, Phys. Rev. Lett. 105, 196402 (2010).
Y. Todorov and C. Minot, J. Opt. Soc. Am. A 24, 3100-3114 (2007).
S. Collin, F. Pardo, R. Teissier, and J.-L. Pelouard, Phys. Rev. B 63, 033107 (2001).
P. Jouy, Y. Todorov, A. Vasanelli, R. Colombelli, I. Sagnes, and C. Sirtori, Appl. Phys. Lett. 98, 021105 (2011).
H. Haug and S. W. Koch, Quantum Theory of the Optical and Electronic Properties of Semiconductors (World Scientific Publishing, 2004).
L. Wendler and T. Kraft, Phys. Rev. B 54, 11436-11456 (1996).
S. Zanotto, G. Biasiol, R. DeglInnocenti, L. Sorba, and A. Tredicucci, Appl. Phys. Lett. 97, 231123 (2010).
1D Poisson/Schrödinger solver program developed by Dr. Gregory Snider, University of Notre Dame.
S. Luin, V. Pellegrini, F. Beltram, X. Marcadet, and C. Sirtori, Phys. Rev. B 64, 041306(R) (2001).
C. Metzner and G. H. Dohler, Phys. Rev. B 60, 11005 (1999).
A. Wickenbrock, P. Phoonthong, and F. Renzoni, J. Mod. Opt. 58, 1310-1316 (2011).
J. McKeever, A. Boca, A. D. Boozer, J. R. Buck, and H. J. Kimble, Nature 425, 268-271 (2003).
C. Ciuti and I. Carusotto, Phys. Rev. A 74, 033811 (2006).
M. O. Manasreh, F. Szmulowicz, T. Vaughan, K. R. Evans, C. E. Stutz, and D. W. Fischer, Phys. Rev. B 43, 9996 (1991).
S. M. Dutra and K. Furuya, Phys. Rev. A 57, 3050 (1998).
[]
[ "Work Relation and the Second Law of Thermodynamics in Nonequilibrium Steady States", "Work Relation and the Second Law of Thermodynamics in Nonequilibrium Steady States" ]
[ "Naoko Nakagawa \nCollege of Science\nIbaraki University\n310-8512MitoIbarakiJapan\n" ]
[ "College of Science\nIbaraki University\n310-8512MitoIbarakiJapan" ]
[]
FIG. 1: An example of the system, where the position ν of a particle (green online) can be controlled externally. J1 and J2 are the heat currents from the heat baths to the system.
10.1103/physreve.85.051115
[ "https://arxiv.org/pdf/1109.1374v4.pdf" ]
35,575,768
1109.1374
1439bc8153b85690ae3c8b58afbb3a123d7d610d
Work Relation and the Second Law of Thermodynamics in Nonequilibrium Steady States
Naoko Nakagawa, College of Science, Ibaraki University, 310-8512, Mito, Ibaraki, Japan
7 Dec 2011 (Dated: December 8, 2011)

FIG. 1: An example of the system, where the position ν of a particle (green online) can be controlled externally. J1 and J2 are the heat currents from the heat baths to the system.

We extend Jarzynski's work relation and the second law of thermodynamics to a heat conducting system which is operated by an external agent. These extensions contain a new nonequilibrium contribution expressed as the violation of the (linear) response relation caused by the operation. We find that a natural extension of the minimum work principle involves information about the time-reversed operation, and is far from straightforward. Our work relation may be tested experimentally, especially when the temperature gradient is small.

Thermodynamics is a universal framework for macroscopic systems in equilibrium. The second law, which is at the heart of thermodynamics, gives strict limitations on macroscopic operations and provides fundamental concepts such as irreversibility and minimum work. The idea of minimum work leads to the useful Gibbs relation, which represents the work associated with a thermodynamic operation as the difference in the free energy. To develop similarly useful thermodynamics for nonequilibrium systems is a fascinating challenge. In [1-7], attempts have been made to construct operational thermodynamics for nonequilibrium steady states (NESS). A central idea was to replace the "bare heat" in NESS by its "renormalized" counterpart called excess heat. Recently there has been considerable progress in nonequilibrium physics, which in particular led to the fluctuation theorem [8] and the Jarzynski equality [9,10].
The former gives an exact equality for the entropy production in NESS, which is connected to response relations. The latter provides an exact relation between operational work and the free energy, not only for quasi-static but also for general operations in equilibrium. It is also directly connected to the second law of thermodynamics. In the present Letter, we focus on the mechanical work associated with an external operation in NESS realized in classical heat conducting systems. We derive a very natural extension (1) of the Jarzynski equality to NESS, which may be tested experimentally. The extended equality straightforwardly implies the Gibbs relation (3) in the quasi-static limit and the second law (4). These relations contain new nonequilibrium contributions expressed as the violation, caused by the external operation, of the linear response relation. The derivation of the results is essentially straightforward and is based on the detailed fluctuation theorem (also known as the microscopic reversibility or the local detailed balance condition). We hope that these findings become crucial steps in the understanding and construction of thermodynamics for NESS.

Setup: Our theory can be developed in various nonequilibrium settings of classical stochastic systems. For simplicity we here focus on heat conduction, and consider a system which is attached to two heat baths with inverse temperatures β1 and β2 and has controllable parameters ν. An example is a system of N particles in a container in which the position ν of one of the particles is controlled by the external agent (see Fig. 1). The inverse temperatures β1 and β2 are fixed throughout, and are often omitted. We define β := (β1 + β2)/2 and Δβ := β1 − β2. The coordinates of the N particles are collectively denoted as Γ = (r1, ..., rN; p1, ..., pN), and its time-reversal as Γ* = (r1, ..., rN; −p1, ..., −pN).
The time evolution of the system is governed by deterministic dynamics according to the Hamiltonian H_ν(Γ) and stochastic Markovian dynamics due to coupling to the two external heat baths. We impose the time-reversal symmetry H_ν(Γ) = H_ν(Γ*). When discussing the time evolution of Γ, we denote by Γ(t) its value at time t, and by Γ̂ = (Γ(t))_{t∈[−τℓ,τℓ]} the path in the whole time interval [−τℓ, τℓ]. The heat baths may be realized in standard manners such as "thermal walls" (see, e.g., [11]) or the Langevin noise near the walls. The only (and the essential) requirement is that the detailed fluctuation theorem (see (11) below) is valid. By J_k(Γ̂; t), we denote the heat current that flows from the k-th bath to the system at time t in the path Γ̂ = (Γ(t))_{t∈[−τℓ,τℓ]} [12]. We write J(Γ̂; t) = (J_1(Γ̂; t) − J_2(Γ̂; t))/2, which is the heat current from the first to the second heat bath.

FIG. 2: A sketch of the protocol ν̂. We assume τℓ − τs ≫ τr and τs − τo ≫ τr, where τr is the relaxation time of the system.

We shall assume that the system settles to a unique NESS when it evolves for a sufficiently long time with fixed ν. For later convenience we shall choose and fix three time scales 0 < τo < τs < τℓ such that τℓ − τs ≫ τr and τs − τo ≫ τr, where τr is the relaxation time of the system. See Fig. 2. We suppose that an external agent performs an operation on the system by changing the parameters ν according to a prefixed protocol. A protocol is specified by a function ν(t) of t ∈ [−τℓ, τℓ]. In order to study transitions between NESS, we assume that ν(t) varies only for t ∈ [−τo, τo], so that ν(t) = ν for t ∈ [−τℓ, −τo] and ν(t) = ν′ for t ∈ [τo, τℓ]. We denote by ν̂ = (ν(t))_{t∈[−τℓ,τℓ]} the whole protocol, by ν̂† = (ν(−t))_{t∈[−τℓ,τℓ]} the time-reversal of ν̂, and by (ν) the protocol in which the parameters are kept constant at ν.
During the operation, the external agent performs the mechanical work

W(Γ̂) = ∫_{−τo}^{τo} dt ∂_ν H_ν(Γ(t))|_{ν=ν(t)} ν̇(t)

on the system. We denote by Q_t(Γ̂) = ∫_{−τs}^{τs} dt J(Γ̂; t) the heat transferred from the first to the second heat bath during [−τs, τs]. Similarly, we write Q_t^i(Γ̂) = ∫_{−τℓ}^{−τs} dt J(Γ̂; t) and Q_t^f(Γ̂) = ∫_{τs}^{τℓ} dt J(Γ̂; t). The time evolution of the system is described by a Markov process. We denote by T_ν̂[Γ̂] the transition probability associated with a path Γ̂ in a protocol ν̂. It is normalized as ∫DΓ̂ T_ν̂[Γ̂] δ(Γ(−τℓ) − Γ_i) = 1 for any initial state Γ_i, where ∫DΓ̂ (···) denotes the integral over all the possible paths Γ̂. For any function f(Γ̂), we define its average in the protocol ν̂ as ⟨f⟩_ν̂ := ∫DΓ̂ ρ^st_ν(Γ(−τℓ)) T_ν̂[Γ̂] f(Γ̂), where ρ^st_ν(Γ) is the probability distribution of the unique NESS corresponding to the parameters ν = ν(−τℓ).

Jarzynski equality for NESS: In [15], we introduced a nonequilibrium free energy F(ν) which is a function of the parameters ν (as well as β1 and β2), and coincides with the equilibrium free energy for β1 = β2. Here we show that, for any β1, β2, and operation ν̂, the exact identity

⟨e^{−β(W−ΔF)}⟩_ν̂ = ⟨e^{Δβ Q_t}⟩^m_{ν̂†}   (1)

is valid, where ΔF := F(ν′) − F(ν). The identity (1) is our most basic result. Here we introduced the modified expectation

⟨f⟩^m_ν̂ := ⟨e^{Δβ Q_t^i/2} f e^{Δβ Q_t^f/2}⟩_ν̂ / ⟨e^{Δβ Q_t^i/2} e^{Δβ Q_t^f/2}⟩_ν̂,   (2)

where f(Γ̂) is an arbitrary function [13]. Obviously (1) reduces to the celebrated Jarzynski equality for β1 = β2. We would like to propose (1) as the most natural nonequilibrium extension of the Jarzynski equality to NESS. Let us stress that W(Γ̂) is the standard mechanical work, and the left-hand side of (1) can be evaluated experimentally (exactly as in the case of the original Jarzynski equality). Although the right-hand side may appear artificial, we shall show below that this quantity can also be evaluated experimentally (at least when the NESS is close to equilibrium).
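Since (1) reduces to the original Jarzynski equality ⟨e^{−βW}⟩ = e^{−βΔF} when β1 = β2, that equilibrium limit can be checked exactly, not merely statistically, by enumerating every path of a small Markov model whose transition kernel obeys detailed balance at the instantaneous parameter. The two-level toy model below, with a hypothetical field h(t) switched stepwise and a heat-bath update after each switch, is only an illustrative sketch and not the heat-conducting system considered here.

```python
import itertools, math

beta = 1.3                      # inverse temperature (the equal-baths limit beta1 = beta2)
protocol = [0.0, 0.4, 0.9, 1.5] # stepwise values of the control field h(t)

def energy(s, h):               # two states: E(0) = 0, E(1) = h
    return h if s == 1 else 0.0

def gibbs(h):                   # equilibrium weights and partition function at field h
    w = [1.0, math.exp(-beta * h)]
    Z = w[0] + w[1]
    return [wi / Z for wi in w], Z

p0, Z0 = gibbs(protocol[0])
_, Zf = gibbs(protocol[-1])
n = len(protocol) - 1

jarzynski_avg = 0.0
mean_work = 0.0
for path in itertools.product([0, 1], repeat=n + 1):
    prob, work = p0[path[0]], 0.0
    for k in range(n):
        # work step: switch the field at fixed state
        work += energy(path[k], protocol[k + 1]) - energy(path[k], protocol[k])
        # relaxation step: heat-bath update at the new field (obeys detailed balance)
        peq, _ = gibbs(protocol[k + 1])
        prob *= peq[path[k + 1]]
    jarzynski_avg += prob * math.exp(-beta * work)
    mean_work += prob * work

delta_F = -math.log(Zf / Z0) / beta
print(jarzynski_avg, Zf / Z0)   # exactly equal: <exp(-beta W)> = exp(-beta dF)
print(mean_work, delta_F)       # second law: <W> >= dF
```

Because every path is enumerated, the identity holds to machine precision, while the average work strictly exceeds ΔF for this non-quasi-static protocol, as the equilibrium limit of the second law requires.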
To be specific, we show below in (7) that this modified expectation is directly connected to the well-known linear response relation, by rewriting it in terms of heat currents. We recall that ⟨···⟩_{ν̂†} represents the average in a physically natural time evolution with the time-reversed protocol. We also note that since W(Γ̂) = 0 for a constant protocol ν̂ = (ν), the equality (1) implies ⟨e^{Δβ Q_t}⟩^m_{(ν)} = 1. This is a version of the integrated fluctuation theorem and is related to a response relation as usual (see [14]). We can say that the equality (1) reveals an intimate relation between the mechanical work in NESS and the response relation. We shall see later that a deviation of ⟨e^{Δβ Q_t}⟩^m_ν̂ from 1 corresponds to a violation of the response relation.

Quasi-static limit: Since W(Γ̂) essentially does not fluctuate in the quasi-static limit, one has ⟨e^{−βW}⟩_ν̂ = e^{−β⟨W⟩_ν̂}. By also noting that ⟨W⟩_{ν̂†} = −⟨W⟩_ν̂ in this limit, (1) reduces to

⟨W⟩_ν̂ = ΔF + β^{−1} log⟨e^{Δβ Q_t}⟩^m_ν̂,   (3)

which is an exact relation corresponding to the Gibbs relation in equilibrium thermodynamics. The equilibrium Gibbs relation leads to potentials which describe macro- or mesoscopic forces. The equality (3), however, implies that we may not have such potentials in NESS, as log⟨e^{Δβ Q_t}⟩^m_ν̂ is not necessarily described by the difference of a state function.

The second law for NESS: From Jensen's inequality, we have log⟨e^{−β(W−ΔF)}⟩_ν̂ ≥ −β(⟨W⟩_ν̂ − ΔF), which implies

⟨W⟩_ν̂ ≥ ΔF − β^{−1} log⟨e^{Δβ Q_t}⟩^m_{ν̂†},   (4)

where the equality holds in the quasi-static limit. We believe that this is a natural extension of the second law of thermodynamics to operations between NESS. It is notable that the right-hand side involves a quantity in the reversed protocol ν̂†. The inequality (4) implies that the minimum work principle is not extended straightforwardly to NESS. The quantity equated with the free energy difference is not the work but the sum of the work and β^{−1} log⟨e^{Δβ Q_t}⟩^m_{ν̂†}.
This apparently means that one must invoke the reversed protocol ν̂† to find the limitation on the work.

Expansion in the weak nonequilibrium regime: We define a dimensionless parameter indicating the degree of nonequilibrium by ε = |Δβ|/β. We here deal with systems with small ε and ignore contributions of O(ε³). Let us now derive a compact approximate expression (7) for the right-hand side of (1). From the definition (2) of the modified expectation, we have

log⟨e^{Δβ Q_t}⟩^m_ν̂ = log⟨e^{Δβ Q_t^i/2} e^{Δβ Q_t} e^{Δβ Q_t^f/2}⟩_ν̂ − log⟨e^{Δβ Q_t^i/2} e^{Δβ Q_t^f/2}⟩_ν̂.   (5)

By applying the cumulant expansion to the right-hand side and arranging the result by order, we have

log⟨e^{Δβ Q_t}⟩^m_ν̂ = Δβ⟨Q_t⟩_ν̂ + (Δβ²/2)⟨Q_t; (Q_t^i + Q_t + Q_t^f)⟩_ν̂ + O(ε³),   (6)

where ⟨A; B⟩ = ⟨AB⟩ − ⟨A⟩⟨B⟩. Since Q_t(Γ̂) = ∫_{−τs}^{τs} dt J(Γ̂; t) and Q_t^i(Γ̂) + Q_t(Γ̂) + Q_t^f(Γ̂) = ∫_{−τℓ}^{τℓ} dt J(Γ̂; t), we have

log⟨e^{Δβ Q_t}⟩^m_ν̂ = Δβ ∫_{−τs}^{τs} dt J^ν̂_viol(t) + O(ε³),   (7)

where we have defined

J^ν̂_viol(t) = ⟨J(t)⟩_ν̂ + (Δβ/2) ∫_{−τℓ}^{τℓ} ds ⟨J(t); J(s)⟩_ν̂.   (8)

In the steady protocol (ν), we have the linear response relation (LRR) for heat currents [17,18] and thus J^{(ν)}_viol(t) = 0 (more exactly, O(ε²)). More generally, the equality ⟨e^{Δβ Q_t}⟩^m_{(ν)} = 1 gives an exact response relation for ⟨J(t)⟩_{(ν)} because it connects ⟨J(t)⟩_{(ν)} to the higher cumulants of J(Γ̂; t). When there is an operation, the LRR is violated in general and J^ν̂_viol(t) does not vanish. We can thus interpret J^ν̂_viol(t) as the "violation of the LRR" due to the external operation. Similarly, the deviation of ⟨e^{Δβ Q_t}⟩^m_ν̂ from 1 can be regarded as a violation of the exact response relation. We have thus reached the most important interpretation of the equality (1): the mechanical work in NESS is related to the violation of the response relation. In equilibrium operations at Δβ = 0, we see that J^ν̂_viol(t) = ⟨J(t)⟩_ν̂ corresponds to the heat current induced by the external operation. This enables us to intuitively understand the role played by J^ν̂_viol(t) in a weak NESS.
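The truncation behind (6) can be illustrated on any toy distribution: the second-order cumulant approximation of log⟨e^{λQ}⟩ misses only O(λ³). In the sketch below, Q takes three hypothetical values (not a physical heat current), and the residual shrinks roughly eightfold when λ is halved, consistent with a cubic leading error.

```python
import math

# a toy distribution for Q with O(1) cumulants (hypothetical values, for illustration)
vals  = [-1.0, 0.5, 2.0]
probs = [0.3, 0.5, 0.2]

def log_mgf(lam):
    """Exact log <e^{lam Q}> for the discrete distribution."""
    return math.log(sum(p * math.exp(lam * v) for p, v in zip(probs, vals)))

mean = sum(p * v for p, v in zip(probs, vals))
var  = sum(p * v * v for p, v in zip(probs, vals)) - mean ** 2

residuals = []
for lam in (0.02, 0.01):
    approx = lam * mean + 0.5 * lam ** 2 * var   # first two cumulants only
    residuals.append(log_mgf(lam) - approx)      # what the O(lam^3) terms contribute
print(residuals)
```

Halving λ divides the residual by about 2³ = 8, which is the numerical signature of the O(ε³) remainder quoted in (6) and (7).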
In an equilibrium system the induced current ⟨J(t)⟩_ν̂ requires no "cost", and hence does not appear in thermodynamic relations. In a NESS, on the other hand, any heat current is coupled to the temperature difference. This is the reason that we have Δβ⟨J(t)⟩_ν̂ in our thermodynamic relation.

Derivation of (17): We shall prove our main observation (1). The proof relies on the detailed fluctuation theorem (11) and the exact representation (14) of the probability distribution of NESS. It is essentially straightforward. Let us decompose the time intervals as [−τℓ, τℓ] = [−τℓ, −τs] ∪ [−τs, τs] ∪ [τs, τℓ], and, correspondingly, a path as Γ̂ = (Γ̂_i, Γ̂_m, Γ̂_f). For an arbitrary function f of Γ̂_m = (Γ(t))_{t∈[−τs,τs]}, the following expectation is naturally decomposed as

⟨e^{Δβ Q_t^i(Γ̂)/2} f e^{Δβ Q_t^f(Γ̂)/2}⟩_ν̂ = ∫dΓ dΞ ⟨e^{Δβ Q_t^i/2}⟩^{(ν)}_{st,Γ} [f]^ν̂_{Γ,Ξ} ⟨e^{Δβ Q_t^f/2}⟩^{(ν′)}_{Ξ,st},   (9)

where we used Q_t^i(Γ̂) = Q_t^i(Γ̂_i) and Q_t^f(Γ̂) = Q_t^f(Γ̂_f). Here ⟨···⟩_{st,Γ} and ⟨···⟩_{Ξ,st} are properly defined conditioned expectations with a fixed final state Γ or initial state Ξ [16]. We also introduced the unnormalized expectation [···] by

[f]^ν̂_{Γ,Ξ} = ρ^st_ν(Γ) ∫DΓ̂_m f(Γ̂_m) δ(Γ(−τs) − Γ) δ(Γ(τs) − Ξ) T_ν̂[Γ̂_m],   (10)

where T_ν̂[Γ̂_m] is the transition probability for Γ̂_m. The transition probability T_ν̂[Γ̂_m] is known to satisfy the detailed fluctuation theorem,

T_ν̂[Γ̂_m] e^{Σ_{k=1}^{2} β_k Q_k(Γ̂_m)} = T_{ν̂†}[Γ̂_m†],   (11)

where T_{ν̂†}[Γ̂_m†] is the transition probability of the time-reversed path Γ̂_m† = (Γ(−t))_{t∈[−τs,τs]} for the reversed protocol ν̂†. Q_k(Γ̂_m) = ∫_{−τs}^{τs} dt J_k(Γ̂_m; t) is the total heat that flows from the bath k to the system during the path Γ̂_m. By using the energy conservation

H_{ν′}(Γ(τs)) − H_ν(Γ(−τs)) = W(Γ̂_m) + Σ_{k=1}^{2} Q_k(Γ̂_m),

(11) is rewritten as

e^{−β H_ν(Γ(−τs)) − β W(Γ̂_m)} T_ν̂[Γ̂_m] = e^{−β H_{ν′}(Γ(τs)) + Δβ Q_t(Γ̂_m†)} T_{ν̂†}[Γ̂_m†],   (12)

where we noted that Q_t(Γ̂_m†) = −Q_t(Γ̂_m). By integrating over Γ̂_m with the constraints Γ(−τs) = Γ and Γ(τs) = Ξ, and recalling (10), we have

(e^{−β H_ν(Γ)} / ρ^st_ν(Γ)) [e^{−βW}]^ν̂_{Γ,Ξ} = (e^{−β H_{ν′}(Ξ)} / ρ^st_{ν′}(Ξ)) [e^{Δβ Q_t}]^{ν̂†}_{Ξ*,Γ*}.   (13)

In [15], we derived (also by using the detailed fluctuation theorem (11)) the exact representation of the probability distribution in NESS with parameters ν,

ρ^st_ν(Γ) = e^{β(F(ν) − H_ν(Γ))} ⟨e^{Δβ Q_t^f/2}⟩^{(ν)}_{Γ*,st} ⟨e^{Δβ Q_t^i/2}⟩^{(ν)}_{st,Γ},   (14)

where the normalization factor F(ν) was identified as a nonequilibrium free energy. By substituting (14) into (13), one gets

e^{βF(ν)} ⟨e^{Δβ Q_t^i/2}⟩^{(ν)}_{st,Γ} [e^{−βW}]^ν̂_{Γ,Ξ} ⟨e^{Δβ Q_t^f/2}⟩^{(ν′)}_{Ξ,st} = e^{βF(ν′)} ⟨e^{Δβ Q_t^i/2}⟩^{(ν′)}_{st,Ξ*} [e^{Δβ Q_t}]^{ν̂†}_{Ξ*,Γ*} ⟨e^{Δβ Q_t^f/2}⟩^{(ν)}_{Γ*,st}.   (15)

By integrating over Γ and Ξ, and using (10), this implies

⟨e^{Δβ Q_t^i/2} e^{−β(W−ΔF)} e^{Δβ Q_t^f/2}⟩_ν̂ = ⟨e^{Δβ Q_t^i/2} e^{Δβ Q_t} e^{Δβ Q_t^f/2}⟩_{ν̂†}.   (16)

Noting that ⟨e^{Δβ Q_t^i/2} e^{Δβ Q_t^f/2}⟩_ν̂ = ⟨e^{Δβ Q_t^i/2} e^{Δβ Q_t^f/2}⟩_{ν̂†} [19], this reduces to

⟨e^{−β(W−ΔF)}⟩^m_ν̂ = ⟨e^{Δβ Q_t}⟩^m_{ν̂†}.   (17)

Since the operation takes place in the interval [−τo, τo], there is no correlation between W(Γ̂) and e^{Δβ Q_t^i/2} or e^{Δβ Q_t^f/2}. This means that the left-hand side of (17) can be replaced by the usual expectation to give the desired (1).

Discussions: We have derived nonequilibrium extensions of the Jarzynski work relation (1), the Gibbs relation (3), and the second law (4) in a general classical model of heat conduction. Although one can show that the Gibbs relation (3), approximated to O(ε²), coincides with the extended Clausius relation that we derived in [7], all the other results are novel. It is especially fascinating that thermodynamic relations and response relations are coupled intrinsically in our relations. The traditional understanding has been that thermodynamic relations work for equilibrium operations, and response relations work for nonequilibrium steady states. Here, by considering an operation for NESS and proving the extended Jarzynski equality (1), we have shown that the two paradigms are naturally unified in a single exact relation. There have been many works on the violation of the fluctuation-dissipation relation, including an effective temperature for characterizing relaxation processes [20], the formula for estimating the energy dissipation [21,22], and the linear response around NESS [23,24]. It would be suggestive to look for possible relations of these topics with the present study.

Although we have restricted ourselves to the simplest setting here, it is straightforward to extend the present results to the case where the inverse temperatures β1 and β2 vary, or to other nonequilibrium systems. The only essential requirement is the detailed fluctuation theorem (11). Last but not least, let us stress that all of our main results (1), (3), and (4) may be tested experimentally, especially when the degree of nonequilibrium ε is small. Although we still do not know whether these results have practical applications, it would be exciting to imagine applying the exact Jarzynski equality (1) to the analysis of the efficiency of a thermodynamic machine operating in NESS. The author thanks T. S. Komatsu, S. Sasa and H. Tasaki for fruitful discussions and suggestions. This work was supported by KAKENHI (19540392 and 23540435).

PACS numbers: 05.70.Ln, 05.40.-a, 05.60.Cd

Y. Oono and M. Paniconi, Prog. Theor. Phys. Suppl. 130, 29 (1998).
S. Sasa and H. Tasaki, J. Stat. Phys. 125, 125 (2006).
D. Ruelle, Proc. Natl. Acad. Sci. U.S.A. 100, 3054 (2003).
T. Hatano and S. Sasa, Phys. Rev. Lett. 86, 3463 (2001).
T. S. Komatsu and N. Nakagawa, Phys. Rev. Lett. 100, 030601 (2008); T. S. Komatsu, N. Nakagawa, S. Sasa and H. Tasaki, Phys. Rev. Lett. 100, 030601 (2008).
T. S. Komatsu, N. Nakagawa, S. Sasa and H. Tasaki, J. Stat. Phys. 142, 127 (2011).
D. J. Evans, E. G. D. Cohen, and G. P. Morriss, Phys. Rev. Lett. 71, 2401 (1993); G. Gallavotti and E. G. D. Cohen, Phys. Rev. Lett. 74, 2694 (1995); J. Kurchan, J. Phys. A: Math. Gen. 31, 3719 (1998); C. Maes, J. Stat. Phys. 95, 367 (1999); C. Jarzynski, J. Stat. Phys. 98, 77 (2000).
C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997).
G. E. Crooks, Phys. Rev. E 61, 2361 (2000).
T. Murakami, T. Shimada, S. Yukawa and N. Ito, J. Phys. Soc. Jpn. 72, 1049 (2003).
For the definition of the heat current for the Langevin bath, see K. Sekimoto, Stochastic Energetics (Springer, Berlin, 2010), or K. Sekimoto, J. Phys. Soc. Jpn. 66, 1234 (1997).
...an effect of recovering the time-reversal symmetry and making the state "as close as equilibrium" in the relevant time intervals.
Usually the fluctuation theorem is expressed as P(+Σ)/P(−Σ) = e^Σ, where P(Σ) is the probability that the entropy production in the baths is Σ. By integrating over Σ, we have ⟨e^{−Σ}⟩ = 1. Since −ΔβQ_t is the entropy production due to the heat current, ⟨e^{ΔβQ_t}⟩^m_{(ν)} = 1 can be regarded as a version of the integrated fluctuation theorem.
T. S. Komatsu, N. Nakagawa, S. Sasa and H. Tasaki, J. Stat. Phys. 134, 401 (2010).
⟨f⟩^{(ν)}_{Γ,st} = ∫DΓ̂_f δ(Γ(τs) − Γ) T_{(ν)}[Γ̂_f] f(Γ̂_f) and ⟨f⟩^{(ν)}_{st,Γ} = {ρ^st_ν(Γ)}^{−1} ∫DΓ̂_i ρ^st_ν(Γ(−τℓ)) T_{(ν)}[Γ̂_i] δ(Γ(−τs) − Γ) f(Γ̂_i).
R. Kubo, M. Toda and N. Hashitsume, Statistical Physics II: Nonequilibrium Statistical Mechanics (Springer-Verlag, Berlin, 1991).
L. Onsager, Phys. Rev. 37, 405 (1931); ibid. 38, 2265 (1931).
Since ν(t) is constant except for t ∈ [−τo, τo], both quantities are equal to ⟨e^{ΔβQ_t^i/2}⟩_{(ν)} ⟨e^{ΔβQ_t^f/2}⟩_{(ν′)}.
L. F. Cugliandolo and J. Kurchan, J. Phys. A 27, 5749 (1994); L. F. Cugliandolo, J. Kurchan and L. Peliti, Phys. Rev. E 55, 3898 (1997).
T. Harada and S. Sasa, Phys. Rev. Lett. 95, 130602 (2005).
T. Speck and U. Seifert, Europhys. Lett. 74, 391 (2006).
M. Baiesi, C. Maes and B. Wynants, Phys. Rev. Lett. 103, 010602 (2009).
J. Prost, J. F. Joanny and J. M. R. Parrondo, Phys. Rev. Lett. 103, 090601 (2009).
[]
[ "Uncertainty Principle Enhanced Pairing Correlations in Projected Fermi Systems Near Half Filling", "Uncertainty Principle Enhanced Pairing Correlations in Projected Fermi Systems Near Half Filling" ]
[ "B Sriram Shastry \nIndian Institute of Science\n560012BangaloreINDIA\n" ]
[ "Indian Institute of Science\n560012BangaloreINDIA" ]
[]
We point out the curious phenomenon of order by projection in a class of lattice Fermi systems near half filling. Enhanced pairing correlations of extended s-wave Cooper pairs result from the process of projecting out s-wave Cooper pairs, with negligible effect on the ground state energy. The Hubbard model is a particularly nice example of the above phenomenon, which is revealed with the use of rigorous inequalities, including the Uncertainty Principle Inequality. In addition, we present numerical evidence that at half filling, a related but simplified model shows ODLRO of extended s-wave Cooper pairs.
10.1088/0305-4470/30/18/005
[ "https://arxiv.org/pdf/cond-mat/9612098v1.pdf" ]
98,580
cond-mat/9612098
089c74660c306b2059c03e3af9c6022af149a4c1
Uncertainty Principle Enhanced Pairing Correlations in Projected Fermi Systems Near Half Filling 19 September 1996 B Sriram Shastry Indian Institute of Science 560012BangaloreINDIA Typeset using REVTeX We point out the curious phenomenon of order by projection in a class of lattice Fermi systems near half filling. Enhanced pairing correlations of extended s-wave Cooper pairs result from the process of projecting out s-wave Cooper pairs, with negligible effect on the ground state energy. The Hubbard model is a particularly nice example of the above phenomenon, which is revealed with the use of rigorous inequalities including the Uncertainty Principle Inequality. In addition, we present numerical evidence that at half filling, a related but simplified model shows ODLRO of extended s-wave Cooper pairs. There is considerable current interest in the possibility of purely electronic interactions driven superconductivity as a mechanism to explain the high-T_c superconductors. While it is well known for the uniform electron gas that purely Coulomb repulsion terms lead to superconductivity in higher angular momentum channels [1], albeit with very low transition temperatures, here the search, guided by experiments, is predominantly for single band models that display such behavior in the proximity of half filling on a lattice. The prototypical example is that of the Hubbard model [2][3], although its tendency (or otherwise) towards superconductivity remains an unsettled issue. In this work, we study a class of many body Fermi systems on a lattice, under the influence of a projection of s-wave Cooper pairs. Recall that one has an inhibition of s-wave ordering within weak coupling BCS theory for models with on-site repulsion in addition to the usual phononic coupling. In contrast, we project out s-wave Cooper pairs in the present work.
The study of most projected models, generally justified by their status as "fixed point" Hamiltonians in some underlying scaling theory, has been a rich source of new and interesting models in the field of correlated Fermi systems. A prototype is the Gutzwiller wavefunction, wherein upon removing double occupancy, effects such as enhanced effective masses follow near half filling, and these are crucial in our understanding of almost localized Fermi liquids [4]. At half filling we find insulating wave functions with enhanced spin-spin correlations [5], which are regarded as typical of quantum spin systems in low dimensions with S = 1/2. At the level of the Hamiltonian, projection leads to interesting new models, such as the various limits of the Hubbard model, e.g. large U giving the t − J model, U = ∞ giving the Nagaoka limit, and several examples in single impurity models. It seems worth remarking that projection is a theoretical device that is genuinely strong coupling and non perturbative, making it difficult to treat with conventional methods. In the present work the consequences of s-wave projection are found by combining a set of known inequalities in a novel fashion, and they lead to surprising insights detailed below. We consider the model defined on a d-dimensional hypercubic lattice with a Hamiltonian H = T + U Σ_i n_{i↑} n_{i↓} + U_s B†B, (1) where T is the kinetic energy term Σ_{k,σ} ǫ(k) c†_σ(k) c_σ(k). The second term is the Hubbard repulsion term (i.e. U ≥ 0). We will consider other forms of interaction below, but the argument is simplest for the Hubbard interaction written above. We will take the number of sites as L and denote the density of particles by ρ = N/L. We will also denote H̃ = H − µN̂, where µ is the chemical potential and N̂ the number operator. The third term is new, with the operator B = Σ_j exp{iφ_j} b_j and b_j ≡ c_{j↓} c_{j↑}.
If we take U_s to be O(1/L) and negative, then this term would, in a weak coupling BCS like theory, inhibit the formation of s-wave Cooper pairs. The coupling constant U_s is taken of O(1) and positive in the present work, and corresponds to projecting out the appropriate Cooper pairs at general fillings. Precisely at half filling, the influence of the new term is more subtle, as noted later in the paper. Although various choices of the phase angle φ_i generate different examples, two of the interesting ones are: (α) φ_i = 0, which leads to a suppression of pure s-wave superconductivity, and (β) φ_i = r_i · {π, π, ..}, which suppresses the so called eta pairing [6]. We will also consider a third possibility (γ) obtained by setting B = Σ_k exp{iφ(k)} b(k), where b(k) ≡ c_{−k↓} c_{k↑}, with an arbitrary function φ(k) which can be used to vary the relative phases between different momenta. This last class of operators, however, forces us to the case of U = 0 in order to obtain any results. The case (α) appears to be the most interesting physically; the others are included for completeness. We first note that for lattices that are bipartite, and where the electronic hopping only connects unlike sublattices, we can make a particle hole transformation c_{i↑} → (−1)^{θ_i} c†_{i↑}, with θ_i = 0, 1 for the two sublattices, whereby the energy satisfies E[U, U_s, ρ] = E[U, U_s, 1 − ρ] − L(1 − ρ)(U + U_s). At half filling (ρ = 1) the chemical potentials for adding and subtracting a particle add up as µ_+ + µ_− = U + U_s. At this filling, the new term U_s plays a crucial role in allowing doubly occupied sites and holes to wander away from each other, and in fact encourages charge fluctuations, whereby the usual Mott insulating state of the Hubbard model at half filling is heavily discouraged. We now use a simple but useful inequality [7] < ψ_0 | M† [H̃, M] |ψ_0 > ≥ 0, (2) where |ψ_0> is the ground state of H̃, and M is an arbitrary operator.
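The variational inequality (2) can be checked numerically with generic linear algebra (a sketch, not code from the paper: a random Hermitian matrix stands in for H̃, and a random matrix for M):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Random Hermitian matrix standing in for the Hamiltonian
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (X + X.conj().T) / 2.0
evals, evecs = np.linalg.eigh(H)           # eigenvalues in ascending order
psi0, E0 = evecs[:, 0], evals[0]           # ground state and its energy

# An arbitrary (non-Hermitian) operator M
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# <psi0| M^dag [H, M] |psi0> = <phi|H|phi> - E0 <phi|phi>, with phi = M|psi0>
val = psi0.conj() @ (M.conj().T @ (H @ M - M @ H) @ psi0)
```

Since |φ⟩ = M|ψ_0⟩ has energy expectation at least E_0, the result is real and non-negative for any M, which is all that Inequality (2) asserts.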
Using M = B, and the important commutator [B, B†] = L − N̂, valid in all cases (α), (β) and (γ), we find that < B†A > ≥ {U_s(L − N + 2) − 2µ + U} < B†B >. (3) In the case of (γ), the above Inequality holds only with U = 0. Note that the LHS is forced by the Inequality to be real and positive. We denote ground state averages by angular brackets as above, and the operator A is given by A = [T, B]. For the two cases of the phase φ_i in Eq(1): (α) A = −2 Σ_k ǫ(k) b(k), and (β) A = − Σ_k {ǫ(k) + ǫ(k + Π)} b(k) with Π = {π, π, ..}. In the popular case of nearest neighbour hopping on the hypercubic lattice, (α) corresponds to the extended s-wave pairing operator, whereas (β) gives zero. A non zero result is obtained in the latter case only when the hopping connects sites on the same sublattice. In two dimensions, for example, with ǫ(k) = −2t(cos(k_x) + cos(k_y)) − 2t′ cos(k_x) cos(k_y), we find A = 4t′ Σ_k cos(k_x) cos(k_y) b(k). We now use the Cauchy-Schwartz inequality to bound the LHS of Inequality (3) as < A†A > < B†B > ≥ < B†A >². (4) Combining Ineqs(3,4) we find < A†A > ≥ {U_s(L − N + 2) − 2µ + U}² < B†B >. (5) Again note that in the case of (γ), Ineq(5) is valid only with U = 0. We note that on the RHS of Ineq(5) the prefactor is of the O(L²) provided we are at a thermodynamic filling ρ < 1. Exactly at half filling the inequality is less useful. At any filling ρ < 1, we can deduce that < B†B > is very small. In fact we will show that it is o(L) rather than O(L). If it were of the O(L) (as indeed it is in the ground state of the free Fermi gas), then < A†A > would have to exceed a trivial upper bound of the O(L²) [8]. If < B†B > is of the O(1), then we have two consequences that are mutually incompatible (at least when U = 0). To see this, assume that < B†B > is of the O(1); we then find from the Feynman-Hellman Theorem E[U, U_s, ρ] = E[U, 0, ρ] + ∫₀^{U_s} dU′_s < B†B >_{U′_s} (6) = E[U, 0, ρ] + o(L). (7) The other consequence of Ineq(5) is that < A†A > ∼ O(L²), i.e. we have Long Ranged Order (ODLRO) in the operator A [9]. This is possible only if the energy increases by terms of the O(L), at least in the case when U = 0, as is seen from a diagonalization of a bilinear Hamiltonian adding the kinetic energy T and A with coefficients of the O(1) [10]. A consistent possibility [11] is < B†B > = O(1/L) and < A†A > = O(L), (8) along with Eq(7).

One immediate consequence of this result is that the energy per site of the model in Eq(1), Lim_{L→∞} E[U, U_s, ρ]/L, is identical to that of the pure Hubbard model (i.e. U_s = 0) at all U and fillings ρ < 1. Another important consequence is that the chemical potential is unchanged by U_s until we reach half filling (µ = ∂E/∂N), and therefore the compressibility is unchanged by U_s (since 1/κ = Nρ(∂µ/∂N)_L). At precisely half filling, the chemical potential jumps and the compressibility vanishes. The value of µ at half filling for the case of bipartite symmetry was given above as (U + U_s)/2. We see that the suppression of a correlation of the type < B†B > occurs with remarkable efficiency through Ineq(8). When we recall that < [B, B†] > = L(1 − ρ), it is seen that the fluctuations of B + B† in the ground state diminish on approaching half filling, i.e. < (B + B†)²/L > = 1 − ρ. This immediately suggests that "conjugate variables" in the sense of the Uncertainty Principle should exhibit enhancements by similar factors. The following form of the Uncertainty Principle is most useful: for any two operators a and b (such that < a² > = 0 = < b² > and [a, b] = 0) we have [12] < a†a + aa† > < b†b + bb† > ≥ | < [a†, b] > |². (9) We now use this with a → A and b → B, and also the results [A, B†] = 2T and [A, A†] ≡ χ_A = 4 Σ_k ǫ(k)² (1 − Σ_σ c†_σ(k) c_σ(k)), to find < A†A > ≥ 2| < T > |²/[L(1 − ρ)] − < χ_A >/2. (10) Both terms on the RHS of the above Inequality are of the O(L), and the second term remains bounded as we approach half filling, in fact vanishing for the case of a symmetric band around zero energy. This implies that the first term dominates, and hence we conclude that the correlation function < A†A > grows without limit as half filling is approached. We should remark that any operator of the type [T, [T, ..[T, B]..]] in place of A would end up having similar enhancements in its correlations, since it would be a bilinear in the c† and have similar commutation with B.

We next consider other kinds of interactions, different from the Hubbard model [13]. In this case, we can still use Ineq(2) and also Ineq(4) to find, in place of Ineq(5), < F†F > ≥ {U_s(L − N + 2) − 2µ}² < B†B >, (11) where F = A + C with C = −[B, V_int], so that F = [H, B]. The norm on the LHS of Ineq(11) can be bounded by the triangle inequality as < F†F > ≤ [√(< A†A >) + √(< C†C >)]², and hence we need, in addition to the previous estimates, one of < C†C >. This of course depends upon the nature of the two particle interaction, and has to be examined for each model separately. However, for "generic" repulsive short ranged models, it seems clear that this object, like < A†A >, should be bounded from above by a number of O(L²). With this assumption, the remaining argument goes exactly as in the case of the Hubbard model, and we again conclude that < B†B > is at least as small as o(L), and in fact probably O(1/L), and that the ground state energy is as in Eq(7). The uncertainty relation Ineq(10) needs only the fairly weak first condition < B†B > ∼ o(L), and hence we conclude that the mechanism of order by projection works for generic short ranged repulsive models near half filling. We have thus found enhanced correlations as we approach half filling, and by continuity, we may expect ODLRO in the operator A. The inequalities given above do not constrain correlations sufficiently, and we turn to other methods.

Before doing that, we introduce a simpler version of the models above, namely H̃_s = Σ_{n=1}^{L} (ǫ_n − µ)(σ^z_n + 1) + U_s Σ_{n,m=1}^{L} σ⁺_n σ⁻_m, (12) where σ^z etc. are the usual Pauli matrices, and ǫ_n are an ascending set of energies. This model is intimately related to the U = 0 version of our starting problem Eq(1), using the pseudo spin representation σ^z_j + 1 = Σ_σ n_σ(k_j) and σ⁺_j = c†_↑(k_j) c†_↓(−k_j), in the subspace where both (k, ↑) and (−k, ↓) are simultaneously present or absent. The Hamiltonian Eq(1) commutes with the operator ν = Σ_k n_↑(k) n_↓(−k), and its operation is identical to that of H̃_s provided we specialize to various sectors labeled by the eigenvalues of ν (0 ≤ < ν > ≤ N/2), and further choose appropriate degeneracies for the energies. We simplify by choosing our energies in H̃_s above to be non degenerate, and pick them to be ǫ_n = {n − (L+1)/2}/(L − 1), so that the band is symmetric about zero and the bandwidth is unity. Each up spin corresponds to two (Fermi) particles of the original problem. The filling in this problem is clearly ρ = N/L with N̂ = Σ_j (σ^z(j) + 1). The chemical potential at half filling is U_s/2 by particle hole symmetry. The model can also be viewed as a lattice of N/2 hard core particles sitting in a constant electric field that tries to localize them in regions of low potential, and an infinite ranged hopping that tries to delocalize them. The results proved for the starting Eq(1), namely Ineqs (5,10,8), are equally true in this one dimensional spin model, provided we identify B = Σ_j σ⁻_j and A = −2 Σ_j ǫ_j σ⁻_j. Away from half filling, i.e. when σ^z_tot ≠ 0, we see that even in the limit of large U_s there is a large number of states, in fact states with S_tot = L(1 − ρ)/2 and S^z_tot = −L(1 − ρ)/2, i.e. highest weight states of the rotation group, which have a null eigenvalue of the hopping term U_s Σ_{n,m} σ⁺_n σ⁻_m. The Zeeman energy term has non zero matrix elements within this manifold.

In the case of half filling ρ = 1, the Zeeman term necessarily connects singlet states with triplets, and hence the energy is unable to escape the influence of U_s. At half filling and for large U_s, we can use degenerate perturbation theory to find an effective Hamiltonian to lowest order in 1/U_s. To do this we consider the action of H̃_s in Eq(12) on the space of (L choose L/2)/(L/2 + 1) singlets spanned, for example, by the Non Crossing Rumer Diagrams [14]. A typical non orthogonal state is given by ψ_P = [P_1, P_2]_− . . . [P_{L−1}, P_L]_−, where P is one of the permutations of the set {1, 2, . . . , L} giving a Non Crossing Rumer Diagram, and [i, j]_∓ = (α_i β_j ∓ β_i α_j)/√2 is a singlet (triplet) with s_z = 0. The action of the operator Eq(12) can be projected into this subspace, by using the relation [1, 2]_+ [3, 4]_+ = (1/3)(1 − 2Π_13) [1, 2]_− [3, 4]_− + ψ_quintet, with Π_ij the permutation operator, and leads to the following Quantum Dimer problem: H̃_qd ψ_P = −(1/(2U_s)) Σ_j (ǫ_{P_j} − ǫ_{P_{j+1}})² ψ_P − (1/(3U_s)) Σ_{j+1<k} (ǫ_{P_j} − ǫ_{P_{j+1}})(ǫ_{P_k} − ǫ_{P_{k+1}}) {2Π_{P_j P_k} − 1} ψ_P. (13) This model is quite non trivial to work with, but does reveal that the diagonal terms favour singlet bonds that connect the largest energy separations, and the mixing terms oblige us to take non trivial linear combinations in this space.

We study the interesting half filled limit by studying the sector σ^z_total = 0 of Eq(12) directly. We diagonalized the problem numerically for chains of length up to 14, and studied the ground state energy as well as the correlation function < A†A >. It is clear that a non zero extrapolation of Γ to a number of the O(1) would imply ODLRO in the A field. In the figure we plot the parameter Γ = (1/L) < A†A >/< A†A >_non for three values of U_s (= 2, 4, 8). The data seems to be consistent with this hypothesis, and fits well to Γ = Γ_∞ + |a|/L ± |b|/L², with non zero Γ_∞. In the inset of the figure, the ground state energy per site is plotted for the same three values of U_s against 1/L, showing that the energy does depend on the coupling at half filling, implying that the U_s term cannot be viewed as a projection at this particular filling. The dependence is consistent with finite sized scaling with a form E/L = e_∞ + |a|/L − |b|/L² + O(1/L³).

At half filling, the new model Eq(1) is almost certainly non-insulating, and likely to be superconducting in a complementary pairing state. The presence of hopping terms for the doubly occupied sites makes their number density nonzero, unlike in the pure Hubbard model. By continuity in filling, we expect the pairing correlations to be divergent for any U. Our numerical results for the reduced model, the spin model of Eq(12), are consistent with ODLRO at half filling. It is not, however, straightforward to write down a mean field theory that captures the correct ordering in the model, since the Hamiltonian does not contain explicit terms that favour any kind of ordering; these are generated by the dynamics rather indirectly.

In summary, we have seen that the effect of projecting out s-wave Cooper pairs in a class of Fermi systems leads to surprising results. The ground state of the projected model may be viewed as being essentially degenerate with that of the original model, and yet the extended s-wave pairing correlations are hugely enhanced near half filling. This effect, namely order by projection, requires a lattice Fermi system near half filling to occur, and has no natural counterpart in continuum Fermi systems. In this regard, as well as in the form of the enhancements 1/(1 − ρ), it resembles the results of the almost localized Fermi systems [4].

[1] W. Kohn and J. M. Luttinger, Phys. Rev. Letts. 54, 524 (1965).
[2] P. W. Anderson, Science 235, 1196 (1987); a selection of other theoretical approaches may be found in e.g. "High Temperature Superconductivity", Proceedings Los Alamos Symposium, ed. K. Bedell et al. (Addison Wesley, New York, 1990).
[3] "The Hubbard Model: a reprint Volume", ed. A. Montorosi (World Scientific, Singapore, 1992).
[4] M. Gutzwiller, Phys. Rev. 137, A1726 (1965); W. Brinkman and T. M. Rice, Phys. Rev. B 2, 4302 (1970); D. Vollhardt, Rev. Mod. Phys. 56, 99 (1984).
[5] T. Kaplan, P. Horsch and P. Fulde, Phys. Rev. Letts. 49, 889 (1982); C. Gros, R. Joynt and T. M. Rice, Phys. Rev. B 36, 381 (1987); F. Gebhardt and D. Vollhardt, Phys. Rev. Letts. 59, 1472 (1987).
[6] C. N. Yang, Phys. Rev. Letts. 63, 2144 (1989); C. N. Yang and S. C. Zhang, Mod. Phys. Letts. B 4, 759 (1990).
[7] K. Sawada and C. S. Warke, Phys. Rev. 133, A1252 (1964). The inequality is readily proved by inserting a complete set of energy eigenfunctions.
[8] The bound is obtained by writing < A†A > ≤ 4 Σ_{k,k′} |ǫ(k) ǫ(k′) < b†(k′) b(k) >| ≤ 4 Σ_{k,k′} |ǫ(k) ǫ(k′)| ∼ O(L²).
[9] C. N. Yang, Rev. Mod. Phys. 34, 694 (1962).
[10] In the case of U ≠ 0, provided that A does not already have Long Ranged Order in the absence of the U_s term, the same is expected to be true, but harder to prove rigorously.
[11] If < B†B > ∼ O(L^{1−σ}) with 1 > σ > 0, then we find from Ineq(5) that < A†A > has quasi ODLRO, i.e. is almost ordered, unlike in a normal Fermi Liquid.
[12] L. Pitaevskii and S. Stringari, J. Low Temp. Phys. 85, 377 (1991).
[13] For other kinds of projected order, the inequalities are harder to interpret: e.g. if B = Σ_{i,j} [ν]_{i,j} c_{i↓} c_{j↑} with an off diagonal matrix [ν], then the analog of Ineq(3) contains, as the coefficient of U_s, the factor (L − N + 2)[ν²]_{i,i} + < χ|φ|χ >, where |χ> ≡ B|ψ_0> and φ = − Σ_{i,j} [ν²]_{i,j} c†_{iσ} c_{jσ}, and hence there is the possibility of cancellation of the term of O(L).
[14] G. Rumer, Nachr. d. Ges. d. Wiss. zu Gottingen, M.P. Klasse, 337 (1932); L. Hulthén, Ark. Mat. Astr. Fys. 26 A, 1 (1938); R. Saito, J. Phys. Soc. Japan 59, 482 (1990).

Figure Captions: • Long Ranged Order parameter Γ ≡ (1/L) < A†A >/< A†A >_non versus 1/L for U_s = 2, 4, 8 (bottom to top), for chains of length 4, 6, .., 14 at ρ = 1. The inset shows the Ground State Energy per site for the same values of U_s versus 1/L (bottom to top).
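The numerical experiment described in the text (exact diagonalization of the spin model Eq. (12) in the σ^z_total = 0 sector, with ⟨A†A⟩ as the pairing correlation) can be sketched for a small chain. L and U_s below are illustrative choices, not the paper's full runs (which go up to L = 14), and the −µN̂ term is dropped since it is constant within the sector:

```python
import numpy as np

def solve_spin_model(L=6, Us=4.0):
    """Ground state of Eq. (12) restricted to the half-filled sector
    sigma^z_total = 0 (L/2 up spins); energies eps_n as in the text."""
    eps = np.array([(n - (L + 1) / 2.0) / (L - 1.0) for n in range(1, L + 1)])
    states = [s for s in range(2 ** L) if bin(s).count("1") == L // 2]
    index = {s: i for i, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for i, s in enumerate(states):
        # Zeeman term: (sigma^z_n + 1) = 2 on up sites, 0 on down sites
        H[i, i] += sum(2.0 * eps[n] for n in range(L) if (s >> n) & 1)
        # n = m pieces of the hopping give Us * (number of up spins), a constant
        H[i, i] += Us * (L // 2)
        # Infinite-range hopping: move an up spin from site m to a down site n
        for m in range(L):
            if (s >> m) & 1:
                for n in range(L):
                    if not (s >> n) & 1:
                        H[index[s ^ (1 << m) ^ (1 << n)], i] += Us
    evals, evecs = np.linalg.eigh(H)
    return eps, states, index, evals[0], evecs[:, 0]

def pairing_correlation(L=6, Us=4.0):
    """<A^dag A> with A = -2 sum_j eps_j sigma^-_j, in the ground state."""
    eps, states, index, E0, gs = solve_spin_model(L, Us)
    val = 0.0
    for i, s in enumerate(states):
        for k in range(L):
            if (s >> k) & 1:                          # sigma^-_k needs site k up
                val += 4.0 * eps[k] ** 2 * gs[i] ** 2  # j = k: sigma^+_k sigma^-_k
                for j in range(L):
                    if j != k and not (s >> j) & 1:    # sigma^+_j needs site j down
                        t = s ^ (1 << k) ^ (1 << j)
                        val += 4.0 * eps[j] * eps[k] * gs[i] * gs[index[t]]
    return E0, val

E0, AdagA = pairing_correlation()
```

⟨A†A⟩ = ||A|ψ_0⟩||² is non-negative by construction; the paper's Γ additionally divides by the corresponding non-interacting value and by L, which is omitted in this sketch.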
[]
[ "Galactic outflows and the kinematics of damped Lyman alpha absorbers", "Galactic outflows and the kinematics of damped Lyman alpha absorbers" ]
[ "Sungryong Hong \nAstronomy Department\nUniversity of Massachusetts\n01003AmherstMA\n", "Neal Katz \nAstronomy Department\nUniversity of Massachusetts\n01003AmherstMA\n", "Romeel Davé \nAstronomy Department\nUniversity of Arizona\n85721TucsonMA\n", "Mark Fardal \nAstronomy Department\nUniversity of Massachusetts\n01003AmherstMA\n", "Dušan Kereš \nHarvard-Smithsonian Center for Astrophysics\n02138CambridgeMAUSA\n", "Benjamin D Oppenheimer \nLeiden Observatory\nLeiden University\nPO Box 95132300 RALeidenthe Netherlands\n" ]
[ "Astronomy Department\nUniversity of Massachusetts\n01003AmherstMA", "Astronomy Department\nUniversity of Massachusetts\n01003AmherstMA", "Astronomy Department\nUniversity of Arizona\n85721TucsonMA", "Astronomy Department\nUniversity of Massachusetts\n01003AmherstMA", "Harvard-Smithsonian Center for Astrophysics\n02138CambridgeMAUSA", "Leiden Observatory\nLeiden University\nPO Box 95132300 RALeidenthe Netherlands" ]
[ "Mon. Not. R. Astron. Soc" ]
The kinematics of damped Lyman-α absorbers (DLAs) are difficult to reproduce in hierarchical galaxy formation models, particularly the preponderance of wide systems. We investigate DLA kinematics at z = 3 using high-resolution cosmological hydrodynamical simulations that include a heuristic model for galactic outflows. Without outflows, our simulations fail to yield enough wide DLAs, as in previous studies. With outflows, predicted DLA kinematics are in much better agreement with observations. Comparing two outflow models, we find that a model based on momentum-driven wind scalings provides the best match to the observed DLA kinematic statistics of Prochaska & Wolfe. In this model, DLAs typically arise a few kpc away from galaxies that would be identified in emission. Narrow DLAs can arise from any halo and galaxy mass, but wide ones only arise in halos with mass ≳ 10^11 M⊙, from either large central or small satellite galaxies. This implies that the success of this outflow model originates from being most efficient at pushing gas out from small satellite galaxies living in larger halos. This increases the cross-section for large halos relative to smaller ones, thereby yielding wider kinematics. Our simulations do not include radiative transfer effects or detailed metal tracking, and outflows are modeled heuristically, but they strongly suggest that galactic outflows are central to understanding DLA kinematics. An interesting consequence is that DLA kinematics may place constraints on the nature and efficiency of gas ejection from high-z galaxies.
null
[ "https://arxiv.org/pdf/1008.4242v2.pdf" ]
4,487,147
1008.4242
1f37cebdce64758c0cfd36c34d4303eae8ca173d
Galactic outflows and the kinematics of damped Lyman alpha absorbers 27 Aug 2010 Printed 30 August 2010 Sungryong Hong Astronomy Department University of Massachusetts 01003AmherstMA Neal Katz Astronomy Department University of Massachusetts 01003AmherstMA Romeel Davé Astronomy Department University of Arizona 85721TucsonMA Mark Fardal Astronomy Department University of Massachusetts 01003AmherstMA Dušan Kereš Harvard-Smithsonian Center for Astrophysics 02138CambridgeMAUSA Benjamin D Oppenheimer Leiden Observatory Leiden University PO Box 95132300 RALeidenthe Netherlands Mon. Not. R. Astron. Soc. 000, 000-000 (MN LaTeX style file v2.2) quasar: absorption lines; galaxies: formation; galaxies: kinematics and dynamics. The kinematics of damped Lyman-α absorbers (DLAs) are difficult to reproduce in hierarchical galaxy formation models, particularly the preponderance of wide systems. We investigate DLA kinematics at z = 3 using high-resolution cosmological hydrodynamical simulations that include a heuristic model for galactic outflows. Without outflows, our simulations fail to yield enough wide DLAs, as in previous studies. With outflows, predicted DLA kinematics are in much better agreement with observations. Comparing two outflow models, we find that a model based on momentum-driven wind scalings provides the best match to the observed DLA kinematic statistics of Prochaska & Wolfe. In this model, DLAs typically arise a few kpc away from galaxies that would be identified in emission. Narrow DLAs can arise from any halo and galaxy mass, but wide ones only arise in halos with mass ≳ 10^11 M⊙, from either large central or small satellite galaxies. This implies that the success of this outflow model originates from being most efficient at pushing gas out from small satellite galaxies living in larger halos.
This increases the cross-section for large halos relative to smaller ones, thereby yielding wider kinematics. Our simulations do not include radiative transfer effects or detailed metal tracking, and outflows are modeled heuristically, but they strongly suggest that galactic outflows are central to understanding DLA kinematics. An interesting consequence is that DLA kinematics may place constraints on the nature and efficiency of gas ejection from high-z galaxies. INTRODUCTION Damped Lyman alpha systems (DLAs), i.e. H i absorption line systems having column densities of N_HI > 2 × 10^20 cm^−2 (Wolfe et al. 1986), contain the dominant reservoir of neutral hydrogen in the Universe (see review by Wolfe, Gawiser, & Prochaska 2005). Since neutral hydrogen is closely connected with star formation (e.g. Kennicutt 1998), DLAs represent the neutral gas repository for fuelling star formation. Influential work by Storrie-Lombardi & Wolfe (2000) suggested that the global neutral gas content measured from DLAs declines with cosmic epoch in concert with the growth in cosmic stellar mass. Other studies with different selection methods suggest that the decline in neutral gas is not as steep (Rao, Turnshek, & Nestor 2006). Nevertheless, one expects some connection between galaxies identified in stellar emission and those identified in gas absorption such as DLAs. Studying this connection has been the focus of a large number of investigations, enabled by the ability to routinely catalogue absorbers and galaxies at z ≳ 2, where optical Lyα absorption line studies have accurately characterised the DLA population. Despite these efforts, a deep understanding of the relationship between DLAs and emission-selected galaxies remains elusive. Studies of neutral hydrogen absorption around Lyman break-selected galaxies (LBGs) at z ∼ 2 − 3 show a wide range of absorption strengths. DLAs have gas-phase metallicities that are lower than those seen in LBGs, and show a much larger spread.
Measures of the C ii* cooling rates in DLAs indicate that the star formation rates are dramatically lower than those seen in LBGs (Wolfe et al. 2004). Low-redshift imaging of DLAs shows a heterogeneous population of generally sub-L* galaxies, but it is unclear if this is relevant to high-z DLAs. On the other hand, the clustering of LBGs and DLAs at z ∼ 2 − 3 suggests that they occupy similar halos (Bouché et al. 2005; Cooke et al. 2006). Hence, even though a wealth of observational data exists for DLAs, it cannot be definitively said whether DLAs are lower-mass LBG analogs, whether they are a phase of galaxy evolution that eventually leads to LBGs, or whether they are an altogether separate galaxy population. One approach for constructing a unified framework for the nature of DLAs is to employ numerical simulations that account for the hierarchical growth of galaxies including gas physical processes and star formation. Early investigations using simulations by Haehnelt, Steinmetz, & Rauch (1998, hereafter HSR98) suggested that DLAs traced clumps of cold gas that were in the process of assembling into larger galaxies. This proto-galactic clump (PGC) model favoured the interpretation of DLAs as a precursor phase to LBGs. Gardner et al. (2001) used simulations in cosmological volumes to argue that the bulk of DLAs must arise near dwarf galaxies, suggesting that they are low-mass LBG analogs. Maller et al. (2001) used semi-analytic models to argue that DLAs must come from gas distributions within dark halos that are extended relative to that expected from angular momentum support, and hence another mechanism such as tidal stripping may be important for obtaining the correct cross-section for DLA absorption. Nagamine et al. (2007) found that DLA properties are significantly affected by galactic outflows, and argued that the more successful models showed DLAs as being lower-mass analogs of LBGs. Simulations by Razoumov et al.
(2006) incorporated radiative transfer to model the neutral gas distribution more accurately, and found that DLAs can arise from a variety of environments, from the centres of small galaxies to filamentary tidal structure in large halos. Pontzen et al. (2008) used hydrodynamic simulations with star formation and outflows together with radiative transfer, and were able to match the column densities and (for the first time) the metallicity range of DLAs. While these successes are impressive, one set of observations has consistently confounded all such models: the kinematic properties of DLAs. Prochaska & Wolfe (1997, hereafter PW97) observed the detailed kinematics of DLAs via their low-ionization metal lines. They devised four kinematic diagnostics to quantify their findings, the key one being the velocity extent of DLAs. The observations show a pronounced tail to large velocity extents. Such a tail is not seen in simulations, from the earliest models (Prochaska & Wolfe 2001) to the most sophisticated recent ones (Razoumov et al. 2008; Pontzen et al. 2008). In general, to match the observed kinematics the absorption cross section of low-mass halos must be reduced significantly (Barnes & Haehnelt 2009), which does not arise naturally in fully dynamical models. Fundamentally, the difficulty with all these models is that they yield few absorbers whose velocity spread significantly exceeds the characteristic velocity of the halo in which the system resides. Since early gas-rich galaxies tend to be small, this makes it difficult to reproduce the observed high-velocity (≳ 200 km/s) tail in DLA system widths. This implies that the gravitational dynamics of hierarchical assembly alone cannot explain the distribution of the observed kinematics. Hence, any model that does not add a significant component of non-gravitational velocities to the neutral gas seems doomed to fail the DLA kinematics test. Alternatively, the original model of Wolfe et al.
(1986), which proposed that DLAs are large, puffy rotating disks, provides an excellent agreement with the observed kinematics, but forming enough such objects by z ∼ 3 is challenging in currently-favoured cosmologies. Hence the oft-debated claim by Prochaska & Wolfe (1997) that cold dark matter (CDM) models cannot straightforwardly reproduce the kinematic structure of DLAs remains viable. While it could be that the currently-favoured hierarchical galaxy formation scenarios are incorrect, their wide-ranging successes in other areas suggest that the explanation probably lies somewhere within the poorly-understood baryonic physics associated with galaxy formation. For DLAs, it has been suggested (e.g. Pontzen et al. 2008) that galactic outflows may be the missing ingredient needed to increase the kinematic widths and to lower the absorption cross-sections of smaller halos. Recent work indicates that essentially all star-forming galaxies at z ≳ 1 are generating outflows of hundreds of km/s (e.g. Pettini et al. 2001; Steidel et al. 2004; Shapley et al. 2005; Weiner et al. 2009), with mass outflow rates comparable to or more than their star formation rates (Erb et al. 2006; Steidel et al. 2010). Theoretically, outflows are believed to be responsible for suppressing early star formation (Springel & Hernquist 2003b), without which the cosmic stellar mass density would far exceed observations (Davé et al. 2001; Balogh et al. 2001). But it is unclear how such outflows affect DLAs. Their star formation surface densities are quite low (Wolfe et al. 2004), so they may not generate outflows at all. Even if they do, it is unclear how much mass is being carried in outflows, and whether it would result in enhanced DLA kinematic widths in accord with observations.
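The kinematic width at issue throughout is measured from the low-ion absorption profile; a schematic version of such a measurement (the velocity interval enclosing the central 90% of integrated optical depth, in the spirit of PW97's statistic rather than their actual pipeline, with an illustrative Gaussian profile) is:

```python
import numpy as np

def velocity_width(v, tau, frac=0.90):
    """Velocity interval enclosing the central `frac` of the integrated
    optical depth tau(v); v in km/s on an ascending grid."""
    cum = np.cumsum(tau)
    cum = cum / cum[-1]                    # normalized cumulative optical depth
    q = (1.0 - frac) / 2.0
    lo = v[np.searchsorted(cum, q)]
    hi = v[np.searchsorted(cum, 1.0 - q)]
    return hi - lo

# Illustrative single-component Gaussian profile with sigma = 30 km/s;
# for a Gaussian the central 90% spans ~3.29 sigma, i.e. ~99 km/s here.
v = np.linspace(-300.0, 300.0, 6001)
tau = np.exp(-0.5 * (v / 30.0) ** 2)
dv90 = velocity_width(v, tau)
```

A profile with widely separated components registers a large width even if each component is intrinsically narrow, which is why gas displaced by outflows can broaden this statistic.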
Simulating the impact of outflows on DLA kinematics is a challenging problem, because it requires both redistributing neutral gas accurately as well as self-consistently modeling the energy deposition by outflows into the surrounding gas. Recently, several groups have considered the impact of outflows on DLA properties, specifically kinematics. Barnes & Haehnelt (2009) used a semi-analytic model to interpret observations of DLAs and faint extended Lyα emitters, and showed that the incidence rate and kinematics of DLAs are best reproduced if neutral gas is removed from halos with circular velocities ≲ 50 − 70 km/s, suggesting an important role for outflows. Tescari et al. (2009) followed up the work of Nagamine et al. (2007) to assess, among other things, the impact of various outflow models on DLA kinematics, and found that none of their outflow models were able to reproduce the kinematics completely, although a model based on momentum-driven wind scalings came closest. Zwaan et al. (2008) argued from an observational perspective that DLA kinematics must be influenced by outflows, by comparing the H i kinematic widths of local H i galaxies and those of high-z DLAs, and showing that (assuming that locally-identified DLA galaxies are representative of those at high-z) some other process such as outflows must be enhancing the kinematics of high-z DLAs. Hence there is growing evidence that DLA kinematics are strongly impacted by galactic outflows. In this paper, we investigate whether galactic outflows can both lower the absorption cross-section of small halos and broaden the velocity widths enough to bring the theoretical DLA kinematics into accord with the observations. To do so, we employ cosmological hydrodynamic simulations of galaxy formation, with the key addition being several heuristic models of galactic feedback. We consider three variants: no outflows; a model with constant outflow speed and mass loading factor (i.e.
the mass outflow rate relative to the star formation rate; Springel & Hernquist 2003b, hereafter SH03); and a model where the outflow speed and mass loading factor scale according to expectations from momentum-driven winds (Murray, Quataert, & Thompson 2005). Such momentum-driven wind scalings implemented within our models have proved remarkably successful at reproducing a wide range of observations, including: the observed IGM metal content from z ∼ 0 − 6; observations of early galaxy luminosity functions and evolution (Davé, Finlator, & Oppenheimer 2006; Bouwens et al. 2007); the galaxy mass-metallicity relationship (Finlator & Davé 2008); the observed enrichment in various baryonic phases (Davé & Oppenheimer 2007); the metal and entropy content of low-redshift galaxy groups (Davé, Oppenheimer, & Sivanandam 2008); the H i distribution in the local universe (Popping et al. 2009); and the faint-end slope of the present-day stellar mass function. Momentum-driven wind scalings of outflow velocity are also observed in nearby starburst outflows (Martin 2005; Rupke et al. 2005), providing an intriguing connection between outflows today and in the past. Hence our momentum-driven wind model provides a concrete (albeit phenomenological) model for how outflows are related to the properties of star-forming galaxies across cosmic time. Our primary result in this paper is that our momentum-driven wind simulation is able to reproduce the observed kinematics of DLAs. Without winds, our simulated DLA kinematics do not show nearly enough wide-separation systems and have an average velocity width that is too small, in agreement with previous studies. A constant wind model, as implemented by Springel & Hernquist (2003b) and used by Nagamine et al. (2007) to study DLAs, improves over the no-wind case but is still unable to statistically match DLA kinematics.
The primary reason for the success of momentum-driven wind scalings is that they eject large amounts of mass from small galaxies, at relatively low velocities such that the gas is not overly heated. Although our simulations are somewhat simplistic, using a heuristic wind prescription, no radiative transfer, and relatively low spatial resolution (∼ 200 pc physical at z = 3), they are the first to statistically match the observed DLA kinematics within a cosmological context. The implication is that DLAs represent the extended neutral gas envelopes of early star-forming galaxies that are driving outflows, and furthermore that such outflows are effective at moving large amounts of neutral gas into the regions surrounding early galaxies. Our paper is organised as follows: In §2 we describe our simulations and methodology for computing DLA absorption. In §3, we present our results for DLA abundances and present statistical tests on the kinematics of each feedback model compared to the observations. We also present many physical properties that relate DLAs to their host galaxies and dark matter halos. We summarise our results in §4.

2 SIMULATIONS

2.1 Input physics

We simulate 8 h^−1 Mpc periodic cubic comoving volumes using the cosmological tree-particle-mesh smoothed particle hydrodynamics (SPH) code Gadget-2, with modifications including radiative cooling and star formation (Springel & Hernquist 2003a). Our version used in this work also includes metal-line cooling (Sutherland & Dopita 1993), assumes a (spatially-uniform) cosmic photoionising background taken from Haardt & Madau (2001), and includes a heuristic prescription for galactic outflows driven by star formation that we describe in the next section. We focus on z = 3 outputs as that is the typical redshift for the DLAs with well-measured kinematics.
We choose cosmological parameters of Ωm = 0.3, ΩΛ = 0.7, h = 0.7, σ8 = 0.9, and Ω_b = 0.04, and generate the initial conditions at z = 199 using the Eisenstein & Hu (1999) transfer function. These parameters are consistent with the WMAP-1 results (Spergel et al. 2003) but are somewhat different than the latest WMAP-7 results (Komatsu et al. 2008). However, these differences are not expected to affect our general conclusions, as the uncertainties in modeling the baryonic physics probably dominate over these differences. We use 256^3 particles each for the gas and the dark matter, yielding particle masses of 4.84 × 10^5 M⊙ for the gas and 3.15 × 10^6 M⊙ for the dark matter. The equivalent Plummer gravitational softening length is 0.625 h^−1 comoving kpc, or 223 proper pc at z = 3. These simulations resolve star formation in halos down to virial temperatures of ∼ 2 × 10^4 K, below which ambient photoionisation is expected to provide significant suppression of gas accretion. We note that the numerical resolution in our cosmological volume is better than that in the individual halo simulations of HSR98.

2.2 Outflow models

Galactic outflows appear to be related to star formation, but the exact relationship is unknown. While many recent studies have focused on explicitly driving winds by over-pressurising the ISM via supernova heat input (e.g. Stinson et al. 2006; Ceverino & Klypin 2008), it is clear that such processes are sensitive to scales below any realistically achievable resolution within a cosmological volume. Hence SH03 took the approach of explicitly incorporating outflows with tunable parameters, to avoid a strong dependence on physics below the resolution scale. In the SH03 implementation, an "outflow model" is described by choosing two parameters: the mass loading factor η, and the outflow velocity v_wind. The mass loading factor is defined as η ≡ Ṁ_w/Ṁ⋆, where Ṁ_w is the mass loss rate by winds and Ṁ⋆ is the star formation rate.
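As a cross-check on the quoted particle masses, they can be recomputed directly from the box size and cosmological parameters. The following is a minimal sketch (not code from the paper), assuming the standard critical density ρ_crit = 2.775 × 10^11 h² M⊙ Mpc^−3:

```python
# Recompute SPH and dark matter particle masses for an 8 h^-1 Mpc box
# with 256^3 particles each of gas and dark matter.
RHO_CRIT = 2.775e11  # critical density in h^2 Msun / Mpc^3

def particle_masses(box_hmpc=8.0, n=256, omega_m=0.3, omega_b=0.04, h=0.7):
    volume = box_hmpc**3 / h**3                        # comoving Mpc^3
    total_mass = RHO_CRIT * h**2 * omega_m * volume    # total matter, Msun
    n_part = n**3
    m_gas = total_mass * (omega_b / omega_m) / n_part
    m_dm = total_mass * ((omega_m - omega_b) / omega_m) / n_part
    return m_gas, m_dm

m_gas, m_dm = particle_masses()
# Close to the quoted 4.84e5 Msun (gas) and 3.15e6 Msun (dark matter).
```

This reproduces the quoted masses to better than one per cent, confirming the stated box and cosmology are self-consistent.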
Each gas particle that is sufficiently dense to allow star formation has some probability to either spawn a star particle, or to be kicked into an outflow. The ratio of those probabilities is given by the mass loading factor η. If a particle is selected to be in an outflow, its velocity is augmented by v_wind, in a direction given by ±v×a (resulting in a quasi-bipolar outflow). When a gas particle becomes a wind particle, its hydrodynamic forces are turned off until the particle escapes the star-forming region and reaches a density of one-tenth the critical density for star formation. The maximum time for hydrodynamic decoupling is 20 kpc divided by the wind speed, ensuring that the hydrodynamic forces will almost always be turned off over a distance of less than 20 kpc. At the simulation resolution we employ, our results are not very sensitive to whether or not we decouple (but see Dalla Vecchia & Schaye 2008, who show that it makes a large difference at very high resolution), but it does allow for better resolution convergence. However, this decoupling does imply that the detailed distribution of metals around galaxies may be inexactly modeled owing to the lack of accounting for hydrodynamical effects. For this reason, we will prefer to use the H i distribution directly when computing DLA kinematics, and simply tie the metals directly to the H i. We note that Tescari et al. (2009), using simulations similar in physics and resolution to ours, explored the difference in kinematics between using the metals directly and using the H i distribution with a fixed assumed metallicity, and found that observed kinematics were better reproduced in the latter case, suggesting that some metal diffusion or smoothing may be required. We follow their more optimistic case here, with the hope that future simulations will allow us to more robustly and self-consistently model the metal distribution near galaxies.
SH03 chose v_wind and η to be constants, based on observations available at the time (Martin 1999; Heckman et al. 2000). The wind velocity v_w was derived from the supernova feedback energy,

(1/2) Ṁ_w v_w² = χ ǫ_SN Ṁ⋆ ,  (1)

where ǫ_SN is the energy deposition by Type II supernovae per unit of star-forming mass, and χ is the fraction of supernova energy that drives the wind. SH03 set χ = 1, and used ǫ_SN ∼ 4 × 10^48 erg/M⊙ for a Salpeter initial mass function, yielding a wind velocity of v_wind = 484 km/s. They set η = 2 to roughly reproduce the observed present-day mass density in stars. SH03 showed that this wind model produces a cosmic star formation history that is in broad agreement with observations, and is resolution converged. We will refer to this as the constant wind ("cw") model, since both η and v_wind are independent of galaxy size. Improved recent observations of local starburst outflows suggest that the wind speed is not constant, but is proportional to a galaxy's circular velocity (Martin 2005; Rupke et al. 2005; Weiner et al. 2009). This is suggestive of momentum-driven winds, as worked out analytically by Murray, Quataert, & Thompson (2005). OD06 implement a momentum-driven wind model in a cosmological simulation. In this scenario, the wind speed scales as the galaxy's velocity dispersion σ, and since the momentum deposition per unit star-forming mass is assumed to be constant, the mass loading factor is inversely proportional to σ. For our momentum-driven wind model, we employ

v_w = 3σ √(f_L − 1) ,  (2)
η = σ_0/σ ,  (3)

where f_L is the galaxy luminosity divided by its Eddington luminosity and σ_0 is a normalisation factor. Because our resolution is still too low to compute the galaxy's velocity dispersion, we take σ = √(−Φ/2) from the virial theorem, where Φ is the local potential depth where a wind particle is created. The normalisation factor σ_0 is set to 300 km/s as suggested by Murray, Quataert, & Thompson (2005).
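The constant-wind energy balance of eq. (1) fixes the wind speed once η, χ, and ǫ_SN are chosen. A minimal sketch of that arithmetic (an illustration, not code from the paper):

```python
import math

MSUN_G = 1.989e33  # grams per solar mass

def constant_wind_speed(eta=2.0, chi=1.0, eps_sn=4e48):
    """Wind speed from the SH03 energy balance (eq. 1):
    (1/2) * eta * v_w^2 = chi * eps_sn per unit mass of stars formed.
    eps_sn is in erg per Msun of star formation; returns v_w in km/s."""
    v_cgs = math.sqrt(2.0 * chi * eps_sn / (eta * MSUN_G))  # cm/s
    return v_cgs / 1e5

v = constant_wind_speed()
# ~450 km/s for these fiducial inputs; SH03 quote 484 km/s, the small
# difference tracing back to the precise value adopted for eps_SN.
```

The point of the sketch is the scaling v_w ∝ √(χ ǫ_SN / η): a higher mass loading at fixed supernova energy necessarily means slower winds.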
Values for f_L are taken from observations: Martin (2005) found f_L ∼ 2, and Rupke et al. (2005) measured f_L ≈ 1.05 − 2, for local starbursts. In the momentum-driven wind scenario, lower metallicity star formation should produce stronger outflows owing to greater ultraviolet (UV) flux output; we account for this using the stellar models of Schaerer (2003). Specifically, we set

f_L = f_L,⊙ × 10^(−0.0029 (log Z + 9)^2.5 + 0.417694) ,  (4)

where f_L,⊙ is randomly chosen between 1.05 − 2.0, and Z is the gas particle's metallicity. We term this momentum-driven wind model ("vzw"). In contrast with the cw model, v_wind and η depend on galaxy size. We note that this momentum-driven wind implementation differs from our more recent works (e.g. Oppenheimer ) that use the galaxy mass to calculate σ. However, as we showed in , this choice makes little difference at z = 3, and only becomes critical at z ≲ 2 when large potential wells with hot gas develop. Furthermore, our implementation also differs from that in Tescari et al. (2009), who use a calibrated relationship to the halo mass to determine σ. Finally, for comparison we also evolve a model with no galactic outflows (no winds, "nw"). Note that this model still includes thermal supernova feedback within the context of the subgrid multiphase ISM model of Springel & Hernquist (2003a). It is a poor match to many observations, but provides a baseline to assess the impact of outflows. The suite of three models employed here are a subset of those in OD06.

2.3 Distribution of neutral gas

Here we describe how we calculate the distribution of neutral gas in each simulation. There are two significant complications for this: first, in the multi-phase ISM model of Springel & Hernquist (2003a) that we employ, only some fraction of each particle is actually neutral. Second, since we do not perform detailed radiative transfer, we must account for the effects of self-shielding that can mitigate the strength of the photoionising flux.
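The momentum-driven wind scalings of eqs. (2)-(4) can be sketched as follows. This is an illustrative reimplementation, not the simulation code; it assumes Z is the absolute metal mass fraction, so that the exponent in eq. (4) vanishes near solar metallicity (Z ≈ 0.02) and f_L → f_L,⊙ there:

```python
import math

SIGMA0 = 300.0  # km/s, normalisation of eq. (3)

def f_lum(metallicity, f_lsun):
    """Metallicity-dependent luminosity factor of eq. (4).
    f_lsun would be drawn randomly from 1.05-2.0 per event."""
    return f_lsun * 10.0**(-0.0029 * (math.log10(metallicity) + 9.0)**2.5
                           + 0.417694)

def vzw_wind(sigma, f_l):
    """Momentum-driven wind scalings, eqs. (2)-(3).
    sigma in km/s; returns (v_wind in km/s, mass loading eta)."""
    v_wind = 3.0 * sigma * math.sqrt(f_l - 1.0)
    eta = SIGMA0 / sigma
    return v_wind, eta
```

Note the key contrast with the cw model: for a small galaxy (low σ) the wind is slow but heavily mass-loaded, while for a massive galaxy it is fast but carries relatively little mass per unit star formation.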
The first issue is straightforward to handle, at least to a sufficient approximation. In our multi-phase model, each particle above a critical density for fragmentation is assumed to consist of a cold phase at 1000 K plus a hot phase at 10^6 K. The cold phase dominates the mass fraction, ranging between 84 − 100% (for primordial gas). Since we are only interested in the cold phase, we assume all particles above the fragmentation density to have 90% of their mass in the cold phase at 1000 K, and ignore the hot phase. Note that with metal cooling included, the fragmentation scale can depend on particle metallicity (OD06) and the cold mass fraction can range slightly lower, but to the level of approximation required here this is not important. In detail, the cold phase is probably confined to even smaller sub-particle scale clumps, but as long as the overall cross-section of these clumps is close to unity on galactic scales (as is found observationally when examining the H i content of galaxy disks, e.g. Zwaan et al. 2005), then the assumption of smoothly-distributed gas within the particle seems reasonably valid. Accounting for self-shielding requires more significant approximations. The observed H i column density distribution shows an excess of absorbing systems with N(HI) ≳ 2 × 10^20 cm^−2 when the distribution is extrapolated from the low column density region populated by the Lyman α forest (Lanzetta et al. 1991; Storrie-Lombardi & Wolfe 2000). This excess has been explained as a self-shielding effect typically causing a sharp transition layer from mostly ionised regions to mostly neutral regions (Murakami & Ikeuchi 1990; Petitjean 1992; Corbelli, Salpeter, & Bandeira 2001). Hence self-shielding is a critical aspect for setting the ionization level of gas in DLAs. Owing to our lack of a full radiative line transfer treatment of Lyα photons, we resort to a simpler estimate for self-shielding.
HSR98 use a pure density criterion which assumed that all the gas with a number density above 0.01 cm^−3 is fully self-shielded, while the rest is subject to the full metagalactic UV flux. In reality, such a threshold depends on the size of the collapsed object and the local strength of the UV background, and hence should be different at each local position. Our approach is to have two criteria to identify a gas particle as "self-shielded": a maximum temperature and a minimum density. We choose a single value for each criterion within each wind model, and use the observed abundance of DLAs to constrain this value. We will demonstrate that the DLA abundances are not sensitive to the choice of temperature threshold, and that the DLA kinematics obtained using different (reasonable) thresholds do not differ significantly. For densities, we will employ comoving densities of ρ_θ = (40, 80) ρ_crit, where ρ_crit is the critical density at z = 0, as two thresholds that span a reasonable range. These correspond to (1000, 2000) ρ_b, or a baryon density of approximately (0.014, 0.028) cm^−3 (physical). In Table 1 we summarise our models. The names signify the wind model and ρ_θ. Also shown are the DLA number densities per unit redshift, and the number of randomly chosen lines of sight (LOS) in our DLA sample. These LOS will be used to investigate the kinematics of each model. Models in boldface, cw40 and vzw80, are the ones that broadly match the observed number density of DLAs, as we will discuss further in §3.1, while also having an observationally-consistent cosmic star formation rate at z ∼ 3. When we need to compare these wind models with the no-wind case, we choose nw80 as a representative, although this model produces too many DLAs and strongly overpredicts the cosmic star formation rate.
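The correspondence between the quoted thresholds can be verified with a short calculation. The sketch below (an illustration, with ρ_crit = 1.8788 × 10^−29 h² g cm^−3 assumed, and helium ignored so that n_H ≈ ρ/m_H, which is what reproduces the quoted 0.014 cm^−3):

```python
RHO_CRIT_CGS = 1.8788e-29  # critical density today, h^2 g/cm^3
M_H = 1.6726e-24           # hydrogen mass, g

def shield_threshold_nh(rho_theta_in_rhocrit=40.0, h=0.7, z=3.0):
    """Physical hydrogen number density at redshift z corresponding to a
    comoving density threshold given in units of rho_crit(z=0)."""
    rho_comoving = rho_theta_in_rhocrit * RHO_CRIT_CGS * h**2
    rho_physical = rho_comoving * (1.0 + z)**3
    return rho_physical / M_H  # cm^-3

# Note also that 40 rho_crit = 1000 rho_b exactly when Omega_b = 0.04,
# since 40 / 0.04 = 1000, consistent with the equivalence quoted above.
```

Running this gives ≈ 0.014 cm^−3 for ρ_θ = 40 ρ_crit at z = 3 and ≈ 0.028 cm^−3 for ρ_θ = 80 ρ_crit, matching the values in the text.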
Note that in order to bring the no-wind model into agreement with the number density of DLAs, we need a very high choice of ρ_θ that is likely to be unphysical (in the sense that this density well exceeds the threshold density for our multi-phase ISM model). This further highlights the impact that outflows have in regulating the amount of neutral gas in DLAs and the cosmos.

2.4 DLA absorption profiles

We now identify DLAs and calculate the absorption profiles for low-ionisation species. We first perform self-shielding corrections to the neutral gas content of individual gas particles as described in the previous section. To identify DLAs, we make an 8000×8000 H i map projected through the entire simulation volume. We randomly choose 2000 pixels with N_HI > 10^20.3 cm^−2, and compute H i Lyα spectra along lines of sight (LOS). We then identify the individual absorbing region along the LOS that is a DLA. In less than 1% of the cases, the total N_HI of any single absorber along the LOS is below the DLA threshold, even though the total column exceeds it; we discard those LOS. The actual numbers of identified DLAs for each model are listed in Table 1 (where N_sample denotes the number of DLAs with ∆v > 30 km/s satisfying eq. 9). To obtain DLA kinematics, we attempt to mimic the observational procedure of identifying low-ionisation, relatively unsaturated metal lines that can trace the gas motions. Although we track metal enrichment in our simulations, we choose not to attempt to make spectra from the metals directly. This is because (1) we don't accurately track the ionisation field, and (2) as explained above, the tracking of metals in outflows near galaxies is hampered by limited numerical resolution and explicit hydrodynamic decoupling. Instead, we tie the metal abundance to the H i abundance with values representative of fully neutral gas. We choose Si ii (1808 Å) as our representative low-ionisation metal tracer, which was used in HSR98 and is generally a good tracer for DLA kinematics.
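The sightline-selection step described above (random pixels above the DLA column threshold) can be sketched in a few lines. This is an illustrative stand-in, with the map represented as a hypothetical nested list of column densities rather than the simulation's actual projected grid:

```python
import random

N_DLA_THRESH = 10**20.3  # cm^-2, the DLA column density threshold

def draw_dla_sightlines(nhi_map, n_los, seed=42):
    """Randomly draw up to n_los pixels whose H I column exceeds the DLA
    threshold, mimicking the LOS selection from the projected map.
    nhi_map is a 2D list of column densities in cm^-2."""
    rng = random.Random(seed)
    candidates = [(i, j)
                  for i, row in enumerate(nhi_map)
                  for j, nhi in enumerate(row)
                  if nhi > N_DLA_THRESH]
    return rng.sample(candidates, min(n_los, len(candidates)))
```

Each returned pixel coordinate would then seed a spectrum computation along that line of sight.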
We assume a uniform silicon abundance of [Si/H] = −1, and take the oscillator strength from Tripp, Lu, & Savage (1996). Since the ionisation potentials of Si i and Si ii are 8.15 eV and 16.3 eV, respectively, which brackets H i, we assume that Si ii is dominant and exists only in self-shielded regions and in cold ISM gas. This gives an abundance of n_SiII/n_HI = 3.24 × 10^−6. We multiply the H i optical depths along each spectrum by this value, along with the oscillator strength difference, to obtain the Si ii optical depths, thereby generating spectra from which we can examine DLA kinematics. Our results are not sensitive to the specific abundance and ionization choice, because we are primarily interested in the redshift-space extent of metal systems rather than the strengths of the systems themselves. Example randomly-chosen spectra are shown along the right side. Such spectra are analyzed as described in §3.2 in order to obtain kinematic statistics for each DLA.

2.5 Identification of galaxies and dark matter halos

Besides DLA kinematics, we are also interested in studying the relationship between DLAs and galaxies in our simulations. Hence, we must identify galaxies in our simulations, and we do so using SKID (Katz et al. 1996a), which identifies density watersheds using spline kernel interpolation, and then finds surrounding groups of gas and star particles that are gravitationally self-bound within these density watersheds. We apply SKID to all the star particles and gas particles that satisfy ρ_gas/ρ̄_gas > 10^3 and T < 30,000 K, and all particles (regardless of temperature) above the multiphase threshold of ≈ 1000 ρ̄_gas. Figure 2 shows the galaxy baryonic mass function for our three simulations. We take our galaxy mass resolution limit to be 32 m_SPH = 1.55 × 10^7 M⊙, shown as the vertical line.
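The quoted number ratio n_SiII/n_HI = 3.24 × 10^−6 follows directly from [Si/H] = −1 once a solar silicon abundance is adopted. A minimal sketch, assuming the solar value log(Si/H) + 12 ≈ 7.51 (our assumption; the paper quotes only the resulting ratio):

```python
def si2_to_hi_ratio(si_h=-1.0, solar_log_si=7.51):
    """Number ratio n_SiII/n_HI, assuming all silicon is singly ionised
    in the self-shielded gas. si_h is [Si/H]; solar_log_si is the
    assumed solar abundance log(Si/H) + 12."""
    return 10.0**(solar_log_si - 12.0 + si_h)

ratio = si2_to_hi_ratio()  # ~3.2e-6, matching the quoted 3.24e-6
```

Scaling every H i optical depth by this ratio (times the oscillator-strength and wavelength factors) is what converts a Lyα optical-depth spectrum into the Si ii 1808 Å spectrum used for the kinematics.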
This limit appears to be conservative, as the mass function doesn't turn over severely until much lower masses, but detailed resolution testing shows that the individual galaxy star formation histories are poorly converged below this mass owing to stochastic effects and the fact that the densities cannot be adequately resolved. We believe the inflection point in the no-wind case corresponds to the filtering mass (e.g. Gnedin 2000), which is the mass around which galaxies are significantly suppressed by metagalactic photoionization. Galaxies below this mass formed prior to turning on our spatially-uniform ionizing background instantaneously at z = 9 (Haardt & Madau 2001). A more realistic radiative transfer simulation would not yield such instantaneous reionization, and would likely not produce such a feature. The wind models have much lower stellar masses for a given halo mass, and hence the effects of filtering are relegated to much smaller masses, at or near our resolution limit.

Figure 2. The mass function of gravitationally bound groups found by SKID for the three models: no winds (nw, red solid), constant winds (cw, green dashed) and momentum-driven winds (vzw, blue dotted). We resolve galaxies down to a baryonic mass of 1.55 × 10^7 M⊙, indicated by the vertical dotted line.

While these effects on the mass function are interesting, they are not immediately relevant to DLAs, so we do not discuss them further. To identify dark matter halos we use a friends-of-friends (FOF) algorithm (Davis et al. 1985), which finds groups of dark matter particles by linking neighbouring particles within a given linking length. We choose the linking length to be the interparticle distance at one-third of the virial overdensity ρ_vir/ρ̄, which is the local density at the virial radius for an isothermal sphere, and we take ρ_vir from Kitayama & Suto (1996).
To define the virial mass, virial radius, and the circular velocity of the dark matter halo we refine the group using a spherical overdensity (SO) criterion. In SO, the halo centre is set to be the location of the most bound FOF particle. Then we expand the radius around this centre until the mean overdensity inside the radius equals ρ_vir/ρ̄. We define this radius to be R_vir, the mass within R_vir to be M_vir, and the circular velocity v_c = √(G M_vir/R_vir). In the end, we obtain a sample of about 2000 DLAs and ∼ 3000 galaxies within each of our simulations. We will now analyze these systems to understand the impact of outflows on DLA kinematics and the relationship between DLAs and galaxies.

3 PHYSICAL PROPERTIES OF DLAS

3.1 DLA number densities

The observed redshift-space abundance of DLAs, dN/dz = 0.26 ± 0.05 (z = 3; Storrie-Lombardi & Wolfe 2000; Prochaska et al. 2005), provides a key constraint for DLA models. In our case, we use it to constrain our self-shielding criteria, ρ_θ and T_θ. Figure 3 shows our derived abundance for each wind model assuming different ρ_θ and T_θ. Remember that we assume that all gas with a density greater than ρ_θ and a temperature less than T_θ is fully self-shielded. Overall, changing T_θ has little effect on the abundance, while ρ_θ has a large effect. The no-wind (nw) model has too much neutral gas to match the observations within a large range of choices for ρ_θ. For the two wind models, we choose two self-shielding criteria, ρ_θ = 40 and ρ_θ = 80 for the cw model and the vzw model, respectively, to approximately match the observed dN/dz. A density of ρ_θ = 40 corresponds to 1000 ρ_b and n_H = 0.014 cm^−3 in primordial composition gas, which was adopted in HSR98 as their shielding criterion. So the cw model has the same density threshold value as HSR98 while the vzw model has one twice as large. Our resolution and inclusion of metal line cooling likely affects the choice of threshold values.
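The SO circular velocity v_c = √(G M_vir/R_vir) is a simple unit exercise; the sketch below (with illustrative halo numbers, not values from the paper) converts it into km/s:

```python
import math

G_CGS = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33    # g
KPC = 3.086e21     # cm

def circular_velocity(m_vir_msun, r_vir_kpc):
    """Halo circular velocity v_c = sqrt(G M_vir / R_vir) in km/s,
    given the SO virial mass (Msun) and virial radius (kpc)."""
    v_cgs = math.sqrt(G_CGS * m_vir_msun * MSUN / (r_vir_kpc * KPC))
    return v_cgs / 1e5
```

For example, a hypothetical 10^11 M⊙ halo with R_vir = 50 kpc has v_c ≈ 93 km/s, comfortably above the 50 − 70 km/s scale below which Barnes & Haehnelt (2009) argue neutral gas must be removed.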
Figure 4 shows H i column density maps projected along the x-axis for the nw80, cw40, and vzw80 models. We see substantial differences in the neutral gas distributions produced by these three different feedback implementations. Without winds, the DLA-absorbing gas is highly concentrated within star-forming regions. In the constant wind (cw) case, the absorption is somewhat more extended, but the high outflow velocities generally cause the outflowing gas to be heated, thereby lowering their neutral fractions and hence their densities. The momentum-driven wind model (vzw) produces the greatest extent of high-column density gas. This foreshadows our result that the vzw model will yield the largest DLA kinematic widths. The more extended gas distribution causes the vzw model to be most sensitive to ρ_θ. This subtle dependence arises because the abundance is sensitive to cross-section, since an extended distribution results in more gas lying near the threshold density. So the curves in Figure 3 partly reflect the different gas topologies that result from the different outflow models. In contrast, the sensitivity to temperature threshold is negligible because most of the gas for any reasonable choice of ρ_θ lies at ∼ 10^4 K, below which we truncate radiative cooling in our simulations. As an aside, we note that the filamentary gas structures that are responsible for feeding galaxies with fresh fuel via the cold mode accretion scenario (e.g. Kereš et al. 2005; Dekel et al. 2009) are generally not sufficiently dense to produce DLA absorption, and instead typically have N_HI ≲ 10^17 cm^−2. Only in the vicinity of galaxies does the gas start to self-shield, and the DLAs tend to be confined to even denser gas within and around galaxies.

3.2 DLA kinematics

We now examine the primary target of our investigation: the kinematics of simulated DLAs and how they compare to observed kinematics.
PW97 developed a suite of statistics to describe the one-dimensional distribution of metal absorption lines within a single DLA. To generate these statistics from our simulated DLAs, we adopt the following procedure: (i) We generate absorption lines by tying Si ii to H i as described in §2.4. (ii) We smooth the absorption lines with a 19 km/s tophat window, which is equivalent to the 9-pixel tophat smoothing done in PW97. (iii) From this profile, we calculate four key quantities, as defined in PW97: the velocities of the largest (v_pk) and second largest peak absorption (v_2pk), the median absorber velocity (v_med), and the system velocity width (∆v) defined by ∆v ≡ |v95 − v5|, where v95, v5, and v_med represent the positions at which the cumulative optical depth reaches 95%, 5%, and 50% of the total optical depth. We note that we compute these quantities directly from the Si ii optical depths, while PW97 compute them from apparent optical depths obtained from the flux; we expect these quantities to be similar, particularly for the typically unsaturated metal lines in DLAs. (iv) From these quantities, we calculate the DLA kinematic statistics from PW97:

f_mm = |v_med − v_mean| / (∆v/2) ,  (6)
f_edg = |v_pk − v_mean| / (∆v/2) ,  (7)
f_2pk = ± |v_2pk − v_mean| / (∆v/2) ,  (8)

where the plus sign for the two-peak fraction, f_2pk, holds if the second peak is between the mean velocity and the first peak velocity; otherwise the minus sign holds. If there is no second peak, we take the edge-leading fraction, f_edg, for the second peak fraction f_2pk. Here, v_mean ≡ (v95 + v5)/2.

Figure 3. DLA abundances for each model assuming various self-shielding criteria. The temperature dependence is weak, while the density dependence is strong. The red arrows indicate the value we choose for ρ_θ. Note that the values in this figure are slightly larger than the values in Table 1 because we used 2000×2000 maps for this figure and 8000×8000 maps for our main results as presented in Table 1.
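The width and median-mean statistics above can be sketched directly from a sampled optical-depth profile. This is a minimal illustration of ∆v and f_mm only (the smoothing step and the peak-based statistics f_edg and f_2pk are omitted), assuming a uniform velocity grid:

```python
def pw97_widths(v, tau):
    """Compute the PW97 velocity width Delta v = |v95 - v5| and the
    median-mean statistic f_mm = |v_med - v_mean| / (Delta v / 2) from
    equal-length lists of velocities (km/s) and optical depths."""
    total = sum(tau)
    cum, marks = 0.0, {}
    for vi, ti in zip(v, tau):
        cum += ti
        for frac in (0.05, 0.5, 0.95):
            # record the first velocity at which each cumulative
            # optical-depth fraction is reached
            if frac not in marks and cum >= frac * total:
                marks[frac] = vi
    v5, v_med, v95 = marks[0.05], marks[0.5], marks[0.95]
    delta_v = abs(v95 - v5)
    v_mean = 0.5 * (v95 + v5)
    f_mm = abs(v_med - v_mean) / (delta_v / 2.0)
    return delta_v, f_mm
```

As a sanity check, a symmetric Gaussian profile of dispersion σ gives ∆v ≈ 3.29σ (the 5th-95th percentile span) and f_mm ≈ 0, as expected for an unskewed absorber.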
To avoid saturation effects that blur the kinematic information, and to exclude noise contamination in the observations, we again follow PW97 by only using profiles with peak intensities I_pk in the range

0.1 ≲ I_pk/I_0 ≲ 0.6 ,  (9)

where I_0 is the continuum around the absorption line; this corresponds to peak optical depths between 0.5 and 2.3. This removes a significant portion of the total DLA sample, roughly 60% for nw, 45% for cw, and 30% for vzw. The variance amongst models in the acceptance fraction based on this criterion suggests that this may be another way to constrain outflow models. The most crucial statistic is ∆v, the system velocity width. It represents the velocity-space extent of the dense neutral absorbing gas, and hence encodes information about internal motions within the ISM as well as any inflow- or outflow-induced motions. The other statistics, f_mm, f_edg, and f_2pk, turn out to be less discriminatory, but we include them for completeness. f_mm measures the skewness of the overall absorption in the DLA; a symmetric distribution of optical depths would yield f_mm = 0. The edge-leading test f_edg would be 0 if the strongest absorption is at the kinematic center, but is large if the kinematics are dominated by rotation or infall where the strongest absorption occurs at large velocities from the center. The 2-peak test f_2pk is designed to distinguish between rotation and infall: in the case of rotation, the second peak is expected to be on the same side as the first (and hence yield a positive value), whereas spherical and symmetric accretion would produce the peaks on opposite sides, yielding a negative value (PW97). While these statistics were devised to distinguish between simple scenarios, the complex interplay between infall, outflow, and rotation within a fully hierarchical context precludes such straightforward interpretations.
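The saturation cut of eq. (9) translates into a simple optical-depth window via I/I_0 = e^(−τ), so that I_pk/I_0 = 0.6 corresponds to τ ≈ 0.51 and I_pk/I_0 = 0.1 to τ ≈ 2.3. A minimal sketch of the selection:

```python
import math

def passes_saturation_cut(tau_peak):
    """Eq. (9): keep only profiles whose peak intensity satisfies
    0.1 <= I_pk/I_0 <= 0.6, i.e. peak optical depths between
    -ln(0.6) ~ 0.5 and -ln(0.1) ~ 2.3."""
    i_ratio = math.exp(-tau_peak)
    return 0.1 <= i_ratio <= 0.6
```

Profiles failing on the low-τ side are too weak to measure reliably against noise; those failing on the high-τ side are saturated, so their apparent optical depths no longer trace column density.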
Hence we focus on the distributions of these statistics among DLA samples (both observed and simulated), and use Kolmogorov-Smirnov (K-S) tests to characterize their (dis)agreement. In Table 1 we summarise the number of simulated absorption lines, N_sample, taken from each model. For the observations, we will compare to 46 observed lines of sight from Prochaska & Wolfe (2001). Note that their paper only presented the distribution of ∆v; the other quantities were kindly provided by X. Prochaska (private communication). Before we compare the simulations with the observations, we must consider the issue of velocity resolution effects in the observed DLA sample. The 9-pixel smoothing of PW97 effectively provides a minimum of ∆v_min ∼ 20 km/s, and indeed all of the observed DLAs have ∆v > 20 km/s. But HSR98 adopted a minimum threshold of ∆v_min = 30 km/s to avoid any possible incompleteness of the DLA sample in the range of ∆v = 20 − 30 km/s. Unfortunately, this threshold can bias the statistics, as it turns out that there are a significant number of smaller halos that host DLAs having ∆v within this range. To test this incompleteness issue, we present results for both velocity thresholds of 20 km/s and 30 km/s. However, we note that the observed sample may be significantly incomplete for the 20 km/s threshold, and may not be a fair comparison to the models, hence we prefer to compare to the 30 km/s case. As it happens, our main conclusions are unaffected by this choice. Figure 5 plots histograms of the four PW97 statistics for our three outflow models, along with their K-S test values as compared to the observed statistics, with ∆v_min = 30 km/s. The upper left panel demonstrates the central result of our paper: models with outflows provide a much better match to PW97's velocity width statistic than our no-wind case.
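The model-data comparisons above rest on the two-sample K-S D statistic, the maximum distance between the two empirical CDFs. A minimal pure-Python sketch (the acceptance levels quoted in the text would follow from the usual asymptotic formula, omitted here):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov D statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d
```

Applied to the simulated and observed ∆v samples, D = 0 would indicate identical width distributions, while D → 1 indicates essentially disjoint ones; a large D for the no-wind model and a small one for vzw is the quantitative content of the comparison in Figure 5.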
In particular, the momentum-driven wind simulation provides an excellent match to the data with a K-S acceptance level of 0.39, although the constant wind case cannot be definitively ruled out. The rest of the paper will mostly be devoted to understanding the physical origin of the differences among models for the velocity width statistic ∆v. We will see that the variations arise because the differences between wind models are most significant for small halos. The other PW97 tests generally show acceptable K-S values for all models when compared to the data, at least for ∆v_min = 30 km/s; at ∆v_min = 20 km/s the vzw case is again clearly favored, as seen in Table 2. The median-mean statistic f_mm (Figure 5, top right) shows that the cw model produces slightly more skewed DLAs than the vzw and nw cases, resulting in marginally poorer agreement in the K-S test, but not so significant as to reject the model. The edge-leading statistic f_edg (bottom left) tells a similar story: nw produces the fewest edge-leading profiles and cw the most, but none of the models can be excluded by the data. The two-peak test (bottom right) shows that most galaxies show a rotation signature in their centers, and that interception with infalling or outflowing gas is rare, though it is more frequent in the wind models. This last statistic shows the most disagreement between the simulations and the data, and none of the models is obviously favored. It is unclear why this discrepancy is strongest, since it is a difficult statistic to interpret. However, we note that the thick disk model favoured by PW97 shows a similar distribution, and hence is expected to have roughly the same K-S value. Our no-wind results are significantly different from those of HSR98, who found broad agreement between their simulations (without strong outflows) and the data.
The discrepancy arises because they extrapolate the relationship between DLA cross-section and halo mass to low halo masses in order to obtain cosmologically-based statistics for their DLA velocity widths. In §3.4.3 we calculate this relationship directly from our simulations, and show that their extrapolation is not valid. Specifically, our no-wind simulation shows considerably increased cross-section over such an extrapolation. This is the main effect that causes our no-wind case to be in poor agreement with data, as compared to HSR98. We note that our numerical resolution is similar to, in fact slightly better than, HSR98. Since HSR98 simulated individual galaxies while we employ a full cosmological simulation, we are able to select DLAs based upon their absorption cross-sections to match the observed abundance of DLAs (which HSR98 could not do). This enables us to more accurately characterize the simulated DLA population and more robustly compare it to observations. HSR98 also noted that their results depended on the assumed self-shielding criterion. In Table 2 we present the results for different self-shielding criteria as well as for different minimum velocity thresholds. The K-S tests indicate that the vzw model is consistent with the observations for either choice of the threshold density, while the no-wind case is strongly discrepant in ∆v regardless of threshold choice. Hence reasonable variations in the self-shielding criteria do not significantly impact our overall conclusions. Hence while it is certainly possible to construct more sophisticated self-shielding models even without radiative transfer (e.g. Popping et al. 2009), or even to do full post-processing radiative transfer (Pontzen et al. 2008), we do not expect that this will strongly impact our results. In summary, galactic outflows appear to be capable of reconciling CDM-based models of galaxy formation with observed DLA kinematics. This is the most important conclusion from our work.
Such statistics also provide constraints on wind models, and in the limited tests conducted here we favour momentum-driven wind scalings over constant wind scalings. We now investigate these trends more deeply by studying the properties of galaxies and halos that give rise to DLAs, in order to understand the physical reasons behind the success of outflows in general, and the momentum-driven wind scalings in particular.

Distance to DLA host galaxies

In our simulations, DLAs are generally associated with galaxies. This is believed to be true in the real Universe, although it is usually only possible to image the host galaxies at low redshifts, where galaxies can be separated from the bright background quasar. On the other hand, DLAs seem to deviate from well-established properties of galaxies such as the Kennicutt-Schmidt relation (Wolfe & Chen 2006). Furthermore, Wolfe et al. (2008) identified a bimodality in DLAs, in which they divide into systems with high and low [CII] 158 µm cooling rates, with the former having properties similar to Lyman Break Galaxies and the latter perhaps arising in a different sort of population. Hence the relation between DLAs and galaxies remains uncertain. Here we examine some basic properties of the relationship between DLAs and galaxies in our simulations. To connect DLAs to their host galaxies and their host dark matter halos, we need to define the line-of-sight positions of DLAs. Since it is somewhat arbitrary to define the position of diffuse gas, we consider two definitions: x50 ≡ the line-of-sight position where 50% of the optical depth is reached; and xave ≡ 0.5(x30 + x70), where x30 and x70 are defined analogously to x50. We also define an uncertainty on the line-of-sight position as ∆x ≡ |x50 − xave|. In the vast majority of cases, ∆x is quite small, so the distance is fairly unambiguous.
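These position definitions are just optical-depth quantiles along the sightline; a minimal sketch, with illustrative Gaussian clump profiles rather than simulation data:

```python
import numpy as np

def tau_percentile(x, tau, frac):
    """Line-of-sight position where `frac` of the total optical depth
    has been accumulated."""
    cum = np.cumsum(tau)
    return x[np.searchsorted(cum / cum[-1], frac)]

def dla_position(x, tau):
    """Return (x50, xave, dx) as defined in the text: x50 is the 50%
    optical-depth position, xave = 0.5*(x30 + x70), and
    dx = |x50 - xave| flags multi-clump sightlines."""
    x50 = tau_percentile(x, tau, 0.50)
    xave = 0.5 * (tau_percentile(x, tau, 0.30) + tau_percentile(x, tau, 0.70))
    return x50, xave, abs(x50 - xave)

x = np.linspace(0.0, 1.0, 10001)   # position in units of the box length

# Single symmetric clump at x = 0.4: x50 and xave agree, so dx ~ 0.
tau1 = np.exp(-0.5 * ((x - 0.4) / 0.02) ** 2)
x50, xave, dx = dla_position(x, tau1)

# Two unequal clumps: the quantiles disagree, so dx is large and the
# sightline would fail a dx < 0.01 tolerance cut.
tau2 = np.exp(-0.5 * ((x - 0.3) / 0.02) ** 2) \
     + 0.5 * np.exp(-0.5 * ((x - 0.8) / 0.02) ** 2)
x50_2, xave_2, dx2 = dla_position(x, tau2)
print(dx, dx2)
```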
The few percent of cases where it is not are typically caused by two or more separate gas clumps along the line of sight contributing to a single system. For the remainder of this paper, we will exclude cases where ∆x > 0.01 in units of the simulation box length (i.e. 28.7 proper kpc), since these systems are difficult to relate unambiguously to galaxies. Specifically, this excludes 1.5%, 6.9%, and 3.3% of DLAs in the nw, cw, and vzw cases, respectively. We also examined a tolerance criterion of ∆x = 0.001, with negligible difference in the final results. The top panel of Figure 6 shows the projected distances, D_proj, of DLAs versus their column densities for all three models. In general, the median distance is around ∼ 2 kpc (distance units are physical unless noted) in all models. There is a weak trend for higher-column systems to arise closer to galaxies. Almost all systems in all cases arise within 10 kpc of a galaxy. Hence in our simulations, DLAs generally arise in the extended neutral gas that is present in and around galaxies at these epochs. The middle panel shows the cumulative distribution of projected distances, P(> D_proj), for each wind model. The no-wind case shows a qualitatively different behavior than the wind runs: it shows somewhat larger typical distances at small separations, but beyond 3 kpc the nw case shows very few DLAs, while the wind models possess a significant tail to high projected separations. Hence about 95% of the DLAs in the no-wind case have distances smaller than 5 kpc, but outflows substantially puff out the neutral gas, as shown in Figure 4.

Figure 6. The projected distance, D_proj, from the host galaxy (top panel), its cumulative distribution (middle panel), and the cumulative distribution of D_proj/D, which characterises the angular distribution (bottom panel), for all three models as labelled. The purple dotted curve has the form 1 − √(1 − (D_proj/D)²), which is what one would expect for an isotropic distribution of small blobs.
The no-wind case is closest to having a central blob topology, while winds (particularly vzw) move the gas distribution closer to isotropic.

In the vzw case, 95% of DLAs are within 10 kpc. Since the mass loading factor is larger for small halos in the vzw model than in the cw model, the momentum-driven wind model is most efficient at puffing out the neutral gas from the more numerous and gas-rich small systems. One can quantify the topology of DLA absorption by examining the ratio of projected distance D_proj to real distance D, shown in the bottom panel. First, let us consider some simple illustrative cases. If DLAs come from small blobs that are distributed isotropically around host galaxies, the cumulative distribution should follow the purple dotted line, P(< D_proj/D) = 1 − √(1 − (D_proj/D)²). This is likely similar to a scenario in which DLAs arise from randomly-distributed small disks within a galactic halo, as put forward by Maller et al. (2001). Conversely, if all DLAs are located in the cross-section plane, then the cumulative distribution should be a step function at a value set by the angle of the plane relative to the line of sight. A special case of this is if the DLA owes to a single, spherical gas blob centered on a galaxy ("central blob"), in which case the distribution will be a step function around unity, since D_proj ≈ D. Now let us examine the actual distributions. The three models generally lie between the isotropically-distributed blob and central blob geometries, not surprisingly since these are the extreme cases. In more detail, the no-wind model lies distinctly more towards the central blob case, because most of the DLA cross-section is concentrated in the central galaxy. Conversely, the wind models follow more closely the isotropically-distributed blob case, with the vzw case more so than the cw.
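The isotropic-blob curve can be checked with a quick Monte Carlo: for blobs at a fixed distance D from the galaxy in uniformly random directions, with the line of sight along one axis, D_proj/D = sin θ with cos θ uniform in [−1, 1], which reproduces P(< D_proj/D) = 1 − √(1 − (D_proj/D)²). A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Isotropic directions: cos(theta) uniform in [-1, 1]; taking the line
# of sight along z, the projected-to-real distance ratio is sin(theta).
cos_t = rng.uniform(-1.0, 1.0, n)
ratio = np.sqrt(1.0 - cos_t ** 2)

# Compare the empirical CDF with 1 - sqrt(1 - s^2) at a few values of s.
for s in (0.3, 0.6, 0.9):
    empirical = np.mean(ratio < s)
    analytic = 1.0 - np.sqrt(1.0 - s ** 2)
    print(s, empirical, analytic)
```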
This means that outflows change the simple centrally-dominated neutral gas topology of the no-wind model to a more complex and extended gas topology. This is consistent with the general impression one gets from examining the H i images in Figure 4. These topological trends directly correlate with the DLA kinematics examined in §3.2: the more the winds push out gas, the larger the velocity widths. The extra velocity can arise owing to the kinematics of the outflowing gas itself, or else the fact that the gas velocity field typically differs more from the galaxy systemic velocity as one goes farther out into the halo; we dissect these scenarios further in the next section. In either case, the effectiveness with which the momentum-driven winds push out gas into the halo, owing largely to the high mass loading factors in small galaxies, is critical for yielding its agreement with DLA kinematics.

3.4 Relations between DLAs, host galaxies, and host dark matter halos

The environment producing high-∆v DLAs

As pushing out the gas distribution into the surrounding halo seems to be an important aspect of reproducing DLA kinematics, we now investigate how halo environments impact the kinematics of DLAs. Figure 7 shows the relations between the velocity widths of DLAs, ∆v, and the mass of the nearest galaxy M_gal (top panel), the distance from the nearest galaxy D (second panel), the mass of the dark matter halo M_halo (third panel), and the circular velocity of the host dark matter halo vc (bottom panel) for the no-wind (red points) and the momentum-driven wind (blue points) models. The constant wind case is not shown for clarity, as its trends are intermediate between the two displayed cases. The M_gal − ∆v plot shows that ∆v does not have a simple relation to the mass of the host galaxy.
The nw model shows no significant trend, while the vzw model shows an interesting bimodal distribution for high velocity width DLAs, in which the nearest galaxies tend to be the most massive or the least massive. We will see below that these large-∆v systems tend to occur in massive halos, and hence we can deduce that the high-velocity tail in the vzw model occurs primarily around either massive (likely central) galaxies or low-mass galaxies within a massive halo. Overall, more of the absorption seems to come from low-mass galaxies in the vzw case, showing that the high mass loading factor in small systems produces significantly more DLA-absorbing cross section from these galaxies. The D − ∆v plot shows that the highest velocity widths occur relatively far out from the host galaxy, typically 2 − 10 kpc away. The overall median distance is approximately 2 kpc for the vzw case and 3 kpc for nw (shown as the tick marks above the vertical histograms), but most of the DLAs with ∆v > 150 km/s lie above these values. This means that if one wants to obtain large ∆v values, one has to arrange a significant amount of DLA cross-section at relatively large distances from the host galaxy. At smaller distances, the internal kinematics of galaxies is insufficient to produce large velocity widths. The vzw model, relative to the nw case, clearly produces more DLA-absorbing gas distributed farther away from galaxies. The M halo − ∆v plot (third panel) shows an envelope of increasing ∆v in larger halos. In the no-wind case, the envelope occurs at roughly an equality between vc and ∆v, showing that with only gravitationally-induced motion, it is rare to produce kinematics larger than the halo circular velocity. Conversely, the vzw case shows significant numbers of DLAs with widths greater than vc. This is of course key to producing large velocity widths, and we will explore this further in the next section. 
Note that a massive halo is a necessary but not sufficient condition for large velocity widths, as massive halos can host DLAs with small ∆v. These trends are mirrored in the circular velocity panel, which encodes information about the halo radius but otherwise shows very similar trends. From the above analysis, along with visual impressions from Figure 4, we can assemble a scenario for DLA absorption: small velocity width DLAs can be produced in all halos, but large velocity widths require a massive halo environment. In the momentum-driven wind scenario, this environment is one where a massive central galaxy is surrounded by numerous small galaxies in a clustered environment. The DLA then occurs at intermediate distances away from the nearest galaxy, which can often be the largest galaxy or one of the smaller systems. The wide DLA velocity widths are then produced either by sightlines intersecting multiple galaxies within a dense region of protogalactic clumps (HSR98; Maller et al. 2001), or by dense gas puffed up by outflows. The former can occur without winds, but the latter requires momentum-driven outflows that produce a puffier neutral gas distribution and raise the H i cross-section of small galaxies within larger halos, thereby yielding the correct fraction of wide systems.

Supergravitational motion

In the no-wind model there is no kinetic energy injection into the neutral gas beyond that provided by gravity. High velocity width DLAs with ∆v > vc can still arise owing to merging or infalling motion of neutral gas, as in HSR98, but this is rare. Conversely, outflows provide substantial non-gravitational kinetic energy, which is reflected in the significantly larger fraction of DLAs that have ∆v > vc, as seen in Figure 7 (fourth panel). We call the kinematics induced by non-gravitational energy "supergravitational motion".
The amount of supergravitational motion (∆v > vc) therefore quantifies the contribution of wind feedback to the kinematics of the neutral gas. Figure 8 shows the probability distribution p(∆v/vc) for each model. The probability distribution for the no-wind case is consistent with previous work (Haehnelt et al. 2000): it shows a strong maximum at ∆v/vc ≈ 0.5 and rapidly drops off to higher ∆v/vc, and the fraction of systems with supergravitational motion is only 5%. The wind models show a qualitatively different behaviour, with a less obvious maximum and a longer tail to high ∆v/vc. The fractions with supergravitational motion for the cw model and the vzw model are 13% and 22%, respectively. Without such high fractions of supergravitational motion, the models cannot reproduce the incidence of wide DLA velocity widths. Hence the distribution of gas by outflows, together with small-scale galaxy clustering, is critical for explaining DLA kinematics.

DLA cross-section of dark matter halos

To quantify the increase in the puffiness of the gas distribution in the wind models that is responsible for the wider velocity widths, we calculate the DLA cross-sections for each dark matter halo.

Figure 9. The DLA cross-section α (top panels) and the ratio µ ≡ α/(πR_vir²) (bottom) as a function of circular velocity vc for the three wind models. The top axis is labeled by the approximate halo mass. Dashed lines in the top panels indicate power-law fits to the vc > 80 km/s cross-sections. nw80 is consistent with Gardner et al. (1997a) for large halos, and cw40 is consistent with Nagamine et al. (2004); in both cases, there is extra absorption at low vc relative to an extrapolation from high vc. vzw80, in contrast, shows a pure power law with index β = 3.47 down to the smallest halos. The other two models have a larger cross-section for small halos, which is one reason why they produce too many low-∆v DLAs.
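Operationally, a halo's DLA cross-section is the number of pixels in its projected H i map above the DLA column-density threshold, multiplied by the pixel area. A sketch with a toy random map (the grid size, column-density distribution, and pixel area here are illustrative stand-ins, not the paper's 8000 × 8000 grid):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy projected N_HI map in cm^-2, log-uniform between 10^18 and 10^21.
n_hi = 10.0 ** rng.uniform(18.0, 21.0, size=(200, 200))

# Hypothetical pixel area in (comoving kpc)^2, for illustration only.
pixel_area = 0.1 ** 2

# Damped absorption threshold from the text: N_HI > 2e20 cm^-2.
dla_mask = n_hi > 2e20
cross_section = dla_mask.sum() * pixel_area
print(dla_mask.mean(), cross_section)
```

In the paper the pixels are additionally filtered on the kinematic criterion ∆v > 20 km/s before summing.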
The upper panels of Figure 9 show DLA cross-sections α in comoving kpc² for the nw80, cw40, and vzw80 models. We calculate the cross-section from the projected neutral hydrogen maps, including only those pixels that produced DLAs with ∆v > 20 km/s. Each DLA produces 3 points on this plot, for projections along each of the cardinal axes. The relation between α and vc has traditionally been characterized as a power law with a slope we call β, though as is evident from the plot, a pure power law is only a good description for the vzw80 model (upper right). We note that if the cross-section were proportional to halo mass, then β = 3, since M_h ∝ vc³ (e.g. Mo, Mao & White 1998); this turns out to be very roughly the case, though the departures from it are key for understanding DLA kinematics. The no-wind case shows a relatively shallow power law (i.e. low β) that results in large contributions to the DLA cross-section from small-vc systems. This causes a mean velocity width that is too small when the overall DLA abundance is matched. For comparison, we show the fit from the models of Gardner et al. (1997a), which also did not include winds but had much lower spatial resolution. Gardner et al. matched the observed DLA abundance, but in fact the cross-section in small systems was underpredicted compared to our current nw model. Hence, using higher resolution simulations without winds tends to exacerbate the disagreement with observed DLA kinematics. For the constant wind model, the power-law index is β = 3.88 for vc > 80 km/s. This wind model is identical to that used in Nagamine et al. (2004), but our slope is slightly larger than theirs because we fit the slope for vc > 80 km/s only. We choose this cutoff to highlight where the power law breaks in the cw model, especially in relation to the vzw case. For smaller values of vc, the behaviour of the cw model tends towards that of the no-wind case.
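A slope like β is obtained from the (vc, α) points by a least-squares fit in log space; a sketch with synthetic data (the input β = 3.47, the scatter, and the normalisation merely mimic the vzw-style fit for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic halos: alpha ∝ vc^beta with log-normal scatter.
beta_true = 3.47
vc = rng.uniform(40.0, 300.0, 500)                        # km/s
alpha = 1e-3 * vc ** beta_true * rng.lognormal(0.0, 0.3, 500)

# Fit log10(alpha) = beta * log10(vc) + const, restricted to
# vc > 80 km/s as in the text.
sel = vc > 80.0
beta_fit, const = np.polyfit(np.log10(vc[sel]), np.log10(alpha[sel]), 1)
print(beta_fit)  # close to the input 3.47
```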
Hence the constant-wind case is able to puff out gas to some extent, but still produces significant absorption in low-vc systems, which causes it to match the observed kinematics less well. Unlike the other two models, the vzw model shows no break in the vc versus cross-section relation, with an overall slope of β = 3.47. This low cross-section at small vc is critical for reproducing the observed kinematics. As argued in PW97 and Prochaska & Wolfe (2001), most CDM galaxy formation models fail to reproduce DLA kinematics because they predict too many small baryonic structures. The key aspect of the vzw model is that it has a high mass loading factor η in small galaxies, which lowers the cross-section of the smaller systems by ejecting more of their material. Indeed, the reduction in cross-section is roughly equal to (1 + η) in smaller halos, as expected from simple gas-removal arguments: cw is lower than nw by ∼ ×3, and vzw is increasingly suppressed towards smaller systems since η ∝ vc^−1. The lower panels of Figure 9 show µ = α/(πR_vir²), i.e. the ratio of the DLA absorption cross-section to the projected area of the dark halo. For the no-wind case µ is almost constant and cuts off rapidly for the smallest halos, and the cw case shows qualitatively similar behavior, lowered by ∼ ×3 and with somewhat more scatter. In contrast, in the vzw wind model µ monotonically increases with halo circular velocity, and hence there is relatively more DLA cross-section at larger circular velocities that can give rise to high ∆v values. In Figure 10 we examine the global cross-sections to quantify which halo masses are predominantly responsible for DLA absorption, plotted cumulatively (top) and differentially (bottom) versus halo mass. We define the cumulative global cross-section to be

Σ(< M_halo) = ∫_0^{M_halo} α(M_h) n(M_h) dM_h.   (10)

The cumulative global cross-section reiterates the result that the vzw model is very efficient at suppressing DLA absorption in small halos relative to larger ones.
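Given tabulated α(M_h) and a halo mass function n(M_h), equation (10) is a short cumulative quadrature; a sketch with toy ingredients (the power-law α and the Schechter-like mass function are assumptions for illustration, not the simulation's tables):

```python
import numpy as np

# Toy ingredients: alpha ∝ M^1.2 and a Schechter-like mass function.
m = np.logspace(9, 13, 400)                          # M_sun
alpha = 1e-10 * (m / 1e10) ** 1.2                    # cross-section per halo
n = 1e-3 * (m / 1e12) ** -1.8 * np.exp(-m / 1e12)    # halos per unit mass

# Sigma(<M) = int_0^M alpha(M') n(M') dM', accumulated with the
# trapezoid rule over the tabulated grid.
integrand = alpha * n
dm = np.diff(m)
sigma_cum = np.concatenate(
    [[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dm)]
)
print(sigma_cum[-1])  # total (global) cross-section
```

The curve sigma_cum versus m is exactly what the top panel of Figure 10 displays for each model.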
The cw model produces comparably high cross-sections in large halos to vzw, but it does not suppress the low-mass halo cross-section; instead, its suppression occurs more at intermediate masses. In summary, the combination of both efficiently suppressing absorption at low masses and having high absorption at large masses relative to the no-wind case is what makes the momentum-driven wind model successful at matching the DLA kinematics. The constant wind case produces the latter requirement but less so the former, making it intermediate between the vzw and no-wind cases.

CONCLUSIONS

We examine the kinematic and physical properties of DLAs using cosmological hydrodynamic simulations including three different heuristic prescriptions for how galactic outflow properties are related to their host galaxies: a model without outflows (nw), a constant wind (cw) model where the wind speed of 484 km/s and mass loading factor of 2 are independent of galaxy properties, and a model with momentum-driven wind scalings (vzw) where the wind speed scales as the circular velocity and the mass loading scales inversely with it. Our no-wind model shows results consistent with previous studies of hierarchical models without strong outflows (e.g. Pontzen et al. 2008), and dramatically fails to explain the observed kinematics of DLAs, particularly their systemic velocity widths as traced by low-ionization metal lines. This contrasts with HSR98, who concluded that merging of protogalactic clumps (without winds) can explain the kinematics of DLAs; in our case, the high frequency of wide lines cannot be reproduced when such a model is placed in a cosmological context. Hence without non-gravitational motion, it appears that DLA kinematics are not reproducible within modern hierarchical structure formation models. We then explored DLA kinematics including outflows.
The central result of this paper is that our momentum-driven wind model provides a very good match to all observed DLA kinematic measures, with the possible exception of the 2-peak test that no model matches well. Meanwhile, a model with constant wind speeds and mass loading factors (as in Nagamine et al. 2007) is only marginally consistent with the data, and a no-wind case is in strong disagreement with the data, particularly in the distribution of DLA velocity widths (∆v) that have historically been difficult to reproduce in CDM-based models. This shows that galactic outflows can reconcile DLA kinematics within current galaxy formation models with observations. Moreover, DLA kinematics could potentially provide interesting constraints on the properties of galactic outflows at high redshifts. The abundances of DLAs may also provide interesting constraints. Our no-wind case could not reproduce the observed abundances without a very high and likely unphysical threshold density for self-shielding. Meanwhile, both our wind models can match the abundance of DLAs with reasonable choices for this parameter. The previous study of Gardner et al. (1997a) with no winds matched the observed abundance via an incorrect extrapolation of the DLA cross-section versus circular velocity relation from low resolution simulations. Our higher-resolution simulations show that such an extrapolation is not valid in the no-wind or (to a lesser extent) the constant wind case, since these models produce excess cross-section at low vc over such an extrapolation. Still, given the crudeness of our self-shielding criterion, it is difficult to assess the robustness of this result. We note that the sophisticated radiative transfer simulations of Pontzen et al. (2008), which include self-consistently driven outflows, are able to match DLA number densities (as well as metallicities), but they did not match the kinematics.
To understand why outflows help with DLA kinematics, we investigated the properties of DLA absorption as a function of galaxy and halo properties. A consistent story emerges that momentum-driven winds most efficiently expel gas into a more extended, clumpy distribution around galaxies, particularly for small galaxies living in dense environments. As a result, the DLA cross-section becomes weighted away from small halos (despite arising preferentially in small galaxies) towards more massive halos, where forming structures and outflow-induced motions can yield higher velocities. This results in substantial amounts of supergravitational motion that directly translates into wide DLA systems. HSR98 claimed agreement with observed DLA kinematics by noting that their individual DLA simulation produced a velocity width of 60% of the virial velocity v_vir, and then integrating over the halo mass function assuming that the DLA cross-section scales as v_vir³. In our comparable no-wind simulation, this latter scaling is appropriate for larger halos, but it becomes much shallower for smaller ones, yielding much more DLA absorption at small cross-sections. This results in poor agreement with the observed kinematics in the no-wind case. In another study, Maller et al. (2001) found that gaseous disks of standard sizes were unable to reproduce DLA kinematics, and that more extended gaseous distributions were required. While they mostly focused on testing extended disk models, they noted that outflows could potentially have a similar effect. Our momentum-driven wind case appears to produce just the required gas distribution. We note that disk sizes are generally too small without strong feedback and much higher numerical resolution than we have here, but the results of Maller et al. (2001) suggest that even optimally conserving angular momentum is not enough, and some supergravitational motion is required.
Finally, while outflows produce greater velocity widths, they also produce less cross-section overall. We hypothesize that such models will therefore produce more Lyman Limit Systems (LLS; 10^17 < N_HI < 10^20.3 cm⁻²); Davé et al. (2010) showed that winds do produce more strong Lyα forest systems (N_HI > 10^14 cm⁻²) at low z, but did not have sufficient statistics to explore LLS. CDM models have traditionally also had difficulty producing enough LLS (e.g. Hernquist et al. 1996), though the strength of the discrepancy has been difficult to quantify owing to uncertainties in measuring column densities in this regime. More broadly, we note that momentum-driven winds have enjoyed substantial success in reproducing a wide range of observations of early galaxies and intergalactic gas, particularly related to cosmic chemical evolution. On the other hand, this investigation of DLAs focuses solely on the neutral gas; metals are merely used as a tracer of dense H i. Combining these various results leads us to conclude that outflows must move not only metals but also substantial amounts of mass around the cosmos. It may be possible to construct a model that enriches the IGM by preferentially expelling highly enriched gas from galaxies which could still reproduce the metal-line kinematics, but the strong low-ionization metal absorption generally indicates a substantial neutral hydrogen component, if for nothing else than to shield it from being sent into higher ionization states. Furthermore, the high mass outflow rates (typically comparable to or greater than the mass forming into stars) suggest that a large amount of ISM gas must be entrained in these outflows, and hence the outflow metallicity cannot be substantially greater than that of the ambient ISM. Hence it is difficult to envision a scenario where the metals are being ejected highly preferentially compared to the mass.
The fact that a single outflow model where vast amounts of gas are expelled at the typical ISM metallicity reproduces all these observations within a cosmological context is quite remarkable. Nevertheless, the lack of possibly important physics in our current simulations precludes any definitive constraints on outflows from DLA kinematics. DLA kinematics have long been an oddity that did not seem to fit neatly into our current hierarchical view of galaxy formation. Our study suggests that the answer lies not in modifications to the hierarchical view, but rather in modifications to the detailed processes of how galaxies form. In a sense, this is a more exciting possibility, as it allows us to move towards constraining such processes from a completely new and different perspective. The work presented here paints a hopeful picture that DLA kinematics may fit into a broader understanding of galactic outflows. However, our current form of modeling outflows is relatively crude, and the present analysis did not address the wider range of DLA observations, such as metallicities and redshift evolution, that could provide interesting constraints. Furthermore, we did not include detailed radiative transfer to more accurately predict the distribution of neutral gas. Hence this work must be considered a preliminary step that merely highlights the importance of a new physical process in yielding DLA properties as observed; we plan to examine a more comprehensive suite of DLA properties in the future. This will help us obtain a more complete picture of how DLAs relate to the evolution of galaxies in a hierarchical universe.

Figure 2. The mass function of gravitationally bound groups found by SKID for the three models: no winds (nw, red solid), constant winds (cw, green dashed) and momentum-driven winds (vzw, blue dotted). We resolve galaxies down to a baryonic mass of 1.55 × 10^7 M_⊙, indicated by the vertical dotted line.

Figure 4.
The column density of neutral hydrogen (H i maps) for the no-wind model (nw80, top panel), the constant wind model (cw40, middle panel) and the momentum-driven wind model (vzw80, bottom panel). Every pixel over 2 × 10^20 cm⁻², i.e. damped absorption, is plotted in red. All lengths are comoving.

Figure 5. Distributions of the four statistics from PW97 (shaded histogram) compared with the nw80 (red), cw40 (green), and vzw80 (blue) models. A minimum velocity threshold of 30 km/s is assumed. Numbers in parentheses are K-S test values comparing each model with observations, summarised in Table 2.

Figure 7. The velocity width of a DLA, ∆v, versus the masses of the nearest galaxy (top panel), the distances from the nearest galaxy (second), and the mass (third) and circular velocity (bottom) of the host dark matter halo. Red points show the nw80 model, blue points the vzw80 model. Histograms along the y-axis show the distribution of DLAs along that axis, with a tick mark indicating the median value. In the bottom panel, the dotted line shows ∆v = vc; DLAs to the right of this have "supergravitational motion", which is more common in vzw compared to nw.

Figure 8. The probability distribution p(∆v/vc) for each wind model. The fractions showing supergravitational motion, i.e. ∆v/vc > 1, are 5%, 13%, and 22% for the nw model, the cw model, and the vzw model, respectively.

Figure 10. The cumulative global DLA cross-section (top) and the differential global DLA cross-section (bottom).

Table 1.

Name    ρ_θ (a)   dN/dz (b)   N_LOS (c)   N_20km/s (d)   N_30km/s
nw40    40        0.534       1995        883            404
nw80    80        0.443       1995        716            310
cw40    40        0.244       1995        1199           672
vzw40   40        0.379       1992        1306           973
vzw80   80        0.265       1988        1453           1175

(a) Self-shielding comoving density in units of the z = 0 critical density. (b) Number of DLAs per unit redshift. (c) Number of lines of sight. (d) Number of DLAs with ∆v > 20 km/s and satisfying eq. 9.

Figure 1.
Each panel contains 5 LOS; the locations (in our 8000 × 8000 grid) and H i column densities are indicatedFigure 1. Example low-ionisation metal line (i.e. Si ii) spectra drawn from our simulations. The metal optical depths are tied to H i as described in the text. Each colour shows a different LOS; solid and dotted lines show smoothed (19 km/s tophat) and unsmoothed versions, respectively.0.2 0.4 0.6 0.8 1 450 500 550 600 650 700 750 800 850 v [km/s] vzw (19,1951;20.36) (6962,2259;20.53) (13,1887;20.74) (7485,3522;20.77) (23,1869;21.05) 0.2 0.4 0.6 0.8 1 600 650 700 750 800 850 900 950 1000 cw (6823,2297;20.33) (4638,4881;20.57) (4598,7274;20.72) (7517,3501;20.91) (2668,6362;21.03) 0.2 0.4 0.6 0.8 1 200 250 300 350 400 450 500 550 600 nw (1114,1195;20.38) (2107,7900;20.46) (2075,7912;20.76) (1168,1348;20.84) (5726,2003;21.35) http://www-hpcc.astro.washington.edu/ tools/skid.html1 Spline Kernel Interpolative DENMAX; ( i ) iWe select DLAs and then generate Si ii absorption0.2 0.4 0.6 0.8 0 20 40 60 80 100 120 dN/dz ρ/ρ c vzw Observed Density 10000[K] 20000[K] 30000[K] 40000[K] 0.2 0.4 0.6 0.8 0 20 40 60 80 100 120 dN/dz ρ/ρ c cw Observed Density 10000[K] 20000[K] 30000[K] 40000[K] 0.2 0.4 0.6 0.8 0 20 40 60 80 100 120 dN/dz ρ/ρ c nw Observed Density 10000[K] 20000[K] 30000[K] 40000[K] Table 2 . 2K-S resultsModel vmin P∆v Pmm P edge P tpk nw40 20 < 10 −9 < 10 −5 < 10 −6 < 10 −3 nw80 20 < 10 −11 < 10 −7 < 10 −6 < 10 −3 cw40 20 < 10 −6 0.0036 < 10 −3 0.007 vzw40 20 0.18 0.70 0.58 0.14 vzw80 20 0.028 0.14 0.18 0.08 nw40 30 < 10 −4 0.69 0.19 0.005 nw80 30 < 10 −5 0.39 0.11 0.03 cw40 30 0.037 0.18 0.81 0.02 vzw40 30 0.51 0.80 0.92 0.04 vzw80 30 0.39 0.91 0.90 0.05 c 0000 RAS, MNRAS 000, 000-000 . K L Adelberger, A E Shapley, C C Steidel, M Pettini, D K Erb, N A Reddy, ApJ. 629636Adelberger, K. L., Shapley, A. E., Steidel, C. C., Pettini, M., Erb, D. K., Reddy, N. A. 2005, ApJ, 629, 636 . M L Balogh, F R Pearce, R G Bower, S T Kay, MNRAS. 3261228Balogh, M. L., Pearce, F. 
R., Bower, R. G., Kay, S. T. 2001, MNRAS, 326, 1228 . L A Barnes, Haehnelt, MNRAS. 397511Barnes, L. A. Haehnelt, M. G. 2009, MNRAS, 397, 511 . N Bouché, J P Gardner, N Katz, D H Weinberg, R Davé, J D Lowenthal, ApJ. 62889Bouché, N., Gardner, J. P., Katz, N., Weinberg, D. H., Davé, R., Lowenthal, J. D. 2005, ApJ, 628, 89 . R J Bouwens, G D Illingworth, M Franx, H Ford, ApJ. 670928Bouwens, R. J., Illingworth, G. D., Franx, M., Ford, H. 2007, ApJ, 670, 928 . D Ceverino, A Klypin, arXiv:0712.3285Ceverino, D. Klypin, A. 2008, arXiv:0712.3285 . J Cooke, A M Wolfe, E Gawiser, J X Prochaska, ApJ. 652994Cooke, J., Wolfe, A. M., Gawiser, E., Prochaska, J. X. 2006, ApJ, 652, 994 . E Corbelli, E E Salpeter, R Bandiera, ApJ. 55026Corbelli, E., Salpeter, E. E., Bandiera, R. 2001, ApJ, 550, 26 . M Davis, G Efstathiou, C S Frenk, S D M White, ApJ. 292371Davis M., Efstathiou G., Frenk C. S., White S. D. M., 1985, ApJ, 292, 371 . R Davé, R Cen, J P Ostriker, G L Bryan, L Hernquist, N Katz, D H Weinberg, M L Norman, B Shea, ApJ. 552473Davé, R., Cen, R., Ostriker, J. P., Bryan, G. L., Hernquist, L., Katz, N., Weinberg, D. H., Norman, M. L., O'Shea, B. 2001, ApJ, 552, 473 . R Davé, K Finlator, B D Oppenheimer, MNRAS. 370273Davé, R., Finlator, K., Oppenheimer, B. D. 2006, MNRAS, 370, 273 . R Davé, B D Oppenheimer, MNRAS. 374427Davé, R., Oppenheimer, B. D. 2007, MNRAS, 374, 427 . R Davé, B D Oppenheimer, S Sivanandam, MN-RASsubmittedDavé, R., Oppenheimer, B. D., Sivanandam, S. 2008, MN- RAS, submitted . R Davé, B D Oppenheimer, N Katz, J A Kollmeier, D H Weinberg, arXiv:1005.2421MNRAS, accepted. Davé, R., Oppenheimer, B. D., Katz, N., Kollmeier, J. A., Weinberg, D. H. 2010, MNRAS, accepted, arXiv:1005.2421 . Dalla Vecchia, C Schaye, J , MNRAS. 3871431Dalla Vecchia, C., Schaye, J. 2008, MNRAS, 387, 1431 . A Dekel, Nature. 457451Dekel, A. et al. 2009, Nature, 457, 451 . D J Eisenstein, W Hu, ApJ. 5115Eisenstein, D. J., Hu, W. 1999, ApJ, 511, 5 . 
D K Erb, A E Shapley, M Pettini, C C Steidel, N A Reddy, K L Adelberger, ApJ. 644813Erb, D. K., Shapley, A. E., Pettini, M., Steidel, C. C., Reddy, N. A., Adelberger, K. L. 2006, ApJ, 644, 813 . K Finlator, R Davé, C Papovich, L Hernquist, ApJ. 639672Finlator, K., Davé, R., Papovich, C.,Hernquist, L. 2006, ApJ, 639, 672 . K Finlator, R Davé, MNRAS. 3852181Finlator, K. & Davé, R. 2008, MNRAS, 385, 2181 . J P Gardner, N Katz, L Hernquist, D H Weinberg, Apj. 48431Gardner, J. P., Katz, N., Hernquist, L., Weinberg, D. H. 1997, Apj, 484, 31 . J P Gardner, N Katz, L Hernquist, D H Weinberg, ApJ. 559131Gardner, J.P., Katz,N., Hernquist, L., Weinberg,D.H. 2001, ApJ, 559, 131 . N Y Gnedin, ApJ. 542535Gnedin, N. Y. 2000, ApJ, 542, 535 . F Haardt, P Madau, D.M. Neumann, J.T.T. VanHaardt, F., Madau, P. 2001, in proc. XXXVIth Rencontres de Moriond, eds. D.M. Neumann, J.T.T. Van. . M G Haehnelt, M Steinmetz, M Rauch, ApJ. 495HSR98647Haehnelt, M.G., Steinmetz, M., Rauch, M. 1998, ApJ, 495, 647 (HSR98) . M G Haehnelt, M Steinmetz, M Rauch, ApJ. 534549Haehnelt, M.G., Steinmetz, M., Rauch, M. 2000, ApJ, 534, 549 . T M Heckman, M D Lehnert, D K Strickland, L Armus, ApJS. 129493Heckman, T. M., Lehnert, M. D., Strickland, D. K., Armus, L. 2000, ApJS, 129, 493 . L Hernquist, N Katz, D H Weinberg, J Miralda-Escudé, ApJL. 45751Hernquist, L., Katz, N., Weinberg, D.H., Miralda-Escudé, J. 1996, ApJL, 457, L51 . N Katz, D H Weinberg, L Hernquist, ApJS. 10519Katz, N., Weinberg, D. H., Hernquist, L. 1996, ApJS, 105, 19 . N Katz, D H Weinberg, L Hernquist, J Miralda-Escudé, ApJ. 45757Katz, N., Weinberg, D. H., Hernquist, L., Miralda-Escudé, J. 1996, ApJ, 457, L57 . G Kaufmann, S Charlot, ApJ. 43097Kaufmann, G., Charlot, S. 1994, ApJ, 430, L97 . R C Kennicutt, ApJ. 498541Kennicutt, R. C. 1998, ApJ, 498, 541 . D Kereš, N Katz, D H Weinberg, R Davé, 363MN-RASKereš, D., Katz, N., Weinberg, D. H., Davé, R. 2005, MN- RAS, 363, 2 . T Kitayama, Y Suto, ApJ. 469480Kitayama, T., Suto, Y. 
1996, ApJ, 469, 480 . E Komatsu, arXiv:0803.0547ApJS, submitted. Komatsu, E. et al. 2008, ApJS, submitted, arXiv:0803.0547 . K M Lanzetta, A M Wolfe, D A Turnshek, L Lu, R G Mcmahon, C Hazard, ApJS. 771Lanzetta, K. M., Wolfe, A. M., Turnshek, D. A., Lu, L., McMahon, R. G., Hazard, C. 1991, ApJS, 77, 1 . C L Martin, ApJ. 513156Martin, C. L. 1999, ApJ, 513, 156 . C L Martin, ApJ. 621227Martin, C. L. 2005, ApJ, 621, 227 . A H Maller, J X Prochaska, R S Somerville, J R Primack, MNRAS. 3261475Maller, A. H., Prochaska, J. X., Somerville, R. S., Primack, J. R. 2001, MNRAS, 326, 1475 . H J Mo, S Mao, S D M White, MNRAS. 295319Mo, H. J., Mao, S., White, S. D. M. 1998, MNRAS, 295, 319 . I Murakami, S Ikeuchi, PASJ. 4211Murakami, I., Ikeuchi, S. 1990, PASJ, 42, L11 . N Murray, E Quataert, T A Thompson, ApJ. 618569Murray, N., Quataert, E., Thompson, T. A. 2005, ApJ, 618, 569 . K Nagamine, V Springel, L Hernquiest, MNRAS. 348421Nagamine, K., Springel, V., Hernquiest, L. 2004, MNRAS, 348, 421 . K Nagamine, A M Wolfe, L Hernquist, V Springel, ApJ. 660945Nagamine, K., Wolfe, A. M., Hernquist, L., Springel, V. 2007, ApJ, 660, 945 . B D Oppenheimer, R Davé, MNRAS. 3731265Oppenheimer, B.D, Davé, R.2006, MNRAS, 373, 1265 . B D Oppenheimer, R Davé, MNRAS. 387587Oppenheimer, B.D, Davé, R.2008, MNRAS, 387, 587 . B D Oppenheimer, R Davé, MNRAS. 3951875Oppenheimer, B.D, Davé, R.2009, MNRAS, 395, 1875 . B D Oppenheimer, R Davé, D Kereš, N Katz, J A Kollmeier, D H Weinberg, MNRAS. 860in pressOppenheimer, B. D., Davé, R., Kereš, D., Katz, N., Kollmeier, J. A., Weinberg, D. H. 2010, MNRAS, 860, in press . P Petitjean, J Bergeron, J L Puget, A&A. 265375Petitjean, P., Bergeron, J., Puget, J. L. 1992, A&A, 265, 375 . M Pettini, A E Shapley, C C Steidel, J.-G Cuby, M Dickinson, A F M Moorwood, K L Adelberger, M Giavalisco, ApJ. 554981Pettini, M., Shapley, A. E., Steidel, C. C., Cuby, J.-G., Dickinson, M., Moorwood, A. F. M., Adelberger, K. L., Giavalisco, M. 2001, ApJ, 554, 981 . A Pontzen, MNRAS. 
3901394Pontzen, A. et al. 2008, MNRAS, 390, 1394 . A Popping, R Davé, R Braun, B D Oppenheimer, A&A. 50415Popping, A., Davé, R., Braun, R., Oppenheimer, B. D. 2009, A&A, 504, 15 . J X Prochaska, A M Wolfe, J X Prochaska, A M Wolfe, ApJ. 487PW9733ApJProchaska, J.X., Wolfe, A.M. 1997, ApJ, 487, 73 (PW97) Prochaska, J.X., Wolfe, A.M. 2001, ApJ, 560, L33 . J X Prochaska, S Herbert-Fort, A M Wolfe, ApJ. 635123Prochaska, J.X., Herbert-Fort, S., Wolfe, A. M. 2005, ApJ, 635, 123 . S M Rao, D A Turnshek, D B Nestor, ApJ. 636610Rao, S. M., Turnshek, D. A., Nestor, D. B. 2006, ApJ, 636, 610 . A O Razoumov, M L Norman, J X Prochaska, A M Wolfe, ApJ. 64555Razoumov, A. O., Norman, M. L., Prochaska, J. X., Wolfe, A. M. 2006, ApJ, 645, 55 . A O Razoumov, M L Norman, J X Prochaska, J Sommer-Larsen, A M Wolfe, Y.-J Yang, arXiv:0710.4137ApJ, accepted. Razoumov, A. O., Norman, M. L., Prochaska, J. X., Sommer-Larsen, J., Wolfe, A. M., Yang, Y.-J. 2008, ApJ, accepted, arXiv:0710.4137 . D S Rupke, S Veillaux, D B Sanders, ApJs. 16087Rupke, D. S., Veillaux, S., Sanders, D. B. 2005, ApJs, 160, 87 . D Schaerer, A&A. 397527Schaerer, D. 2003, A&A, 397, 527 . A E Shapley, C C Steidel, D K Erb, N A Reddy, K L Adelberger, M Pettini, P Barmby, J Huang, ApJ. 626698Shapley, A. E., Steidel, C. C., Erb, D. K., Reddy, N. A., Adelberger, K. L., Pettini, M., Barmby, P., Huang, J. 2005, ApJ, 626, 698 . D N Spergel, ApJS. 148175D. N. Spergel et al., ApJS, 148, 175 . D N Spergel, astro-ph/0603449Spergel, D. N., et al. 2006, astro-ph/0603449 . V Springel, L Hernquist, MNRAS. 339289Springel, V., Hernquist, L. 2003, MNRAS, 339, 289 . V Springel, L Hernquist, V Springel, MNRAS. 339SH031105MNRASSpringel, V., Hernquist, L. 2003, MNRAS, 339, 312 (SH03) Springel, V. 2005, MNRAS, 364, 1105 . C C Steidel, A E Shapley, M Pettini, K L Adelberger, D K Erb, N A Reddy, M P Hunt, ApJ. 604534Steidel, C. C., Shapley, A. E., Pettini, M., Adelberger, K. L., Erb, D. K., Reddy, N. A., Hunt, M. P. 2004, ApJ, 604, 534 . 
C C Steidel, D K Erb, A E Shapley, M Pettini, Reddy, M Bogosavljevic, G C Rudie, O Rakic, ApJ. 717289Steidel, C. C., Erb, D. K., Shapley, A. E., Pettini, M., Reddy, Bogosavljevic, M., Rudie, G. C., Rakic, O. 2010, ApJ, 717, 289 . M Steinmetz, MNRAS. 2781005Steinmetz, M. 1996, MNRAS, 278, 1005 . G Stinson, A Seth, N Katz, J Wadsley, F Governato, T Quinn, MNRAS. 3731074Stinson, G., Seth, A., Katz, N., Wadsley, J., Governato, F., Quinn, T. 2006, MNRAS, 373, 1074 . L J Storrie-Lombardi, A M Wolfe, ApJ. 543552Storrie-Lombardi,L.J., Wolfe, A. M. 2000, ApJ, 543, 552 . R S Sutherland, M A Dopita, ApJS. 88253Sutherland, R. S., Dopita, M. A. 1993, ApJS, 88, 253 . E Tescari, M Viel, L Tornatore, S Borgani, 397411MN-RASTescari, E., Viel, M., Tornatore, L., Borgani, S. 2009, MN- RAS, 397, 411 . T M Tripp, L Lu, B D Savage, ApJS. 102239Tripp, T. M., Lu, L., Savage, B. D. 1996, ApJS, 102, 239 . B J Weiner, ApJ. 692187Weiner, B. J. et al. 2009, ApJ, 692, 187 . A M Wolfe, D A Turnshek, H E Smith, R D Cohen, ApJS. 61249Wolfe,A.M., Turnshek,D.A., Smith, H.E., Cohen, R.D. 1986, ApJS, 61, 249 . A M Wolfe, K M Lanzetta, C B Foltz, F H Chaffee, ApJ. 454698Wolfe,A.M., Lanzetta, K.M., Foltz,C.B., Chaffee, F.H. 1995, ApJ, 454, 698 . A M Wolfe, J C Howk, E Gawiser, J X Prochaska, S Lopez, ApJ. 615625Wolfe, A. M., Howk, J. C., Gawiser, E., Prochaska, J. X., Lopez, S. 2004, ApJ, 615, 625 . A M Wolfe, E Gawiser, J X Prochaska, ARA&A. 43861Wolfe,A.M., Gawiser,E., Prochaska,J.X. 2005, ARA&A, 43, 861 . A M Wolfe, H S Chen, ApJ. 652981Wolfe, A.M., Chen, H.S. 2006, ApJ, 652, 981 . A M Wolfe, J X Prochaska, R A Jorgensen, M Rafelski, ApJ. 681881Wolfe, A. M., Prochaska, J. X., Jorgensen, R. A., Rafelski, M. 2008, ApJ, 681, 881 . M Zwaan, J M Van Der Hulst, F H Briggs, M A W Verheijen, E V Ryan-Weber, MNRAS. 3641467Zwaan, M., van der Hulst, J. M., Briggs, F. H., Verheijen, M. A. W., Ryan-Weber, E. V. 2005, MNRAS, 364, 1467 . M Zwaan, F Walter, E Ryan-Weber, E Brinks, W J G De Blok, R C Kennicutt, AJ. 
1362886Zwaan, M., Walter, F., Ryan-Weber, E., Brinks, E., de Blok, W. J. G., Kennicutt, R. C. 2008, AJ, 136, 2886
[]
[ "Consistent Asymptotic Expansion of Mott's Solution for Oxide Growth", "Consistent Asymptotic Expansion of Mott's Solution for Oxide Growth" ]
[ "Matthew R Sears \nDepartment of Physics\nTexas A&M University\n77840-4242College Station, Texas\n", "Wayne M Saslow \nDepartment of Physics\nTexas A&M University\n77840-4242College Station, Texas\n" ]
[ "Department of Physics\nTexas A&M University\n77840-4242College Station, Texas", "Department of Physics\nTexas A&M University\n77840-4242College Station, Texas" ]
[]
Many relatively thick metal oxide films grow according to what is called the parabolic law, L = √(2At) + … . Mott explained this for monovalent carriers by assuming that monovalent ions and electrons are the bulk charge carriers, and that their number fluxes vary as t^{-1/2} at sufficiently long t. In this theory no charge is present in the bulk, and surface charges were not discussed. However, it can be analyzed in terms of a discharging capacitor, with the oxide surfaces as the plates. The theory is inconsistent because the field decreases, corresponding to discharge, but there is no net current to cause discharge. The present work, which also includes non-monovalent carriers, systematically extends the theory and obtains the discharge current. Because the Planck-Nernst equations are nonlinear (although Gauss's Law and the continuity equations are linear) this leads to a systematic order-by-order expansion in powers of t^{-1/2} for the number currents, concentrations, and electric field during oxide growth. At higher order the bulk develops a non-zero charge density, with a corresponding non-uniform net current, and there are corrections to the electric field and the ion currents. The second order correction to the ion current implies a logarithmic term in the thickness of the oxide layer: L = √(2At) + B ln t + … . It would be of interest to verify this result with high-precision measurements.
10.1016/j.ssi.2010.06.034
[ "https://arxiv.org/pdf/1006.3819v1.pdf" ]
96,162,939
1006.3819
005bfdfe707d1e21236e7188f9a90ba66a5764d5
Consistent Asymptotic Expansion of Mott's Solution for Oxide Growth

Matthew R. Sears and Wayne M. Saslow
Department of Physics, Texas A&M University, College Station, Texas 77840-4242

I. INTRODUCTION

From the late 1930s to the late 1940s, N. F. Mott considered [1-3] the implications of experimental results [4] for oxide growth. (For more of a review, see Ref. 5.)
Under certain conditions (particularly high temperature), many metals develop [4] a layer of oxide at a parabolic rate on surfaces exposed to gas containing oxygen, according to

L^2 = 2At,   (1)

where L is the thickness of the oxide, t is time, and A is a constant. The rate of growth is thus

dL/dt = √(A/2) t^{-1/2}.   (2)

This result may be thought of as representing the long-time asymptotic limit. For specificity, we assume that the metal M fills x < 0, that oxide MO fills 0 < x < L, and that oxygen gas O fills L < x. This means we employ a moving coordinate system where x = 0 represents the M/MO interface, and x = L(t) represents the MO/O interface. A field and fluxes that vary as t^{-1/2} are expected on the basis of a gradient of concentration, with the values of the carrier concentrations pinned by the two surfaces and the length L determining the gradient [4]. That is, dL/dt ∼ 1/L gives a parabolic law. Wagner obtained a parabolic growth law using the Planck-Nernst equations and some additional assumptions [6]. Mott obtained a parabolic growth law using a more complete argument [2] that invokes the Planck-Nernst equations, Gauss's Law, and (implicitly) the continuity equations. In this case one can think of the electrochemical potentials as pinned by the two surfaces, with L determining the gradient, which then yields the parabolic law. For electron and ion number currents (j_a, j_b) and ion valence Z = 1, Mott assumed that the total current J = e(j_b − j_a) in the oxide is zero, so

j_a = j_b.   (3)

Since oxide grows when metal ions reach the oxide/gas interface, the growth rate is

dL/dt = j_b Ω,   (4)

where Ω is the volume per metal ion in the newly formed oxide. Comparison with (2) immediately shows that

j_b ∼ t^{-1/2}   (5)

for the asymptotic behavior of the ion fluxes. By the Planck-Nernst equations, the electric field E and the ion density gradients (∂_x n_a, ∂_x n_b) also have the same behavior.
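As a quick numerical sanity check on Eqs. (1)-(2), the sketch below differentiates the parabolic law numerically; the value of A is an arbitrary illustrative choice, not one taken from any experiment.

```python
import math

A = 2.5e-3  # hypothetical growth constant, arbitrary units (illustrative only)

def L(t):
    """Parabolic law, Eq. (1): L = sqrt(2*A*t)."""
    return math.sqrt(2.0 * A * t)

# Central-difference derivative of L(t) should match Eq. (2): dL/dt = sqrt(A/2) * t**-0.5
t, h = 100.0, 1e-4
dLdt_numeric = (L(t + h) - L(t - h)) / (2.0 * h)
dLdt_exact = math.sqrt(A / 2.0) * t ** -0.5

print(dLdt_numeric, dLdt_exact)
```

The two printed values agree to many digits, confirming that Eq. (2) is just the time derivative of Eq. (1).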
Moreover, the quantities (j_a, j_b, E, ∂_x n_a, ∂_x n_b) are all uniform throughout the oxide. We assume that the metal and the gas are neutral, so by Gauss's Law the surface charges (Σ(0), Σ(L)) and the electric fields (E(0), E(L)) are related by

E(0) = Σ(0)/ε,   E(L) = −Σ(L)/ε.   (6)

Moreover, by continuity the assumption that there is charge and current only within the oxide leads to the conditions

J(0) = dΣ(0)/dt,   J(L) = −dΣ(L)/dt.   (7)

This model immediately poses the questions of whether Mott's solution is self-consistent, and whether it is the beginning of an asymptotic series in powers of t^{-1/2}. To answer self-consistency, note that the uniform but decreasing-with-time E ∼ t^{-1/2} leads to no bulk charge and to interfaces with equal and opposite charge, so they behave like a capacitor. Since E decreases with time, so must the charge on the capacitor. However, the model assumes zero current. Hence Mott's solution is not self-consistent. Nevertheless we will show, in response to the question about asymptopia, that each of the continuous variables can be expanded in an asymptotic series in t^{-n/2}, where Mott's solution corresponds to n = 1, and that a non-zero current J appears at order n = 3. We will also show that all of the continuous variables can depend upon position. This means that the bulk can develop a local and total charge density, with the surfaces not having equal and opposite charges, so that the capacitor model holds only to lowest order. It is already known that steady nonequilibrium current flow can cause local charge densities in the bulk [7,8]. The fact that an expansion can be made of j_b in powers of t^{-1/2} leads to

dL/dt = √(A/2) t^{-1/2} + B t^{-1} + …,   (8)

so that

L = √(2At) + B ln t + …   (9)

to higher accuracy than given by the pure parabolic law. This prediction can be subjected to experimental study. In practice, we must assume that

L = √(2At) + B ln t + C + …   (10)

because ln t is of order unity.
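The self-consistency problem is easy to see numerically: with a uniform field E = E_1 t^{-1/2}, the surface charge Σ(0) = εE of Eq. (6) must decay, so Eq. (7) demands a nonzero current even though the lowest-order solution has J = 0 everywhere. A sketch, with arbitrary illustrative values for ε and E_1:

```python
eps = 8.854e-11   # permittivity of the oxide, F/m (illustrative value)
E1 = 1.0e3        # coefficient of E = E1 * t**-0.5 (illustrative value)

def sigma(t):
    """Surface charge Sigma(0) = eps * E, with E = E1 * t**-0.5 (Eq. (6))."""
    return eps * E1 * t ** -0.5

# dSigma(0)/dt by central difference: this is the current J(0) that Eq. (7)
# requires, yet the lowest-order (Mott) solution sets J = 0 everywhere.
t, h = 100.0, 1e-3
J0 = (sigma(t + h) - sigma(t - h)) / (2.0 * h)
print(J0)   # negative: the "capacitor" is discharging
```

The numerically obtained J0 matches the analytic result dΣ(0)/dt = −(1/2) ε E_1 t^{-3/2}, which is nonzero for all finite t.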
We have not found data of high enough precision to verify this ln t correction. Section II gives the five fundamental equations for the five continuous variables (two continuity equations, Gauss's Law, and two Planck-Nernst equations), and indicates the expansion in powers of t^{-1/2}. (The fact that it is such an expansion relates to the leading order term and the fact that the Planck-Nernst equation is a nonlinear function of the density.) Section IV gives the n = 1 solution, which has two interface-reaction-rate-determined integration constants: both A and a constant-in-space density not included in the Mott solution. Section V presents the results of the n = 2 solution that gives the first correction to Mott's solution, and shows that there is a non-zero bulk charge density and a corresponding spatial variation of the field. There are four interface-reaction-rate-determined integration constants for n = 2. All higher-order solutions involve four interface-reaction-rate-determined integration constants. Section VI considers the oxide thickness growth rate. Section VII provides a summary and conclusions. The Appendices explicitly give the n = 2 and n = 3 solutions.

II. ON TWO-COMPONENT TRANSPORT

Let the subscripts a and b denote electrons and metal ions respectively. Let n_0 and n_0/Z be the uniform equilibrium concentrations of electrons and metal ions respectively, (n_a, n_b) be their additional concentrations, (ν_a, ν_b) be their mobilities, (D_a, D_b) be their diffusion coefficients, (q_a = −e, q_b = Ze) be their charges, E be the electric field, k_B be the Boltzmann constant, T be the temperature, and x be the position. Note that we use the Einstein Relations to rewrite the mobilities, which can have either sign, in terms of diffusion constants, which are always positive. Since we always consider Z positive, if we want to consider oxygen ions and holes as the carriers, then only the sign of the electric charge e must be changed.
For M^{3+} and O^{2−}, we let e → 2e and Z = 3/2. Therefore our results are quite general.

A. Equations for Two-Component Transport

We will employ the Einstein Relations

ν_a/D_a = q_a/(k_B T),   ν_b/D_b = q_b/(k_B T).   (11)

For electrons and metal ions, −Z q_a = q_b, so

ν_b D_a = −Z ν_a D_b,   −ν_a/D_a = ν_b/(Z D_b) = 1/V_T,   (12)

where

V_T = k_B T/e   (13)

denotes a thermal voltage. Mott and Cabrera's paper [5] used (v_a, v_b) to denote electron and ion mobilities, which they may have intended to be (ν_a, ν_b). The one-dimensional Planck-Nernst equations for the number flux densities associated with metal ions and electrons are

j_a = ν_a (n_0 + n_a) E − D_a ∂_x n_a,   (14)
j_b = ν_b (n_0/Z + n_b) E − D_b ∂_x n_b.   (15)

Rewriting mobilities in terms of diffusion constants using (11),

j_a = −(D_a/V_T)(n_0 + n_a) E − D_a ∂_x n_a,   (16)
j_b = (D_b/V_T)(n_0 + Z n_b) E − D_b ∂_x n_b.   (17)

We also use the number continuity equations,

∂_t n_a + ∂_x j_a = 0,   ∂_t n_b + ∂_x j_b = 0,   (18)

and Gauss's Law,

∂_x E = (e/ε)(Z n_b − n_a),   (19)

where e is the electron charge in Coulombs, and ε is the permittivity of the oxide. With the charge density and current density defined by

ρ = −e(n_a − Z n_b),   J = −e(j_a − Z j_b),   (20)

use of the number continuity equations yields the charge continuity equation

∂_t ρ + ∂_x J = 0.   (21)

B. Expansion Notation

We seek a series solution in powers of t^{-1/2} for the electron and ion concentrations (densities) and fluxes, as well as for the electric field. They must satisfy the Planck-Nernst equations, the continuity equations, and Gauss's Law. Following Mott, we take the lowest order fluxes and field to vary as t^{-1/2}. Examining the structure of the Planck-Nernst equation for electrons (16), and inserting terms of order t^{-1/2}, the nonlinear term n_a E will contain terms of order t^{-1}. Iteration yields that the series must be in powers of t^{-n/2} for integer n.
We thus make an expansion of the form

j_a = Σ_{n≥1} J_{an} t^{-n/2},   j_b = Σ_{n≥1} J_{bn} t^{-n/2},   (22)
n_a = Σ_{n≥1} N_{an} t^{-n/2},   n_b = Σ_{n≥1} N_{bn} t^{-n/2},   (23)
Σ^{(0)} = Σ_{n≥1} Σ^{(0)}_n t^{-n/2},   Σ^{(L)} = Σ_{n≥1} Σ^{(L)}_n t^{-n/2},   (24)
E = Σ_{n≥1} E_n t^{-n/2},   (25)
J = Σ_{n≥1} J_n t^{-n/2},   ρ = Σ_{n≥1} ρ_n t^{-n/2}.   (26)

Here J_{an}, J_{bn}, N_{an}, N_{bn}, E_n, ρ_n, and J_n are functions of the position along the direction of growth, x. From the above definitions, the dimensionality of (J_{an}, J_{bn}) is concentration times velocity times t^{n/2}, the dimensionality of (N_{an}, N_{bn}) is concentration times t^{n/2}, the dimensionality of the surface charge densities (Σ^{(0)}_n, Σ^{(L)}_n) is charge per area times t^{n/2}, the dimensionality of E_n is electric field times t^{n/2}, the dimensionality of ρ_n is charge density times t^{n/2}, and the dimensionality of J_n is current density times t^{n/2}.

C. On Specifying Chemical Reaction Rates at Surfaces

In the presence of a true chemical reaction at a surface there is a single reaction rate, typically specified by a Butler-Volmer relation [9-11], between the fluxes of all of the relevant components. In the present case the fluxes of the carriers are independent of one another, so that there are two statements about carrier fluxes at each surface, for a total of four conditions. With μ̄ denoting an electrochemical potential, near equilibrium (as we have here, in the asymptotic regime) each flux j will be proportional to its corresponding Δμ̄ across the interface (either M/MO or MO/O). The proportionality constant will depend on details of the reaction and the baseline properties of the system. That is,

j_{a,b} = G_{a,b} Δμ̄_{a,b}   (27)

at each surface, so there are four G's. The equation becomes nonlinear far from equilibrium. Thus the j_{a,b} are proportional to a non-equilibrium quantity, which we take to be a field E_1, as in Ref. 5. All of the unknown integration constants will be linear or higher in E_1. We will not attempt to carry this procedure any further.
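The transport equations of Sec. II.A are easy to exercise numerically. The sketch below evaluates the thermal voltage of Eq. (13) and the two Planck-Nernst fluxes of Eqs. (16)-(17) on a small grid with central differences; every parameter value here is an arbitrary illustrative choice, not a value from this paper.

```python
k_B, e = 1.380649e-23, 1.602176634e-19   # Boltzmann constant (J/K), elementary charge (C)
T = 1000.0                                # temperature, K (illustrative)
V_T = k_B * T / e                         # thermal voltage, Eq. (13); ~86 mV at 1000 K

Z = 1.0
D_a, D_b = 1e-8, 1e-12   # electron / ion diffusion constants, m^2/s (illustrative)
n0 = 1e24                # equilibrium electron concentration, m^-3 (illustrative)

def fluxes(x, n_a, n_b, E):
    """Planck-Nernst fluxes j_a, j_b of Eqs. (16)-(17); gradients by central differences."""
    dx = x[1] - x[0]
    j_a, j_b = [], []
    for i in range(1, len(x) - 1):
        dna = (n_a[i + 1] - n_a[i - 1]) / (2 * dx)
        dnb = (n_b[i + 1] - n_b[i - 1]) / (2 * dx)
        j_a.append(-(D_a / V_T) * (n0 + n_a[i]) * E[i] - D_a * dna)
        j_b.append((D_b / V_T) * (n0 + Z * n_b[i]) * E[i] - D_b * dnb)
    return j_a, j_b

# Uniform concentrations and field: the gradient terms vanish exactly,
# so the fluxes reduce to the pure drift terms.
x = [i * 1e-9 for i in range(11)]
n_a = [1e20] * 11
n_b = [1e20] * 11
E = [1e5] * 11
j_a, j_b = fluxes(x, n_a, n_b, E)
print(j_a[0], j_b[0])
```

With the sign conventions of Eqs. (16)-(17), a positive field drives the (negative) electrons one way and the (positive) ions the other, as the printed signs confirm.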
It is sufficient for our purposes to know that this can be done, and that in the present problem there are four constants associated with boundary conditions at the two surfaces for the two carriers. In principle, all of the quantities appearing in the solutions to the transport equations are determined by these surface reaction rates. For a true chemical reaction, which we expect to be described by a Butler-Volmer equation, the fluxes at each surface, because they are related, will be described by only a single independent coefficient G. Note also that the Butler-Volmer equation is non-linear, so that the boundary conditions can be nonlinear. Because we do not consider the boundary conditions in detail, we will neglect this possibility.

III. RELATIONS BETWEEN EXPANSION COEFFICIENTS: ALL n

A. Continuity Relations and Charge Conservation

The continuity equations imply charge conservation, so we treat them in the same subsection. The continuity equations (18) yield

Σ_{n≥1} (∂_x J_{an}) t^{-n/2} + Σ_{m≥1} N_{am} (−m/2) t^{-(m+2)/2} = 0   (28)

for subscript a, and a similar relation holds for b.
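The power matching that produces Eq. (28) and its relatives is ordinary series bookkeeping in the variable s = t^{-1/2}. A small sketch (coefficients invented purely for illustration) shows how the product of two quantities that each start at order s, such as the nonlinear term n_a E, first contributes at order s^2 = t^{-1}:

```python
def series_mul(a, b):
    """Multiply two power series in s = t**-0.5, each given as a coefficient
    list [c1, c2, ...] meaning c1*s + c2*s**2 + ...; same representation out."""
    out = [0.0] * (len(a) + len(b))
    for i, ca in enumerate(a, start=1):
        for j, cb in enumerate(b, start=1):
            out[i + j - 1] += ca * cb   # s**i * s**j = s**(i+j)
    return out

# Both n_a and E start at order s (i.e. t**-1/2); the coefficients are arbitrary.
n_a = [2.0, 0.5]   # 2 s + 0.5 s^2
E   = [3.0, 1.0]   # 3 s + 1.0 s^2
prod = series_mul(n_a, E)
print(prod)   # no s term: the product starts at s**2 = t**-1
```

This is the mechanism that forces the expansion to run over all integer powers of t^{-1/2} rather than only the leading one.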
With m = (n − 2), so that Σ_{m≥1} → Σ_{n≥3}, comparison of like powers of t yields, for n = 1 and n = 2,

∂_x J_{an} = 0,   ∂_x J_{bn} = 0,   (n = 1, 2),   (29)

and, for n ≥ 3,

∂_x J_{an} = [(n − 2)/2] N_{a(n−2)},   ∂_x J_{bn} = [(n − 2)/2] N_{b(n−2)},   (n ≥ 3).   (30)

By definition we have

ρ_n = e(Z N_{bn} − N_{an}),   J_n = e(Z J_{bn} − J_{an}),   (31)

and charge conservation for each n is

∂_t ρ_n + ∂_x J_n = 0.   (32)

Charge conservation at each surface yields

dΣ^{(0)}/dt = −J|_{x=0} = −e(Z j_b − j_a)|_{x=0},   dΣ^{(L)}/dt = J|_{x=L} = e(Z j_b − j_a)|_{x=L},   (33)

so

Σ_{n≥1} (−n/2) Σ^{(0)}_n t^{-(n+2)/2} = −e Σ_{n≥1} (Z J_{bn} − J_{an})|_{x=0} t^{-n/2},   (34)
Σ_{n≥1} (−n/2) Σ^{(L)}_n t^{-(n+2)/2} = e Σ_{n≥1} (Z J_{bn} − J_{an})|_{x=L} t^{-n/2}.   (35)

Comparing powers of t we have, for n = 1 and n = 2,

Z J_{bn}|_{x=0} = J_{an}|_{x=0},   Z J_{bn}|_{x=L} = J_{an}|_{x=L},   (n = 1, 2),   (36)

and, for n ≥ 3,

e(Z J_{bn} − J_{an})|_{x=0} = [(n − 2)/2] Σ^{(0)}_{n−2},   e(Z J_{bn} − J_{an})|_{x=L} = −[(n − 2)/2] Σ^{(L)}_{n−2},   (n ≥ 3).   (37)

Overall neutrality of the system requires

Σ_{n≥1} (Σ^{(0)}_n + Σ^{(L)}_n) t^{-n/2} = e ∫_0^L Σ_{n≥1} (N_{an} − Z N_{bn}) t^{-n/2} dx,   (38)

so that, for each n,

Σ^{(0)}_n + Σ^{(L)}_n = e ∫_0^L (N_{an} − Z N_{bn}) dx.   (39)

B. Gauss's Law

Gauss's Law (19) reads

Σ_{n≥1} (∂_x E_n) t^{-n/2} = (1/ε) Σ_{n≥1} ρ_n t^{-n/2},   (40)

so, for each n,

∂_x E_n = ρ_n/ε.   (41)

Gauss's Law at the surfaces (6) gives, for each n,

E_n(0) = Σ^{(0)}_n/ε,   E_n(L) = −Σ^{(L)}_n/ε.   (42)

C. Planck-Nernst

The Planck-Nernst equation for species a, (16), can be written as

Σ_{n≥1} J_{an} t^{-n/2} = −(D_a/V_T) n_0 Σ_{n≥1} E_n t^{-n/2} − (D_a/V_T) Σ_{m,n≥1} N_{an} E_m t^{-(m+n)/2} − D_a Σ_{n≥1} (∂_x N_{an}) t^{-n/2},   (43)

or,

Σ_{n≥1} [J_{an} + (D_a/V_T) n_0 E_n + D_a ∂_x N_{an}] t^{-n/2} = −(D_a/V_T) Σ_{m,n≥1} N_{an} E_m t^{-(m+n)/2},   (44)

with a similar form for species b,

Σ_{n≥1} [J_{bn} − (D_b/V_T) n_0 E_n + D_b ∂_x N_{bn}] t^{-n/2} = (Z D_b/V_T) Σ_{m,n≥1} N_{bn} E_m t^{-(m+n)/2}.   (45)

By matching coefficients of powers of t, we obtain the resultant equations for any n.

D. Solving the Transport Equations

There are five first-order differential equations for five continuous variables, so there are five integration constants at each order. For n ≥ 3 the current density J_n at x = L is known from the order-(n − 2) surface charge density Σ^{(L)}_{n−2}.
Therefore only four integration constants need be determined. These can be thought of as fixed by the "reaction rates" of each charge carrier at each of the interfaces, which will not be specified. The cases n = 1 and n = 2 are a bit simpler than n ≥ 3. Nevertheless, for each n our strategy will be the same: (1) from the continuity equations find the ion fluxes J_{an} and J_{bn}; (2) from all five equations find an equation for ρ_n and solve it; (3) use this ρ_n in Gauss's Law to find E_n; (4) find N_{an} and N_{bn} by substitution of J_{an}, J_{bn} and E_n into the Planck-Nernst equations.

IV. SOLUTION FOR n = 1

For n = 1, the calculations are simple, but illustrate what happens in higher orders. The continuity equations (29) give J_{a1} and J_{b1} to be uniform, and charge conservation at each surface (36) gives

J_{a1} = Z J_{b1}.   (46)

The Planck-Nernst equations (44) and (45) yield

J_{a1} + (D_a/V_T) n_0 E_1 + D_a ∂_x N_{a1} = 0,   (47)
J_{b1} − (D_b/V_T) n_0 E_1 + D_b ∂_x N_{b1} = 0,   (48)

where we take E_1 as a uniform, experimentally determined value. From the uniformity of E_1,

∂_x E_1 = 0,   (49)

so that Gauss's Law yields

ρ_1 = 0,   N_{a1} = Z N_{b1}.   (50)
Figure 1 illustrates the effect of changing the value of M 1 . If M 1 were very large, the high concentration of metal ions near the metal/oxide surface would oppose new ions from entering, whereas the high concentration of metal ions near the oxide/gas interface would encourage more ions to be deposited on the oxide/gas surface. Eventually, the number of metal ions in the bulk would be insufficient to maintain the high rate of ions exiting the oxide, and would drop to some equilibrium value. Thus, M 1 is determined by constraining the oxide to have no net ion-loading or ion-unloading in the bulk at order n = 1. Note that E 1 , which is proportional to (the parabolic growth rate coefficient), is also related to the surface reaction rates. In general, net surface reaction rates involve a Butler-Volmer equation, but not far from equilibrium (as in the Mott solution) they can be linearized in the differences of various electrochemical potentials. This will ensure that there is no net surface reaction rate in the limit of equilibrium. From Gauss's Law at the surfaces (42), E 1 (0) = Σ (0) 1 , E 1 (L) = − Σ (L) 1 .(56) Since E 1 is uniform, Σ (0) 1 = −Σ (L) 1 = E 1 .(57) For |ν a | >> |ν b | (or equivalently in this case, D a >> D b ), Mott and Cabrera 5 find for monovalent ions that J a1 = −2D b ∂N a1 ∂x ;(58) our results can be shown to be consistent with this. V. SOLUTION FOR n = 2 We here summarize the n = 2 results. (For the explicit solution, see Appendix A.) Recall that all coefficients must be multiplied by t to find the physical variables (j, n, E). 
With the constant M 21 (in units s/m 2 ) determined by surface reaction rates (and thus linear in E 1 ), the second order flux density coefficients are J a2 = ZJ b2 = −(1 + Z) D a D b ZD b + D a M 21 ,(59) and there is no net charge flux, J 2 = ZeJ b2 − eJ a2 = 0.(60) With the constants M 20 , P N b2 = M 20 Z − 1 Z H E 2 1 V T e + M 21 Z x − P (+) a2 e x/ls − P (−) a2 e −x/ls .(62) Here, l s is the screening length, l s = V T (1 + Z)n 0 e = k B T (1 + Z)n 0 e 2 .(63) There is a net charge in the bulk, given by ρ 2 t −1 , where ρ 2 = P (+) 2 e x/ls + P (−) 2 e −x/ls − H E 2 1 V T ,(64) and P (+) 2 = −(1 + Z)eP (+) a2 , P (−) 2 = −(1 + Z)eP (−) a2 . (65) Note that ρ 2 has, in addition to surface charge within a screening length of the two surfaces, a uniform charge density with sign determined by e/(D a − D b ) and independent of the sign of E 1 (or, equivalently, the direction of current flow). Since D a D b here, the term is positive. For holes and oxygen ions the carriers, we have −e/(D a − D b ), but D b D a , so it is again positive. As found in previous work, 7 this uniform charge density leads to a quadratic voltage profile within the bulk, not within a screening length of either surface. The second order coefficient of the electric field is E 2 = l s P (+) 2 e x/ls − P (−) 2 e −x/ls − HE 2 1 V T x + V T M 21 n 0 H − M 1 E 1 n 0 .(66) The surface charge coefficients are given by Σ (0) 2 = l s P (+) 2 − P (−) 2 + V T M 21 n 0 H − M 1 E 1 n 0 ,(67)HE 2 1 V T L − V T M 21 n 0 H + M 1 E 1 n 0 .(68) We have verified that Σ (0) 2 + Σ (L) 2 + L 0 ρ 2 dx = 0,(69) so there is no net charge in the system. VI. RATE OF GROWTH OF OXIDE LAYER The oxide layer grows as metal ions reach the MO/O surface, and are taken into lattice positions to form a new oxide layer. Thus, the rate of growth of the oxide depends on the rate j b at which metal ions arrive, according to (4), or dL/dt = Ωj b . Including n = 3 (see Appendix B) we find the metal ion number flux (using Eqs. 
(53), (59) or (A17), and (B4)), j b = − 1 + Z Z D a D b D b − D a n 0 E 1 V T t −1/2 + − 1 + Z Z D a D b ZD b + D a M 21 t −1 + Hn 0 E 1 4V T Z x 2 + M 1 2Z x + K 3 Z + E 1 2Ze t −3/2 + . . . ,(70) where K 3 is a constant of integration (units of flux density times s 3/2 ) determined by interfacial reaction rates (and thus linear in E 1 ). Note that, if D b < D a (as for ions relative to electrons) then the microscopics must give E 1 > 0 for a positive growth rate. Keeping only terms of second order, integration of (70) with respect to time gives L = √ 2At 1/2 + B ln t + . . . ,(71) where A = 2 1 + Z 2 Z 2 D a D b D b − D a 2 n 2 0 E 2 1 Ω 2 V 2 T ,(72)B = D a D b D a + Z 2 D b (1 − Z) M 1 E 1 V T − 1 + Z 2 Z M 21 Ω.(73) VII. SUMMARY AND CONCLUSION We have shown that the approach taken by Mott for parabolic growth of oxide films can be turned into a consistent asymptotic expansion, and we have explicitly given the form of the lowest three orders. Up to four integration constants appear at each order, related to the surface reaction rates. At higher order the bulk film is found to be charged, with a corresponding non-uniform current density. The Appendices present the n = 2 and n = 3 solutions in detail, to show that the method can be used for any n to find the fluxes, concentrations, surface charges and electric field. As a consequence one can have confidence that the Mott solution gives the leading term in the complete solution of the complete set of transport equations. The most easily verifiable prediction from the viewpoint of experiment is the prediction that the first correction to the linear growth law is logarithmic. Because ln t is of order unity, data should be analyzed with an additional constant: L = √ 2At+B ln t+C +. . . . A sampling of the current literature 12-16 did not find enough precision to confirm the logarithmic form of the correction term. We would like to thank Allan Jacobson for his comments and suggestions. 
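To make the size of the logarithmic correction in (71) concrete, here is a small numerical illustration. The coefficients A, B, C below are invented; the physical ones are fixed by (72)-(73) and the surface chemistry.

```python
import math

A, B, C = 1.0e-4, 2.0e-4, 0.0    # made-up values, units suppressed

def L(t):                        # growth law, eq. (71), with the constant C included
    return math.sqrt(2*A*t) + B*math.log(t) + C

# The log term is subleading: its size relative to the parabolic term decays at late times.
ratios = [abs(B*math.log(t)) / math.sqrt(2*A*t) for t in (1e2, 1e4, 1e6)]
assert ratios[0] > ratios[1] > ratios[2]

# Finite-difference check that dL/dt = sqrt(A/(2t)) + B/t: the parabolic rate
# plus a 1/t correction coming from the logarithm.
t, h = 1.0e4, 1.0e-2
dLdt = (L(t + h) - L(t - h)) / (2*h)
assert abs(dLdt - (math.sqrt(A/(2*t)) + B/t)) < 1e-10
print("log correction / parabolic term:", ratios)
```

This slow decay of the ratio is why, as noted above, high experimental precision is needed to resolve the logarithmic term.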
This work was partially supported by the Department of Energy through grant DE-FG02-06ER46278. Appendix A: Explicit Solution for n = 2 By the continuity equations (29) J a2 and J b2 must be uniform, and by (36) they are related by J a2 = ZJ b2 .(A1) For n = 2, the Planck-Nernst equations from (44) and (45) are J a2 + D a V T n 0 E 2 + D a ∂ x N a2 = − D a V T N a1 E 1 , (A2) J b2 − D b V T n 0 E 2 + D b ∂ x N b2 = ZD b V T N b1 E 1 .(A3) Taking spatial derivatives of (A2) and (A3), and using Gauss's Law (41) and the uniformity of J a2 and J b2 , yields D a n 0 V T ρ 2 + D a ∂ 2 x N a2 = − D a V T ∂ x (N a1 E 1 ), (A4) − D b n 0 V T ρ 2 + D b ∂ 2 x N b2 = ZD b V T ∂ x (N b1 E 1 ). (A5) The right hand sides are found from (54); D a n 0 V T ρ 2 + D a ∂ 2 x N a2 = −D a Hn 0 E 2 1 V 2 T , (A6) − D b n 0 V T ρ 2 + D b ∂ 2 x N b2 = D b Hn 0 E 2 1 V 2 T ,(A7) where H is defined in (55). Subtracting (A6) multiplied by 1/D a from (A7) multiplied by Z/D b yields ∂ 2 x ρ 2 − (1 + Z) n 0 e V T ρ 2 = (1 + Z) Hn 0 eE 2 1 V 2 T .(A8) The solution to this equation, with l s ≡ [V T /((1 + Z)n 0 e)] 1/2 = [k B T /((1 + Z)n 0 e 2 )] 1/2 ,(A9) and with new integration constants P (±) 2 , is ρ 2 = P (+) 2 e x/ls + P (−) 2 e −x/ls − HE 2 1 /V T . (A10) From (A10) we infer that N a2 and ZN b2 are polynomials whose terms that are linear or higher are equal, and they may have different exponential terms. Substituting ρ 2 from (A10) into Gauss's Law (41), and integrating ∂ x E 2 yields E 2 = l s P (+) 2 e x/ls − P (−) 2 e −x/ls − HE 2 1 V T x + F 2 ,(A11) where F 2 is a new integration constant with units V-s/m. By (54), (N a1 , N b1 ) are linear in x, so that the right-hand side of the Planck-Nernst equation (A2) is linear in x. Moreover, the continuity equation (29) implies that J a2 is constant. Therefore by (A11) for E 2 , the Planck-Nernst equation allows ∂ x N a2 to be linear, so N a2 can be quadratic in x. Moreover, the exponential terms in (D a /V T )n 0 E 2 + D a ∂ x N a2 must cancel. Finally, from (A10) any linear or quadratic terms in N a2 and ZN b2 must be equal.
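The exponential-plus-constant form of ρ2 can be spot-checked numerically: writing 1/l_s² ≡ (1+Z)n0 e/V_T as in (A9), the ansatz of (A10) (equivalently (64)) solves the screened Poisson equation (A8). The parameter values below are arbitrary.

```python
import math

ls, H, E1, VT = 0.2, -2.0, 0.3, 0.025    # illustrative values
Pp, Pm = 1.7, -0.4                       # stand-ins for P2(+), P2(-)

def rho2(x):                             # ansatz of (A10)/(64)
    return Pp*math.exp(x/ls) + Pm*math.exp(-x/ls) - H*E1**2/VT

# eq. (A8), rewritten as rho2'' - rho2/ls**2 = H*E1**2/(VT*ls**2)
# using 1/ls**2 = (1+Z)*n0*e/VT from (A9):
h = 1e-4
for x in (0.0, 0.1, 0.35):
    d2 = (rho2(x + h) - 2*rho2(x) + rho2(x - h)) / h**2   # finite-difference rho2''
    residual = d2 - rho2(x)/ls**2 - H*E1**2/(VT*ls**2)
    assert abs(residual) < 1e-4
print("the ansatz of (A10) solves the screened Poisson equation (A8)")
```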
With (M 20 , M 21 , M 22 , P (±) a2 ) being new integration constants, we therefore conclude that the following form must hold: N a2 = M 20 + M 21 x + M 22 x 2 + P (+) a2 e x/ls + P (−) a2 e −x/ls . (A12) Use of (31) for n = 2 (ρ 2 = e(ZN b2 − N a2 )), and ρ 2 of (A10) gives N b2 = M 20 Z − 1 Z H E 2 1 V T e + M 21 Z x + M 22 Z x 2 + P (+) 2 + eP (+) a2 Ze e x/ls + P (−) 2 + eP (−) a2 Ze e −x/ls . (A13) Addition of (A2) divided by D a and (A3) divided by D b gives, with (A1) and (54), J a2 1 D a + 1 ZD b + ∂ x (N a2 + N b2 ) = 0.(A14) Substitution for N a2 and N b2 from (A12) and (A13) allows us to solve for some of the constants, M 21 1 + 1 Z + 2M 22 1 + 1 Z x + 1 l s P (+) a2 1 + 1 Z + P (+) 2 Ze e x/ls − 1 l s P (−) a2 1 + 1 Z + P (−) 2 Ze e −x/ls = −J a2 1 D a + 1 Z 2 D b .(A15) So, since J a2 is uniform, comparison of powers of x yields the conditions P (+) 2 = −(1 + Z)eP (+) a2 , P (−) 2 = −(1 + Z)eP (−) a2 ,(A16)J a2 = ZJ b2 = −(1 + Z) D a D b ZD b + D a M 21 ,(A17)M 22 = 0.(A18) We may thus rewrite (A12) and (A13) as N a2 = M 20 + M 21 x + P (+) a2 e x/ls + P (−) a2 e −x/ls , (A19) N b2 = M 20 Z − 1 Z H E 2 1 V T e + M 21 Z x − P (+) a2 e x/ls − P (−) a2 e −x/ls . (A20) Note that in (A2) the exponential and linear coefficients already match, by construction. A new constraint, however, is found by comparing the constant terms, − (1 + Z) D a D b Z 2 D b + D a M 21 + D a n 0 V T F 2 + D a M 21 = − D a M 1 E 1 V T ,(A21) yielding the condition that F 2 = V T M 21 n 0 H − M 1 E 1 n 0 . (A22) We are thus left with four independent constants of integration, which we take to be M 20 , M 21 , P (+) 2 , and P (−) 2 . The reaction rates for electrons and ions at each interface, needed to produce the correct ion fluxes, provide the 4 conditions necessary to solve for these constants. For completeness we note that, from Gauss's Law at each surface (42), J a3 = Hn 0 E 1 4V T x 2 + M 1 2 x + K 3 ,(B3)J b3 = Hn 0 E 1 4V T Z x 2 + M 1 2Z x + K 3 Z + E 1 2Ze . 
(B4) There is a uniform net electric charge flux (i.e., current density) at order n = 3, J 3 = e (ZJ b3 − J a3 ) = E 1 2 ,(B5) due to the discharge of the surfaces in order n = 1. For n = 3, the Planck-Nernst equations from (44) and (45) are J a3 + D a V T n 0 E 3 + D a ∂ x N a3 = − D a V T (N a2 E 1 + N a1 E 2 ) ,(B6)J b3 − D b V T n 0 E 3 + D b ∂ x N b3 = ZD b V T (N b2 E 1 + N b1 E 2 ) .(B7) Taking spatial derivatives of (B6) and (B7), and using Gauss's Law (41), ∂ x J a3 + D a n 0 V T ρ 3 + D a ∂ 2 x N a3 = − D a V T ∂ x (N a2 E 1 + N a1 E 2 ) , (B8) ∂ x J b3 − D b n 0 V T ρ 3 + D b ∂ 2 x N b3 = ZD b V T ∂ x (N b2 E 1 + N b1 E 2 ) . (B9) Subtracting (B8) multiplied by 1/D a from (B9) multiplied by Z/D b yields an equation for ρ 3 , e 1 D b − 1 D a ∂ x J a3 − (1 + Z) n 0 e V T ρ 3 + ∂ 2 x ρ 3 = eE 1 V T (∂ x N a2 + Z 2 ∂ x N b2 ) + e V T (∂ x N a1 + Z 2 ∂ x N b1 )E 2 + e V T (N a1 + Z 2 N b1 )∂ x E 2 .(B10) Substitution for (N a1 , N b1 ) from (54), (N a2 , N b2 ) from (A19) and (A20), E 2 from (A11), and J a3 from (B3) yields ∂ 2 x ρ 3 − 1 l 2 s ∆ρ 3 = e D b − D a D a D b Hn 0 E 1 2V T x + ∂ x N a3 = − M 20 E 1 V T + M 1 F 2 V T + n 0 F 3 V T + K 3 D a ) = −e (ZJ bn − J an ) | (x=0) , = e (ZJ bn − J an ) | (x=L) , (n ≥ 3). (37) Charge conservation over both surface and bulk yields n=1 FIG. 1 . 1The effect of the constant concentration M1 of metal ions. Here N m1 has too high a value of M1. Only the particular value of M1 in N (eq) m1 permits an equal rate of ions to enter and leave the oxide, here taken to be 7 ions per second. by surface reaction rates (and thus linear in E 1 ), the n = 2 coefficients of the concentrations of electrons and ions are given by N a2 = M 20 + M 21 x + P (+) a2 e x/ls + P (−) a2 e −x/ls , (61) is given by (57). Integration of the continuity equations (B1) then yields only a single new integration constant, which we call K 3 : x 2 e −x/ls ,(B18)where F 3 is a new integration constant, with units Vs 3/2 /m. 
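As another sanity check (with invented parameter values), the n = 2 concentrations (A19)-(A20) reproduce the bulk charge density (64) through ρ2 = e(Z N b2 − N a2), eq. (31), once the identification (65)/(A16) between P2(±) and Pa2(±) is used:

```python
import math

e, Z, ls = 1.6e-19, 2.0, 0.2             # illustrative values
H, E1, VT = -2.0, 0.3, 0.025
M20, M21 = 5.0, 3.0
Pa2p, Pa2m = 1.0e18, -3.0e17

P2p = -(1 + Z)*e*Pa2p                    # eq. (65)/(A16)
P2m = -(1 + Z)*e*Pa2m

def Na2(x):                              # eq. (A19)
    return M20 + M21*x + Pa2p*math.exp(x/ls) + Pa2m*math.exp(-x/ls)

def Nb2(x):                              # eq. (A20)
    return (M20/Z - H*E1**2/(VT*e*Z) + (M21/Z)*x
            - Pa2p*math.exp(x/ls) - Pa2m*math.exp(-x/ls))

def rho2(x):                             # eq. (64)
    return P2p*math.exp(x/ls) + P2m*math.exp(-x/ls) - H*E1**2/VT

for x in (0.0, 0.15, 0.4):
    assert abs(e*(Z*Nb2(x) - Na2(x)) - rho2(x)) < 1e-9 * abs(rho2(x))
print("rho2 = e*(Z*Nb2 - Na2) matches eq. (64)")
```

The polynomial parts cancel identically in Z N b2 − N a2, leaving only the exponentials and the uniform term, as stated below (A10).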
Substitution of the coefficients N a1 from (54), N a2 from (A12), E 2 from (A11), J a3 from (B3), and E 3 from (B18) into the n = 3 Planck-Nernst equation for electrons (B6) yields Appendix B: Solution for n = 3 Recall that in order n = 3, all coefficients must be multiplied by t 3/2 to find the physical variables (j, n, E). Using the n = 1 coefficients for electron and metal ion number densities from (54) and the continuity equations from (30) for n = 3, integration of these two equations gives two integration constants. However, these two constants are constrained by charge conservation across the interface at x = 0 (37). The solution to this second order differential equation, with two new integration constants β (+) 1 and β (−) 1 , is where, with substitution of F 2 from (A22), From (B12) we infer that N a3 and ZN b3 may include polynomials whose terms that are quadratic and higher are equal, and may include exponential terms that differ. Substituting ρ 3 from (B12) into Gauss's Law (41), and integrating ∂ x E 3 gives The solution to this first order differential equation, with one new integration constant M 30 , is 1 e x/ls + γ where M 32 = 1 2 Use of (31) for n = 3 (ρ 3 = e(ZN b3 − N a3 )), ρ 3 from Ze e x/ls Ze Ze We now have J a3 , J b3 , E 3 , N a3 and N b3 with five constants of integration, K 3 , β As for n = 2, in (B30) the exponential, quadratic, and linear terms match, by construction. A new constraint, however, is found by comparing the constant terms, Substituting M 31 from (B21) yields a relation between integration constants F 3 and K 3 . We are thus left with four independent constants of integration, which we take to be K 3 , M 30 , β (+) 1 , and β (−) 1 . The reaction rates for electrons and ions at each interface, needed to produce the correct ion fluxes, provide the four conditions necessary to solve for these constants. For completeness, we note that from Gauss's Law at each surface (42), so there is no net charge in the system.
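The no-net-charge statement can also be verified numerically at order n = 2, where all ingredients are explicit: eq. (69) follows from ρ2 = ∂x E2 together with the surface form of Gauss's Law (42), taking Σ(0)2 = E2(0) and Σ(L)2 = −E2(L). The parameter values below are invented.

```python
import math

ls, H, E1, VT, F2, Lox = 0.2, -2.0, 0.3, 0.025, 0.1, 1.0   # illustrative values
P2p, P2m = -0.48, 0.144

def rho2(x):                             # eq. (64)
    return P2p*math.exp(x/ls) + P2m*math.exp(-x/ls) - H*E1**2/VT

def E2(x):                               # eq. (66), with the constant terms lumped into F2
    return ls*(P2p*math.exp(x/ls) - P2m*math.exp(-x/ls)) - H*E1**2/VT*x + F2

# midpoint-rule integral of rho2 over the film thickness Lox
n = 20000
integral = sum(rho2((i + 0.5)*Lox/n) for i in range(n)) * Lox/n

# Sigma(0)_2 + Sigma(L)_2 + bulk charge, eq. (69):
total = E2(0.0) + (-E2(Lox)) + integral
assert abs(total) < 1e-6
print("net charge at order n = 2 vanishes:", total)
```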
(B12), and N a3 from (B20) gives.
References
1. N. F. Mott, Trans. Faraday Soc. 35, 1175 (1939).
2. N. F. Mott, Trans. Faraday Soc. 35, 472 (1940).
3. N. F. Mott, Trans. Faraday Soc. 43, 429 (1947).
4. G. Tammann, Z. anorg. allg. Chem. 111, 78 (1920).
5. N. Cabrera and N. F. Mott, Rep. Prog. Phys. 12, 163 (1949).
6. C. Wagner, Atom Movements (Am. Soc. for Metals, Cleveland, Ohio, 1951).
7. W. M. Saslow, Phys. Rev. Lett. 76, 4849 (1996).
8. Y. Yang and W. M. Saslow, J. Chem. Phys. 109, 10331 (1998).
9. D. R. Crow, Principles and Applications of Electrochemistry, 4th ed. (Blackie Academic & Professional, 1994).
10. J. M. Rubi and S. Kjelstrup, J. Phys. Chem. B 107, 13471 (2003).
11. J. Fleig, Phys. Chem. Chem. Phys. 7, 2027 (2005).
12. A. A. Dravnieks, J. Phys. Chem. 55, 540 (1951).
13. M. Martin, Thin Solid Films 250, 61 (1994).
14. S. K. Lahiri, Microelectronics Journal 29, 335 (1998).
15. L. P. H. Jeurgens et al., J. Appl. Phys. 92, 1649 (2002).
16. C. Zhong et al., Appl. Phys. A 90, 263 (2008).
[]
[ "Long strings and chiral primaries in the hybrid formalism", "Long strings and chiral primaries in the hybrid formalism" ]
[ "Lorenz Eberhardt [email protected] \nInstitut für Theoretische Physik\nETH Zurich\nCH-8093ZürichSwitzerland\n", "Kevin Ferreira [email protected] \nInstitut für Theoretische Physik\nETH Zurich\nCH-8093ZürichSwitzerland\n" ]
[ "Institut für Theoretische Physik\nETH Zurich\nCH-8093ZürichSwitzerland", "Institut für Theoretische Physik\nETH Zurich\nCH-8093ZürichSwitzerland" ]
[]
We revisit two related phenomena in AdS 3 string theory backgrounds. At pure NS-NS flux, the spectrum contains a continuum of long strings which can escape to the boundary of AdS 3 at a finite cost of energy. Related to this are certain gaps in the BPS spectrum one computes from the RNS worldsheet description. One expects that both these effects disappear when perturbing slightly away from the pure NS-NS flux background. We employ the hybrid formalism for mixed flux backgrounds to demonstrate directly from the worldsheet that this is indeed the case.
10.1007/jhep02(2019)098
[ "https://arxiv.org/pdf/1810.08621v1.pdf" ]
73,678,703
1810.08621
3cf0b7178892f94197d4cc83ee82d3a4f438a738
Long strings and chiral primaries in the hybrid formalism 19 Oct 2018 Lorenz Eberhardt [email protected] Institut für Theoretische Physik ETH Zurich CH-8093 Zürich Switzerland Kevin Ferreira [email protected] Institut für Theoretische Physik ETH Zurich CH-8093 Zürich Switzerland Prepared for submission to JHEP Introduction String theory in AdS 3 backgrounds exhibits a variety of rich and interesting features. In type IIB string theory, AdS 3 can be supported by either pure NS-NS flux, pure R-R flux, or a mixture thereof. Because of this, AdS 3 backgrounds typically have a large number of moduli and surprising phenomena occur at various places in the moduli space of the theory. In this paper, we will mostly discuss the backgrounds AdS 3 × S 3 × M 4 where M 4 = T 4 or K3. These backgrounds are realised in string theory as the near-horizon limit of the D1-D5 brane system compactified on the manifold M 4 [1][2][3]. As such, the system is characterised by the values of the fluxes of D1-D5, F1-NS5 and D3-branes, which can wrap various cycles of M 4 . The fluxes transform under the U-duality group, so that different backgrounds are classified by the norm of a charge vector. 1 It was first discussed by Seiberg and Witten [3] that there is a codimension four locus in the moduli space in which the background becomes 'singular'.
This means that the brane system can separate at no cost of energy. In other words, this locus is a wall of marginal stability of the system. The singularity manifests itself in a variety of ways, with some drastic consequences for the theory. In particular, the string spectrum changes discontinuously as we move through the singular locus in the moduli space. The most extreme changes in the spectrum on the singular locus are the following: 1. The string spectrum develops a continuum of states on the singular locus, due to the fact that a finite amount of energy suffices for a string to reach the boundary of AdS 3 . These so-called long strings can have arbitrary radial momentum above a certain threshold. 2 2. The spectrum of BPS operators is discontinuous on the singular locus. In particular, there are fewer BPS operators on the singular locus than outside of it. Since chiral primaries are 1 2 -BPS operators, we will refer to this phenomenon as 'missing chiral primaries'. These predictions can be understood by considering the worldvolume CFT description of the D1-D5 system as follows. The theory on the intersection of the D1- and D5-branes flows to a two-dimensional CFT in the IR limit, which is identified with the sigma-model on the Higgs branch of the worldvolume gauge theory. On the other hand, the Coulomb branch describes the emission of branes from the system. On the singular locus these two branches meet classically in the small instanton limit, but are otherwise disconnected. As we shall review in this paper, this statement is corrected quantum mechanically. In the quantum theory, the two branches are connected via an infinitely long tube. A Liouville field associated with this tube is responsible for the continuum part of the spectrum. In the same vein, chiral primaries can get swallowed in the tube and disappear from the Higgs branch, leading to the missing chiral primaries. These expectations are hard to confirm explicitly from the string theory side.
The case of pure NS-NS flux lies on the singular locus of the theory, and one indeed observes the existence of a continuum of long strings and missing chiral primaries in the spectrum. Nevertheless, while there is an exact description of string theory on pure NS-NS backgrounds via WZW-models [4][5][6][7][8][9][10], there is currently no exactly solvable description of string theory with mixed flux, which would allow us to move away from the singular locus. We should mention that while integrability techniques give some insight into the string theory spectrum on AdS 3 backgrounds [11], they are not (yet) able to reproduce the qualitative behaviour described above. Integrability techniques require the decompactification of the worldsheet, which in turn requires a large amount of background flux. On the other hand, missing chiral primaries and long strings are effects which are non-perturbative in the inverse flux, and hence are not readily visible in the decompactification limit, even when including wrapping corrections. A recent study [12] in this context shows that the string theory spectrum depends essentially only on the directions normal to the singular locus. In this paper, we confirm the expectations above using the hybrid formalism of Berkovits, Vafa and Witten [13] to describe string theory on AdS 3 × S 3 × M 4 with mixed flux. This formalism consists of a sigma-model on the supergroup PSU(1, 1|2) and a sigma-model on M 4 , coupled together by ghosts. This is an exact worldsheet description of the theory, but it is exceedingly hard to solve exactly. However, one might hope to understand this theory just enough to observe the qualitative features we described above. It is natural to expect that the emergence of these features is largely attributable to the PSU(1, 1|2) part of the hybrid formalism. With this in mind, we take a closer look at the spectrum of the sigma-model CFT on the supergroup PSU(1, 1|2).
This supergroup has the special property of having vanishing dual Coxeter number, which guarantees the conformal symmetry of the worldsheet theory even away from the pure NS-NS case. We use the algebraic methods of [14][15][16] to analyse the spectrum of the supergroup sigma-model. In [16] these methods were used to derive the full BMN spectrum in a background with mixed flux from a large-charge limit of the worldsheet theory. In general, at the WZW-point (pure NS-NS flux) the spectrum of the theory is constrained by enhanced worldsheet symmetries. However, this constraint is absent in the case of mixed flux, which results in the appearance of new representations of the worldsheet CFT. These will allow us to retrieve the missing chiral primaries as soon as an infinitesimal amount of R-R flux is turned on, i.e. as soon as we leave the singular locus in the moduli space. On the other hand, we will explicitly show that the conformal weights of excited states in the continuous representations describing long strings acquire a non-vanishing imaginary part. This forbids these representations from appearing in a unitary string theory. As a byproduct of our analysis, we fill a small gap in the literature on the SL(2, R)-WZW model. Remarkably, there are two different bounds on the allowed SL(2, R)-spins in the literature. One is the unitarity bound of the no-ghost theorem [17,18], and the other is the Maldacena-Ooguri bound [7], which arises from demanding square integrability of the respective harmonic functions in the quantum mechanical limit. The Maldacena-Ooguri bound is stronger, and it is somewhat puzzling that it should not be derivable from unitarity alone. We discover that the Maldacena-Ooguri bound arises in fact from considering the R-sector no-ghost theorem; in the literature, only the NS-sector no-ghost theorem was discussed. Hence at least in the superstring, the Maldacena-Ooguri bound arises purely from unitarity. This paper is organised as follows.
In Section 2 we review the arguments that lead to the prediction of long strings, their disappearance and the missing chiral primaries. After this, we set the stage for our computations by reviewing the algebraic treatment of supergroup sigma-models in Section 3 and explaining its application to the case of PSU(1, 1|2). With these preparations at hand, we analyse their implications for the long strings and missing chiral primaries in Section 4. This involves in particular the computation of conformal weights of single-sided excitations in the supergroup CFT. We discuss our findings in Section 5. Three Appendices with background on the affine Lie superalgebra psu(1, 1|2), on the level n spectrum of the supergroup sigma-model and the R-sector no-ghost theorem complement the discussion. The sigma-model description This section is mostly a review of the material appearing in [3,19,20]. The D-brane setup We consider the D1-D5 system compactified on M 4 , where M 4 = T 4 or K3. The D-branes are wrapped as follows:
                0 1 2 3 4 5 6 7 8 9
Q 5 D5-branes   × ×         × × × ×
Q 1 D1-branes   × ×         ∼ ∼ ∼ ∼   (2.1)
The manifold M 4 is located in the directions 6789, × denotes directions in which the brane extends, ∼ denotes directions in which the brane is smeared. We can also consider the inclusion of F1-strings and NS5-branes, and moreover D3-branes can wrap any of the n + 6 two-cycles of M 4 , where n = 0 for T 4 and n = 16 for K3. The charge vector parametrising different configurations of the system takes values in the even self-dual lattice Γ 5,5+n . The U-duality group is the orthogonal group O(Γ 5,5+n ), under which the charge vector transforms in the fundamental representation. In the following we will assume that this charge vector is primitive, i.e. not a non-trivial multiple of another charge vector. If this is not so, the brane system can break into subsystems at no cost of energy at any point in the moduli space, which renders the dual CFT singular.
Note that the U-duality group acts transitively on the set of primitive charge vectors of a fixed norm. Therefore we can always apply a U-duality transformation to bring the charge vector into the standard form Q ′ 1 = N = Q 1 Q 5 and Q ′ 5 = 1, with all other charges vanishing [21]. The moduli space is provided by the scalars of the compactification. Locally, they parametrise the homogeneous space O(5, 5 + n) O(5) × O(5 + n) , (2.2) on which U-duality acts and which leads to global identifications. In the near-horizon limit some of the moduli freeze out and the charge vector becomes fixed. The remaining scalars parametrise locally the moduli space O(4, 5 + n) O(4) × O(5 + n) ,(2.3) and U-duality is reduced to the little group fixing the charge vector [21,22]. Seiberg and Witten studied under what circumstances the system can break apart at no cost of energy [3,23]. For a primitive charge vector, this happens on a codimension 4 subspace of the moduli space. On this sublocus, the instability should be reflected as a singularity in the dual CFT. In particular, the pure NS-NS flux background lies on this locus and is hence a singular region in the moduli space. In this way, for pure NS-NS flux fundamental strings can leave the system and can reach the boundary of AdS 3 at a finite cost of energy. These are the so-called long strings. These considerations predict the existence of a continuum of states above a certain threshold for pure NS-NS flux. Such states indeed exist in the worldsheet description of string theory, and are associated with continuous representations of the sl(2, R) k -current algebra [7]. The gauge theory description In this part we review the gauge theory worldvolume description of the D1-D5 system. For simplicity, we work in the case in which neither D3-branes, F1-strings nor NS5branes are present. The worldvolume theory of the D5-branes is given by a U(Q 5 ) gauge theory coupling to the two-dimensional defects given by the D1-branes. 
In the low-energy limit, the dynamics becomes essentially a two-dimensional gauge theory which lives on the intersection of the D1-D5 branes [2], and which flows to an N = (4, 4) superconformal field theory in the IR. In fact, the IR fixed-point is described by two superconformal field theories -one corresponding to the Coulomb branch and one to the Higgs branch of the theory. 3 There are a number of ways of justifying this, the simplest being the comparison of central charges and R-symmetries [19]. Indeed, these two SCFTs have different sets of massless fields, and hence different central charges. Furthermore, since the scalars transform non-trivially under the various su(2) R-symmetries and obtain non-trivial vacuum expectation values, the R-symmetry is generically broken down to different su(2)'s. Let us have a closer look at the different central charges. On the Coulomb branch, the gauge group is generically broken to U(1) Q 5 , while all other fields are massive. The central charge is then given by the Q 5 massless gauge vector multiplets, that is c = 6Q 5 . On the other hand, on the Higgs branch only n H − n V hypermultiplets remain massless, while all other fields become massive. The central charge is then c = 6(n H − n V ), where n H is the number of hypermultiplets and n V the number of vector multiplets. Evaluating this number gives c = 6Q 1 Q 5 , M 4 = T 4 , 6(Q 1 Q 5 + 1) , M 4 = K3 . (2.4) We hence conclude that the central charges on the Higgs and Coulomb branches are generically different, and therefore the IR fixed-point is described by two decoupled SCFTs. 4 These two branches meet classically at the small instanton singularity of the gauge theory. In the quantum theory, the Coulomb branch metric is corrected and develops a tube near the small instanton singularity [3]. Hence the Coulomb branch moves infinitely far away from the Higgs branch. 
For the Higgs branch the story is more subtle: since it is hyperkähler, it is not renormalised at the quantum level. Nevertheless, the description of the Higgs branch SCFT as a sigma-model on the classical Higgs branch breaks down near the singularities of the moduli space, and one has to use a different set of variables. In those variables, the small instanton singularity exhibits also a tube-like behaviour on the Higgs branch [3]. 5 This implies that an instanton can travel through the tube and come out on the Coulomb branch. This is the gauge theory description of the emission of a D1-brane, i.e. of the long strings. In this process the central charge does not change since, for example for M 4 = T 4 , c tot = 6Q 1 Q 5 = 6(Q 1 − 1)Q 5 + 6Q 5 ,(2.5) where we have used the central charge for the Coulomb and Higgs branch. Let us slowly move away from the singular locus in the moduli space of the theory. From the gauge theory picture we learn that the tube disappears from the moduli space, since the sigma-model description is always a good description. Note that this happens immediately at the slightest perturbation away from the singular locus. This means that when perturbing the theory slightly, the continuum provided by the long strings should completely disappear. The situation is depicted schematically in Figure 1. There is one related phenomenon occurring. Starting at a non-singular point in the moduli space, as we slowly approach the singular locus the small instanton singularity will form at some places on the Higgs branch. The support of the cohomology cycles associated with the instanton shrinks to zero size in this process and, as the tube forms, these cycles will move down the tube and disappear from the Higgs branch, see Figure 1. As cohomology classes correspond to chiral primaries in the CFT description, this means that these chiral primaries are missing on the singular locus. 
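The central-charge bookkeeping of (2.4)-(2.5) amounts to simple arithmetic, spot-checked here for a few sample flux values:

```python
# Central charges from eq. (2.4) and conservation under D1-brane emission, eq. (2.5).
for Q1, Q5 in [(2, 3), (5, 7), (10, 4)]:
    c_higgs_T4 = 6*Q1*Q5              # Higgs branch, M4 = T4
    c_higgs_K3 = 6*(Q1*Q5 + 1)        # Higgs branch, M4 = K3
    c_coulomb = 6*Q5                  # Coulomb branch: Q5 massless vector multiplets
    # Emitting one D1-brane (a long string) preserves the total central charge:
    assert c_higgs_T4 == 6*(Q1 - 1)*Q5 + c_coulomb
    assert c_higgs_K3 == c_higgs_T4 + 6
print("central charge is conserved when a D1-brane is emitted")
```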
In this way, all cohomology classes which are obtained by multiplication in the chiral ring vanish from the spectrum. From a string theory point of view, this means that all multi-particle chiral primary states obtained from a given chiral primary are missing. 6 It is hard to say which cycles are these from a gauge theory perspective, since there is no good explicit description of the instanton moduli space on T 4 or K3. In [3,26] it was argued that the first missing chiral primary should have degree (Q 5 − 1, Q 5 − 1), i.e. conformal weights h =h = 1 2 (Q 5 − 1). However, more chiral primaries are expected to be missing from the spectrum. We will argue in the worldsheet description that all cohomology classes of degrees ((w + 1)Q 5 − 1, (w + 1)Q 5 − 1) (2.6) are in fact missing, where w ∈ {0, 1, . . . , Q 1 } corresponds to the spectral flow parameter on the worldsheet. 7 It would be interesting to confirm this directly from the instanton moduli space side. The worldsheet description In this section we introduce the worldsheet description of string theory in AdS 3 × S 3 × M 4 with mixed NS-NS and R-R fluxes, where M 4 = T 4 or K3. We start by briefly describing the hybrid formalism, and then concentrate our attention on its main constituent: the PSU(1, 1|2) sigma-model. The hybrid formalism The hybrid formalism [13] gives a covariant formalism describing string theory on AdS 3 × S 3 × M 4 and has the following ingredients: 1. A sigma-model on the supergroup PSU(1, 1|2). 2. A topologically twisted c = 6 N = (4, 4) CFT. This can be a sigma-model on either T 4 or K3. 3. Two additional ghost fields ρ and σ coupled to the remaining fields of the theory. The hybrid formalism makes half of the spacetime supersymmetries manifest, but the existence of ghost couplings makes the theory much more complicated. However, we are only interested in the emergence of the qualitative features outlined in the previous section. 
We expect these features to be already present at the level of the sigma-model on PSU(1,1|2), and we will see that this is indeed the case. We will henceforth focus only on the sigma-model on the supergroup PSU(1,1|2). The sigma-model is characterised by two parameters $k$ and $f$, see the action (3.2) below. In the hybrid formalism, these parameters are related to the background fluxes as follows. The amount of NS-NS flux is given by $k$, i.e. $Q_5^{NS} \equiv k$. Since the hybrid formalism is a perturbative string theory description, we expect $k$ to be quantised. In the sigma-model, this follows from the usual topological argument for WZW models.

7 The importance of spectral flow in the worldsheet description of $AdS_3$ was not yet realised when [3] was published, which explains the differences between our statement and the one in [3,26]. Note also that our statements hold for $Q_5 \geq 2$, since only for these values does a complete worldsheet description exist. See however [27,28] for a recent proposal on how to make sense of the $Q_5 = 1$ theory, in which the missing chiral primaries are present.

Furthermore, $f^{-2}$ is equal to the radius of $AdS_3$ (in units of the string length). Solving the supergravity equations of motion, this is related to the background fluxes as

$$\frac{4\pi^2 R_{AdS}^2}{\alpha'} = \frac{1}{f^2} = \sqrt{\big(Q_5^{NS}\big)^2 + g^2 \big(Q_5^{RR}\big)^2}\,, \qquad (3.1)$$

where $g$ is the string coupling constant. Since $g \ll 1$ in the perturbative treatment, $f$ effectively becomes a continuous parameter of the theory. The pure NS-NS background is characterised by $Q_5^{RR} = 0$, that is $kf^2 = 1$.8 This corresponds to the WZW-point of the sigma-model.

3.2 The sigma-model on PSU(1,1|2)

In this section we review the formalism developed in [14,15] for sigma-models on supergroups $G$ with vanishing dual Coxeter number. This formalism was further studied and extended in [16], where it was used to compute the plane-wave spectrum of string theory in $AdS_3$ backgrounds with mixed flux.
In [15,16] the two-parameter family of sigma-models on a supergroup $G$ — which for our purposes is taken to be PSU(1,1|2) — with action

$$S[g] = -\frac{1}{4\pi f^2} \int d^2z\, \text{Tr}\big(\partial g g^{-1}\, \bar\partial g g^{-1}\big) + k\, S_{WZ}[g]\,, \qquad (3.2)$$

was analysed. Here $S_{WZ}[g]$ denotes the standard WZW-term, and $g$ the embedding coordinate on the supergroup. The model possesses a $G \times G$ symmetry, acting by left- and right-multiplication on $g$. We denote by $j(z)$, $\bar j(z)$ the conserved currents associated with this $G \times G$ symmetry, whose components are

$$j_z = -\frac{1 + kf^2}{2f^2}\, \partial g g^{-1}\,, \qquad j_{\bar z} = -\frac{1 - kf^2}{2f^2}\, \bar\partial g g^{-1}\,,$$
$$\bar j_z = -\frac{1 - kf^2}{2f^2}\, g^{-1} \partial g\,, \qquad \bar j_{\bar z} = -\frac{1 + kf^2}{2f^2}\, g^{-1} \bar\partial g\,. \qquad (3.3)$$

Importantly, the currents are in general neither holomorphic nor anti-holomorphic, except at the WZW-point $kf^2 = 1$. From these currents we can define modes $Q^a_n$, $P^a_n$, $\bar Q^{\bar a}_n$, $\bar P^{\bar a}_n$ (see (3.5) and (3.6) below) whose (anti)commutation relations are [14-16,29]9

$$[Q^a_m, Q^b_n] = km\, \kappa^{ab} \delta_{m+n,0} + i f^{ab}{}_c\, Q^c_{m+n}\,, \qquad [Q^a_m, \bar P^{\bar b}_n] = km\, A^{a\bar b}_{m+n}\,,$$
$$[Q^a_m, P^b_n] = km\, \kappa^{ab} \delta_{m+n,0} + i f^{ab}{}_c\, P^c_{m+n}\,, \qquad [\bar Q^{\bar a}_m, A^{b\bar b}_n] = i f^{\bar a\bar b}{}_{\bar c}\, A^{b\bar c}_{m+n}\,,$$
$$[\bar Q^{\bar a}_m, \bar Q^{\bar b}_n] = -km\, \kappa^{\bar a\bar b} \delta_{m+n,0} + i f^{\bar a\bar b}{}_{\bar c}\, \bar Q^{\bar c}_{m+n}\,, \qquad [Q^a_m, A^{b\bar b}_n] = i f^{ab}{}_c\, A^{c\bar b}_{m+n}\,,$$
$$[\bar Q^{\bar a}_m, \bar P^{\bar b}_n] = -km\, \kappa^{\bar a\bar b} \delta_{m+n,0} + i f^{\bar a\bar b}{}_{\bar c}\, \bar P^{\bar c}_{m+n}\,, \qquad [\bar Q^{\bar b}_m, P^a_n] = -km\, A^{a\bar b}_{m+n}\,, \qquad (3.4)$$

with all other commutation relations vanishing. Here and in the following, the indices $a, b, \ldots$ and $\bar a, \bar b, \ldots$ denote adjoint $\mathfrak{g}$-indices. We will refer to this algebra as the mode algebra. This algebra replaces the usual Kač-Moody algebra $\mathfrak{g}_k \times \mathfrak{g}_k$ present at the WZW-point. In fact, the Kač-Moody algebra $\mathfrak{g}_k \times \mathfrak{g}_k$ is still present in the form of the subalgebra spanned by the modes $Q^a_m$ and $\bar Q^{\bar a}_m$.

8 In the following we restrict to $Q_5^{NS} > 0$, and therefore to $kf^2 > 0$. For $Q_5^{NS} < 0$, we would have $kf^2 = -1$.

9 We will always write commutators; these are understood to be anticommutators for two fermionic generators.
The modes $Q^a_n$, $P^a_n$, $\bar Q^{\bar a}_n$, $\bar P^{\bar a}_n$ are defined as

$$Q^a_n = X^a_n + Y^a_n\,, \qquad P^a_n = 2kf^2 \left( \frac{X^a_n}{1 + kf^2} - \frac{Y^a_n}{1 - kf^2} \right),$$
$$\bar Q^{\bar a}_n = \bar X^{\bar a}_n + \bar Y^{\bar a}_n\,, \qquad \bar P^{\bar a}_n = -2kf^2 \left( \frac{\bar X^{\bar a}_n}{1 - kf^2} - \frac{\bar Y^{\bar a}_n}{1 + kf^2} \right), \qquad (3.5)$$

where

$$X^a_n \equiv \oint_{|z|=R} \frac{dz}{R}\, z^n\, j^a_z(z)\,, \qquad Y^a_n \equiv \oint_{|z|=R} \frac{dz}{R}\, z^{n-1} \bar z\, j^a_{\bar z}(z)\,, \qquad (3.6)$$

and analogously for the right-currents $\bar j^{\bar a}(z)$, which give rise to the operators $\bar X^{\bar a}_n$, $\bar Y^{\bar a}_n$. Finally, $A^{a\bar a}_n$ are the modes of a bi-adjoint field defined as

$$A^{a\bar a} = \text{STr}\big(g^{-1} t^a g\, t^{\bar a}\big)\,, \qquad (3.7)$$

where $t^a$, $t^{\bar a}$ are the generators of each of the two copies of $\mathfrak{g}$ in the adjoint representation. This field has vanishing conformal dimension in the quantum theory, and hence can mix with the other operators. Since the currents are neither holomorphic nor anti-holomorphic, there is no sense in which they are left- or right-moving. This motivates our (slightly unusual) uniform conventions in (3.6). The sigma-model (3.2) has quantum conformal symmetry for any values of $k$ and $f$. The energy-momentum tensor is given by

$$T(z) = \frac{2f^2}{(1 + kf^2)^2}\, \kappa_{ab}\, (j^a_z j^b_z)(z) = \frac{2f^2}{(1 - kf^2)^2}\, \kappa_{\bar a\bar b}\, (\bar j^{\bar a}_z \bar j^{\bar b}_z)(z)\,, \qquad (3.8)$$

and the modes of $T(z)$ will be denoted $L_n$ as usual. It was shown in [15] that this energy-momentum tensor is indeed holomorphic. One can use (3.8) to express the Virasoro modes in terms of bilinears in the mode algebra, and then use the relations of the mode algebra to derive the following commutation relations:

$$[L_m, Q^a_n] = -\frac{1 + kf^2}{2}\, n\, Q^a_{m+n} - \frac{1 - k^2 f^4}{4kf^2}\, n\, P^a_{m+n}\,,$$
$$[L_m, P^a_n] = -kf^2\, n\, Q^a_{m+n} - \frac{1 - kf^2}{2}\, n\, P^a_{m+n} - i f^2 f^a{}_{bc}\, (Q^b P^c)_{m+n}\,,$$
$$[L_m, \bar Q^{\bar a}_n] = -\frac{1 - kf^2}{2}\, n\, \bar Q^{\bar a}_{m+n} + \frac{1 - k^2 f^4}{4kf^2}\, n\, \bar P^{\bar a}_{m+n}\,,$$
$$[L_m, \bar P^{\bar a}_n] = kf^2\, n\, \bar Q^{\bar a}_{m+n} - \frac{1 + kf^2}{2}\, n\, \bar P^{\bar a}_{m+n} - i f^2 f^{\bar a}{}_{\bar b\bar c}\, (\bar Q^{\bar b} \bar P^{\bar c})_{m+n}\,. \qquad (3.9)$$

Evidently the Virasoro tensor does not act diagonally on the Hilbert space spanned by the modes. Furthermore, the left-algebra (i.e.
the unbarred modes) does not commute with the right-algebra (the barred modes), which makes it difficult to impose a highest weight condition for both the left- and the right-algebra. This fact prevents us from computing conformal weights of excitations involving both barred and unbarred oscillators. In [16] these problems were solved in a BMN-like limit, in which the algebra contracts and the action of the Virasoro modes can be diagonalised. Despite this, it is still possible to access part of the worldsheet spectrum for any values of $k$ and $f$ by looking only at one half of the mode algebra, namely the subalgebra generated by $Q^a_m$ and $P^a_m$. For this subalgebra, we can define lowest weight representations as usual: affine primary states $|\Phi\rangle$ in a representation $R_0$ of psu(1,1|2) are defined by [15,16]

$$Q^a_m |\Phi\rangle = 0\,, \quad m > 0\,, \qquad Q^a_0 |\Phi\rangle = t^a |\Phi\rangle\,, \qquad P^a_m |\Phi\rangle = 0\,, \quad m \geq 0\,, \qquad (3.10)$$

where $t^a$ are the generators of $\mathfrak{g}$ in the representation $R_0$. It then follows that their conformal weight is [15,16]

$$h(|\Phi\rangle) = \tfrac{1}{2} f^2 C(R_0)\,, \qquad (3.11)$$

where $C(R_0)$ denotes the quadratic Casimir of $\mathfrak{g}$ in $R_0$. The associated lowest weight representation can then be constructed by acting with the negative modes $Q^a_{m<0}$ and $P^a_{m<0}$ on $|\Phi\rangle$. These 'chiral' representations will be sufficient for our purposes in this paper. It is useful to recall the possible representations $R_0$ arising in the spectrum of the model. For large values of $k$, these can be derived by a mini-superspace analysis [30], which essentially gives the spectrum proposed by Maldacena and Ooguri [7]. More precisely, representations of psu(1,1|2) are induced from representations of its bosonic subalgebra $sl(2,\mathbb{R}) \oplus su(2)$ [30]. The relevant representations of $su(2)$ are the finite-dimensional ones, labelled by their spin $\ell$. The relevant representations of $sl(2,\mathbb{R})$ fall into two categories:

1. Discrete representations.
These are lowest weight representations of the $sl(2,\mathbb{R})$ zero-mode algebra, labelled by the spin $j$ of the lowest weight state. These representations give rise to the so-called short string excitations, which are important for understanding the phenomenon of the missing chiral primaries.

2. Continuous representations. These are neither lowest nor highest weight representations. They can be viewed as representations of spin $j = \tfrac{1}{2} + ip$, where the parameter $p$ determines the quadratic Casimir as $C = -2j(j-1) = \tfrac{1}{2} + 2p^2$.10

10 Note the additional factor of two in our conventions for the Casimir.

Since these representations depend on a continuous parameter $p$, they are commonly referred to as continuous representations. In the string theory setting, they give rise to long string states with radial momentum $p$. Additionally, the spectrally flowed images of these representations may appear in the spectrum. In this paper we will be mostly concerned with the unflowed representations.

4 The spectrum of the sigma-model

In general we would like to determine the conformal weight of states obtained by the action of normal-ordered products on a primary state $|\Phi\rangle$, such as $Q^a_n |\Phi\rangle$, $(Q^a P^b)_n |\Phi\rangle$, $(Q^a \bar Q^{\bar a})_n |\Phi\rangle$, $(A^{a\bar a})_n |\Phi\rangle$, and others. In the following we will be able to compute the conformal weight of a state containing either solely unbarred oscillators or solely barred oscillators, and no $A^{a\bar a}_m$ modes. The reason for this is that $L_0$ mixes only finitely many states constructed using solely unbarred oscillators, say, and in this way its eigenvalues can be computed. In this case, we will be able to make use of the definition of affine primary states (3.10) associated with the 'chiral' lowest weight representations introduced in the previous section. When including also barred oscillators or the $A^{a\bar a}$-field, infinitely many states get mixed under the action of $L_0$, and its eigenvalues cannot be extracted with a finite amount of calculation.
This is a difficulty which we have not been able to overcome. We are then interested in the conformal weights of the single-sided states of the type (4.1), i.e. states built from solely unbarred (or solely barred) oscillators acting on $|\Phi\rangle$. For simplicity, in the following we illustrate the computation of the conformal weights of such states using single-oscillator excitations, i.e. using states of the type

$$Q^a_{-n} |\Phi\rangle\,, \qquad P^a_{-n} |\Phi\rangle\,. \qquad (4.2)$$

The multi-oscillator states can be treated using the same methods, but we have not managed to find a closed form solution. Nevertheless, we will be able to derive strong results concerning the expected qualitative behaviour of the spectrum described in Section 2 using only (4.2). In particular, in Subsection 4.2 we will derive a unitarity bound on the values that $kf^2$ can take, in Subsection 4.3 we will argue that the continuous representations cannot be part of the CFT spectrum, and in Subsection 4.4 we will retrieve the chiral primaries that are missing at the pure NS-NS point.

4.1 The spectrum at the first level

The states in the spectrum at the first level are

$$Q^a_{-1} |\Phi\rangle\,, \qquad P^a_{-1} |\Phi\rangle\,. \qquad (4.3)$$

They mix under the application of $L_0$ as follows:

$$L_0\, Q^a_{-1} |\Phi\rangle = \left[ \Big( h(\Phi) + \tfrac{1}{2}(1 + kf^2) \Big) Q^a_{-1} + \frac{1 - k^2 f^4}{4kf^2}\, P^a_{-1} \right] |\Phi\rangle\,, \qquad (4.4)$$

$$L_0\, P^a_{-1} |\Phi\rangle = \left[ kf^2\, Q^a_{-1} + \Big( h(\Phi) + \tfrac{1}{2}(1 - kf^2) \Big) P^a_{-1} - i f^2 f^a{}_{bc}\, (Q^b P^c)_{-1} \right] |\Phi\rangle$$
$$= \left[ kf^2\, Q^a_{-1} + \Big( h(\Phi) + \tfrac{1}{2}(1 - kf^2) \Big) P^a_{-1} - i f^2 f^a{}_{bc}\, P^c_{-1} t^b \right] |\Phi\rangle\,. \qquad (4.5)$$

We have used the definition of an affine primary (3.10) and the commutation relations (3.9). Note that the structure constants $i f_b{}^a{}_c = (t^b_{\text{ad}})^a{}_c$ are the generators in the adjoint representation, and hence

$$i f^a{}_{bc}\, t^b = -\kappa_{bd}\, (t^d_{\text{ad}})^a{}_c\, t^c\,. \qquad (4.6)$$

This can be expressed as a difference of Casimirs:

$$\kappa_{bd}\, t^b_{\text{ad}}\, t^d = \tfrac{1}{2} \Big( \kappa_{bd} (t^b_{\text{ad}} + t^b)(t^d_{\text{ad}} + t^d) - \kappa_{bd}\, t^b_{\text{ad}} t^d_{\text{ad}} - \kappa_{bd}\, t^b t^d \Big) = \tfrac{1}{2} \big( C_{R_0 \otimes \text{ad}} - C_{R_0} \big)\,, \qquad (4.7)$$

where we have used that the Casimir of the adjoint representation vanishes, $C_{\text{ad}} = 0$. Note that the states (4.3) transform in the (reducible) representation $R_0 \otimes \text{ad}$.
Restricting to an irreducible subrepresentation $R_1 \subset R_0 \otimes \text{ad}$, we find

$$i f^2 f^a{}_{bc}\, t^b = -\tfrac{1}{2} f^2 \big( C(R_1) - C(R_0) \big)\, \delta^a_c = -\tfrac{1}{2} f^2 \Delta C\, \delta^a_c\,, \qquad (4.8)$$

where we denoted by $\Delta C$ the difference of Casimirs.11 Thus $L_0$ mixes only $Q^a_{-1}|\Phi\rangle$ and $P^a_{-1}|\Phi\rangle$, and in this basis $L_0$ takes the form

$$L_0 = h(|\Phi\rangle)\, \mathbf{1} + \begin{pmatrix} \tfrac{1}{2}(1 + kf^2) & kf^2 \\[4pt] \dfrac{1 - k^2 f^4}{4kf^2} & \tfrac{1}{2}(1 - kf^2) + \tfrac{1}{2} f^2 \Delta C \end{pmatrix}, \qquad (4.9)$$

where we used (3.11). The associated eigenvalues are

$$h_\pm\big( Q^a_{-1}|\Phi\rangle, P^a_{-1}|\Phi\rangle \big) = h(|\Phi\rangle) + \tfrac{1}{4} \Big( f^2 \Delta C + 2 \pm \sqrt{4 - 4kf^4 \Delta C + f^4 (\Delta C)^2} \Big)\,. \qquad (4.10)$$

11 The pertinence of the difference of Casimirs to the computation of conformal weights was already noticed in [15].

Notice that this result is similar to the large-charge formula found in [16], except that $2(a \cdot \ell)$ has been replaced by $\Delta C$. It is easy to confirm that in the large-charge limit $\Delta C$ indeed becomes $2(a \cdot \ell)$, and the exact formula (4.10) is therefore consistent with the one found in the large-charge limit. Furthermore, it was argued in [16] that only the solution $h_+$ is physical. In fact, due to the identifications between the modes of the algebra, the solution $h_-$ can be interpreted as the application of a barred oscillator with the wrong mode number. On the other hand, only the solution $h_+$ reduces to the correct result $h_+ = h(|\Phi\rangle) + 1$ at the WZW-point $kf^2 = 1$. Hence we will discard the state with eigenvalue $h_-$ from the physical spectrum. It is not clear at this point whether this should be the only effect of the physical constraints on the one-sided worldsheet spectrum. A similar analysis can be performed for the spectrum at the $n$-th level of (4.2). We do not make use of it in the following analysis, but we have included it for the sake of completeness in Appendix B.

4.2 A unitarity bound

There is one very interesting consequence of (4.10). Classically, we know from (3.1) that $-1 \leq kf^2 \leq 1$, and we will see that this also holds at the quantum level, assuming that $k \geq 2$.12 According to [30, eq.
(6.3)], for $k \geq 2$ the spectrum of the sigma-model on psu(1,1|2) should contain the representation $R_0 = \big( j, \ell = \tfrac{k}{2} - 1 \big)$, where $j$ is the $sl(2,\mathbb{R})$-spin and $\ell = \tfrac{k}{2} - 1$ the $su(2)$-spin; see also Appendix A for the conventions of psu(1,1|2). In this way, we can choose

$$R_1 = \big( j, \tfrac{k}{2} \big) \subset \big( j, \tfrac{k}{2} - 1 \big) \otimes \text{ad}\,. \qquad (4.11)$$

This choice of representations yields $\Delta C = 2k$, and inserting this into (4.10) we obtain the following conformal weight of the excited state:

$$h = \tfrac{1}{2} \Big( kf^2 + 1 + \sqrt{1 - k^2 f^4} \Big)\,. \qquad (4.12)$$

An obvious requirement of any CFT is that the conformal weights are real. We see that this is only the case provided that

$$-1 \leq kf^2 \leq 1\,. \qquad (4.13)$$

4.3 Continuous representations

We found that the conformal weight of states constructed with a single oscillator depends on the difference of Casimirs $\Delta C$ between the ground state representation and the representation of the state. Consider then a ground state representation with $su(2)$-spin $\ell$ and $sl(2,\mathbb{R})$-spin $j = \tfrac{1}{2} + ip$, i.e. the $sl(2,\mathbb{R})$ part transforms in a continuous representation. Its Casimir is then $C = -2j(j-1) + 2\ell(\ell+1) = \tfrac{1}{2} + 2p^2 + 2\ell(\ell+1)$. At the first excitation level, states in the representations with $sl(2,\mathbb{R})$-spin $j-1$, $j$ and $j+1$ appear. The respective differences of Casimirs are

$$\Delta C = 2 - 4ip\,, \quad 0\,, \quad \text{and} \quad 2 + 4ip\,. \qquad (4.14)$$

Plugging this result into the formula for the conformal weight at level one, (4.10), we realise that for $p \neq 0$ the conformal weights generated by the charged oscillators become generically complex. Since the appearance of complex conformal weights implies that the energy-momentum tensor is not self-adjoint in these representations, these representations are forbidden and hence cannot be part of the spectrum. The only exception to this statement is the WZW-point, where the conformal dimensions do not depend explicitly on the difference of Casimirs $\Delta C$. This result should continue to hold once we consider complete representations of the mode algebra, and not just of its 'chiral' version.
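The statements above lend themselves to a quick numerical cross-check. The snippet below (our own illustration, not part of the paper) diagonalises the explicit 2×2 matrix (4.9), confirms the closed form (4.10) and the WZW value $h_+ = h(|\Phi\rangle) + 1$, verifies that for $\Delta C = 2k$ the weight (4.12) is real exactly when $kf^2 \leq 1$, and shows that the continuous-representation spin $j = \tfrac{1}{2} + ip$ gives a real Casimir but complex level-one weights away from the WZW-point:

```python
import cmath

def casimir(j, l=0):
    # sl(2,R) + su(2) Casimir in the paper's conventions: C = -2j(j-1) + 2l(l+1)
    return -2 * j * (j - 1) + 2 * l * (l + 1)

def h_plus(h0, k, f, dC):
    # closed-form eigenvalue h_+ of (4.10); complex arithmetic makes a
    # complex conformal weight visible as a nonzero imaginary part
    r = cmath.sqrt(4 - 4 * k * f**4 * dC + f**4 * dC**2)
    return h0 + (f**2 * dC + 2 + r) / 4

def h_plus_matrix(h0, k, f, dC):
    # larger eigenvalue of the explicit 2x2 matrix (4.9)
    a11 = h0 + (1 + k * f**2) / 2
    a12 = k * f**2
    a21 = (1 - k**2 * f**4) / (4 * k * f**2)
    a22 = h0 + (1 - k * f**2) / 2 + f**2 * dC / 2
    return (a11 + a22) / 2 + cmath.sqrt(((a11 - a22) / 2)**2 + a12 * a21)

# (i) closed form (4.10) agrees with diagonalising (4.9) at a generic point
assert abs(h_plus(0.7, 3, 0.5, 6) - h_plus_matrix(0.7, 3, 0.5, 6)) < 1e-12
# (ii) WZW-point k f^2 = 1: h_+ = h0 + 1, independently of Delta C
for dC in (2, 5, 2 + 3j):
    assert abs(h_plus(0.7, 4, 0.5, dC) - 1.7) < 1e-12
# (iii) Delta C = 2k: the weight (4.12) is real only for k f^2 <= 1, cf. (4.13)
assert abs(h_plus(0, 3, 0.5, 6).imag) < 1e-12   # k f^2 = 0.75: real
assert abs(h_plus(0, 3, 0.7, 6).imag) > 1e-6    # k f^2 = 1.47: complex
# (iv) continuous representations, j = 1/2 + ip: the Casimir is real ...
p = 0.8
assert abs(casimir(0.5 + 1j * p).imag) < 1e-12
# ... but Delta C = 2 + 4ip makes h_+ complex away from the WZW-point
assert abs(h_plus(0, 3, 0.5, 2 + 4j * p).imag) > 1e-6
```

The sample values of $k$, $f$ and $\Delta C$ are arbitrary; the checks only probe the analytic structure of the formulas.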
Since already the 'chiral' continuous representations contain complex conformal weights, the full representations must be ruled out. Hence we confirm that long strings disappear from the spectrum in a mixed-flux background.

4.4 Missing chiral primaries

We are also in the position to retrieve the chiral primaries that are missing from the spectrum at the WZW-point. In the following we review this phenomenon in the worldsheet description. For simplicity, we focus on the background $AdS_3 \times S^3 \times T^4$. The psu(1,1|2)$_k$ WZW-model has [30] discrete representations $(j, \ell, w)$ with $\tfrac{1}{2} < j < \tfrac{k+1}{2}$ and $\ell \in \{0, \tfrac{1}{2}, \ldots, \tfrac{k-2}{2}\}$, where $w \in \mathbb{Z}$ is the spectral flow number. Every discrete representation of the form $(\ell + 1, \ell, w)$ yields four chiral primary states [3,32-34]. These are the four $sl(2,\mathbb{R}) \oplus su(2)$ representations in (A.8) for which $j - \ell$ is minimal.13 They have the following $su(2)$-spins:

$$\ell + kw\,, \qquad 2 \times \big( \ell + \tfrac{1}{2} + kw \big)\,, \qquad \ell + 1 + kw\,. \qquad (4.15)$$

Combining with the right-movers, we obtain the complete Hodge diamond of $T^4$, with the lowest state having left- and right-moving $su(2)$ spin $\ell + kw$. It has the following form:

$$\begin{array}{ccccc} & & (1,1) & & \\ & 2 \times (\tfrac{1}{2}, 1) & & 2 \times (1, \tfrac{1}{2}) & \\ (0,1) & & 4 \times (\tfrac{1}{2}, \tfrac{1}{2}) & & (1,0) \\ & 2 \times (0, \tfrac{1}{2}) & & 2 \times (\tfrac{1}{2}, 0) & \\ & & (0,0) & & \end{array} \qquad (4.16)$$

where $(\delta, \bar\delta)$ denotes an $su(2) \oplus su(2)$ representation with spins $(\ell + kw + \delta,\ \ell + kw + \bar\delta)$. Note that because of the restriction $\ell \in \{0, \tfrac{1}{2}, \ldots, \tfrac{k-2}{2}\}$, the spin $\ell + kw$ takes values in $\tfrac{1}{2}\mathbb{Z}_{\geq 0} \setminus \big( k\mathbb{Z}_{\geq 0} + \tfrac{k-1}{2} \big)$, and thus every $k$-th Hodge diamond is missing. This was alluded to in Section 2.2 and is what we mean by 'missing chiral primaries'. The absence of the chiral primaries on the worldsheet is caused by the unitarity bounds constraining the worldsheet theory. The main bounds are the restrictions $j < \tfrac{k+1}{2}$ and $\ell \leq \tfrac{k-2}{2}$, whose origin we briefly review in the following. We will only treat the unflowed sector $w = 0$; for comments on the spectrally flowed sectors see the Discussion, Section 5.
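As a small consistency check (ours), the multiplicities in the diamond (4.16) add up to 16, matching the 16 $sl(2,\mathbb{R}) \oplus su(2)$-multiplets of a typical psu(1,1|2) representation listed in (A.8):

```python
# Multiplicities in the Hodge diamond (4.16) of T^4, read row by row:
diamond = [1, 2, 2, 1, 4, 1, 2, 2, 1]
# Multiplet count of a typical psu(1,1|2) representation, cf. (A.8):
# 4 x (j,l), (j+-1,l), (j,l+-1), 2 x (j+-1/2, l+-1/2)
typical = 4 + 2 * 1 + 2 * 1 + 4 * 2
assert sum(diamond) == 16 == typical
```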
Consider the state $J^-_{-1} S^{-++}_0 |j, \ell\rangle$, whose norm at the WZW-point is (see Appendix A):

$$\langle j, \ell |\, S^{+--}_0 J^+_1 J^-_{-1} S^{-++}_0\, | j, \ell \rangle = \big( -2(j - \tfrac{1}{2}) + k \big)\, \langle j - \tfrac{1}{2}, \ell | j - \tfrac{1}{2}, \ell \rangle\,. \qquad (4.17)$$

This norm is non-negative if

$$j \leq \frac{k+1}{2}\,, \qquad (4.18)$$

which is the Maldacena-Ooguri bound. For this to be a unitarity restriction, the state $J^-_{-1} S^{-++}_0 |j\rangle$ has to be physical in string theory, which is in fact the case. This can be seen from the fact that there is no state at level zero with the same quantum numbers.14 Hence all positive modes of uncharged operators have to annihilate the state, and so it lies in particular in the BRST-cohomology of physical states. This is then the most stringent bound possible. In the RNS formalism, it arises from considering the no-ghost theorem in the R-sector. The fact that the R-sector no-ghost theorem yields a stronger bound than the NS-sector version was, to our knowledge, not considered before in the literature. Thus, to fill this gap, we review the proof of the no-ghost theorem at the WZW-point in Appendix C. We explain the very small difference which occurs in the proof of the theorem in the R-sector. Similarly, the unitarity constraint for $su(2)$ representations can be obtained by requiring the norm of the state $K^+_{-1} S^{-++}_0 S^{+++}_0 |j, \ell\rangle$ to be non-negative. This yields

$$\ell \leq \frac{k-2}{2}\,, \qquad (4.19)$$

14 This would not be true for the state $J^-_{-1} |j\rangle$, since at level zero there is a state with the same quantum numbers, namely $S^{-++}_0 S^{--+}_0 |j\rangle$.

which is the familiar bound from the RNS formalism. The considered state is again physical. These are the bounds we mentioned above. Let us now move away from the WZW-point and see how these bounds change. For this we first find the eigenvectors of $L_0$ at the first level, which are $(Q^a_{-1} + b_\pm P^a_{-1}) |\Phi\rangle$, where

$$b_\pm = \frac{\Delta C f^2 - 2kf^2 \pm \sqrt{4 - 4\Delta C\, kf^4 + (\Delta C)^2 f^4}}{4kf^2}\,. \qquad (4.20)$$

These have $L_0$ eigenvalues $h_\pm$ as in (4.10), respectively.
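For reference, the bound (4.18) used above is simply positivity of the prefactor in (4.17); spelled out (a trivial check of ours):

```python
# Norm prefactor in (4.17): -2 (j - 1/2) + k, non-negative iff j <= (k+1)/2.
def norm_prefactor(j, k):
    return -2 * (j - 0.5) + k

k = 5
assert norm_prefactor((k + 1) / 2, k) == 0       # bound saturated: null state
assert norm_prefactor((k + 1) / 2 - 0.5, k) > 0  # allowed spin: positive norm
assert norm_prefactor((k + 1) / 2 + 0.5, k) < 0  # beyond the bound: negative norm
```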
As noted before, only the state with conformal weight $h_+$ is part of the physical spectrum. The analogue of the state $J^-_{-1} S^{-++}_0 |j, \ell\rangle$ in the mixed flux case is

$$\big( J^{Q,-}_{-1} + b_\pm J^{P,-}_{-1} \big)\, S^{Q,-++}_0 |j, \ell\rangle\,, \qquad (4.21)$$

where we use the notation $J^Q$ for the $J$-currents of the $Q$-modes and $J^P$ for the $J$-currents of the $P$-modes. Using the algebra (3.4) and the explicit form of $b_\pm$ with $\Delta C = 4j - 4$, the norm of this state can be computed. Requiring this norm to be non-negative gives the constraint

$$j \leq \frac{k+1}{2} + \frac{1}{2} - \frac{\sqrt{f^4 + k^2 f^4 - 1}}{2f^2} < \frac{k+2}{2}\,, \qquad (4.22)$$

which is less constraining than the usual bound (4.18), and reduces to it at $kf^2 = 1$. We see that the bound changes slightly when going away from the WZW-point, but nothing spectacular happens. The situation is entirely different when looking at the corresponding state for the $su(2)$-spin bound,

$$|\Psi\rangle \equiv \big( K^{Q,+}_{-1} + b_\pm K^{P,+}_{-1} \big)\, S^{Q,-++}_0 S^{Q,+++}_0 |j, \ell\rangle\,. \qquad (4.23)$$

Asking for $|\Psi\rangle$ to have positive norm led, at the WZW-point, to the constraint $\ell \leq \tfrac{k-2}{2}$, which in turn excluded the missing chiral primary at $\ell = \tfrac{k-1}{2}$ from the spectrum. Now we find that the norm of this state is in general15

$$\langle \Psi | \Psi \rangle = \pm \sqrt{f^{-4} - 4(\ell+1)(k-\ell-1)} \ \xrightarrow{\ \text{WZW-point}\ }\ \pm \sqrt{(k - 2\ell - 2)^2} = \pm\, |k - 2\ell - 2|\,. \qquad (4.24)$$

As indicated, the term under the square root becomes a perfect square at the WZW-point. From this expression it is not clear which sign should be chosen in the last equality, but from the WZW-description we know that we should take the positive sign. The two branches of the norm of $|\Psi\rangle$ are plotted in Figure 2. We see that away from the WZW-point the two branches no longer cross. In particular, the first branch always has positive norm, and there is no unitarity bound on $\ell$! In summary, away from the pure NS-NS point we found that the upper bound on $j$ is slightly shifted upwards, but always remains strictly less than $\tfrac{k+2}{2}$. On the other hand, the bound on $\ell$ completely disappears. This has the following consequences for the chiral primaries.
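The 'avoided crossing' in (4.24) rests on two elementary facts: at the WZW-point $f^{-4} = k^2$ the radicand is the perfect square $(k - 2\ell - 2)^2$, vanishing at $\ell = \tfrac{k-2}{2}$, while for $f^{-4} > k^2$ (which holds away from the pure NS-NS point, assuming the flux relation (3.1)) it stays strictly positive for every $\ell$, since $4(\ell+1)(k-\ell-1) \leq k^2$. A quick numerical check (ours):

```python
# Radicand of the norm (4.24): f^{-4} - 4 (l + 1)(k - l - 1).
def radicand(finv4, k, l):
    return finv4 - 4 * (l + 1) * (k - l - 1)

k = 6
spins = [x / 2 for x in range(0, 2 * k)]            # l = 0, 1/2, 1, ...
# WZW-point: f^{-4} = k^2, the radicand is the perfect square (k - 2l - 2)^2
# and the two branches meet exactly at l = (k - 2)/2.
for l in spins:
    assert abs(radicand(k**2, k, l) - (k - 2 * l - 2)**2) < 1e-9
assert radicand(k**2, k, (k - 2) / 2) == 0
# Away from the WZW-point (f^{-4} = k^2 + 1, say) the radicand is strictly
# positive for all l: the branches no longer cross, and the first branch
# has positive norm for every su(2)-spin.
assert all(radicand(k**2 + 1, k, l) > 0 for l in spins)
```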
As we discussed above, chiral primaries come from representations with $\ell = j - 1 \in \tfrac{1}{2}\mathbb{N}_0$. While there is no longer an upper bound on $\ell$, there is such a bound on $j$, which now allows the values $\ell \in \{0, \tfrac{1}{2}, \ldots, \tfrac{k-1}{2}\}$. Thus, we see that there is one new chiral primary compared to the WZW-point, namely $\ell = \tfrac{k-1}{2}$. Combining this with spectral flow, it precisely fills the gaps (2.6) in the BPS spectrum. We conclude that the missing chiral primaries are indeed reinstated by any perturbation away from the WZW-point.

5 Discussion

In this paper we proposed an explicit argument for the expected qualitative behaviour of the spectrum at the singular locus of the moduli space of string theory in $AdS_3$ backgrounds. To perform our computations we relied on the hybrid formalism, and more precisely on the algebraic structure of the PSU(1,1|2) sigma-model. We found that continuous representations are only allowed to exist at the WZW-point. On the other hand, unitarity of the CFT at the WZW-point introduces bounds on the allowed $su(2)$-spins. Once we perturb slightly away from the WZW-point, the upper bound on the $su(2)$-spin disappears completely and additional chiral primaries appear in the string theory spectrum. These relatively simple computations give a mechanism explaining the change in the representation content as the singular locus of the moduli space is crossed. However, there are several intriguing open questions and interesting future directions. While we have explained the change in the representation content of the supergroup WZW-model, we have not presented a convincing argument that the same conclusions hold in the complete worldsheet theory, consisting of the supergroup CFT, a sigma-model on $M_4$, and the ghost couplings of the two constituents. Nevertheless, we believe this to be true for the following reasons.
The complete worldsheet CFT (including the ghosts) still has a left and right PSU(1, 1|2) symmetry, which is all one needs for the mode algebra (3.4) to exist. However, the construction of the Virasoro tensor is then more complicated and involves also the additional fields. While this may correct the conformal weights slightly, it will not modify their analytical structure. In particular, under a generic perturbation the phenomenon of imaginary conformal weights of continuous representations, and of avoided crossing as in Figure 2, will not disappear. Thus, we believe that the same mechanisms continue to hold in the full model. Our arguments were limited in that they involved only single-sided excitations. Clearly, a single-sided excitation is not level-matched and hence is not a physical state of the full string theory. However, due to the mode algebra (3.4) the existence of a single non-unitary state in one representation excludes the whole representation from the physical spectrum. Our computations were performed in the unflowed w = 0 sector of the worldsheet CFT. Spectral flow of the mode algebra was discussed in [16], and it is far more complicated than at the WZW-point. In particular, it mixes barred and unbarred oscillators, so that in order to understand spectral flow one first has to understand states which are excited both on the left and right. This is a difficulty which we have not managed to surmount in this paper. An exception to this statement is given by the affine primary states, which behave in a simple manner under spectral flow. For this reason, spectrally flowing the retrieved w = 0 missing chiral primary will fill the other gaps in the chiral primary spectrum, and all the missing chiral primaries are retrieved. Furthermore, spectrally flowed continuous representations are not allowed in the spectrum for any w, since the unflowed w = 0 continuous representation can be obtained from these by applying a negative amount of spectral flow. 
We have explained the two most drastic changes in the spectrum of string theory on $AdS_3 \times S^3 \times M_4$ when leaving the singular locus. It would be very interesting to extend these results to obtain the complete string theory partition function infinitesimally far away from the singular locus. This would entail understanding the (dis)appearance of all states in the theory, not just the special ones we considered. Having this at hand, one could compute protected quantities which remain constant away from the singular locus. Obviously, since the chiral primary spectrum is discontinuous at the singular locus, the same should be true for the elliptic genus (or the modified elliptic genus of [35]). Therefore, one should be able to quantify this discontinuity in the form of a wall-crossing formula. Finally, it would be interesting to repeat our analysis for the background $AdS_3 \times S^3 \times S^3 \times S^1$, whose spectrum exhibits similar features [34]. There is no hybrid formalism for this background, but one expects that the superalgebra $d(2,1;\alpha)$ can be used to describe string propagation in a background with mixed flux. One easily confirms that the calculations presented in Section 4 continue to hold for this superalgebra, and hence the mechanism for the disappearance of long strings and the appearance of chiral primaries seems to be the same. The structure of the chiral primary spectrum is however far more intricate, and in particular not every BPS state in the twisted sector is related to a BPS state in the unflowed sector by spectral flow. It would be interesting to understand this better.

A Conventions for psu(1,1|2)

The other important class of representations describing long strings are the continuous representations, which are still finite-dimensional for the $su(2)$-part, but are neither highest nor lowest weight representations for the $sl(2,\mathbb{R})$-part. The $sl(2,\mathbb{R})$-representation is then specified by an element $\alpha \in \mathbb{R}/\mathbb{Z}$ together with its Casimir.
The Casimir is commonly parametrised by $C = \tfrac{1}{2} + 2p^2$ for $p \in \mathbb{R}_{\geq 0}$. The parameter $\alpha$ enters the representation by imposing that the $sl(2,\mathbb{R})$-spins take values in $\mathbb{Z} + \alpha$. A representation $(j, \ell)$ is atypical if the BPS bound $j \geq \ell + 1$ is saturated, and it is typical otherwise. A typical representation $(j, \ell)$ consists of the following 16 $sl(2,\mathbb{R}) \oplus su(2)$-multiplets:

$$4\, (j, \ell)\,, \qquad (j \pm 1, \ell)\,, \qquad (j, \ell \pm 1)\,, \qquad 2\, (j \pm \tfrac{1}{2}, \ell \pm \tfrac{1}{2})\,. \qquad (A.8)$$

B The spectrum at the n-th level

In this section we generalise the analysis of Section 4.1 to level-$n$ excitations of the form

$$Q^a_{-n} |\Phi\rangle\,, \qquad P^a_{-n} |\Phi\rangle\,. \qquad (B.1)$$

As we will see, under the action of $L_0$ these states mix with multi-oscillator states such as $f^a{}_{bc}\, Q^b_{-n+1} P^c_{-1} |\Phi\rangle$. However, $L_0$ behaves as follows: under the action of $L_0$ the number of oscillators either increases or stays the same, but never decreases. To prove this assertion, we start with a state of the form

$$g^a{}_{b_1 \cdots b_m}\, J^{b_1}_{-n_1} \cdots J^{b_m}_{-n_m} |\Phi\rangle\,, \qquad (B.2)$$

where each $J^{b_i}_{-n_i}$ can stand for either $Q^{b_i}_{-n_i}$ or $P^{b_i}_{-n_i}$, and we require the state to be at level $n$. Here $g^a{}_{b_1 \cdots b_m}$ is an invariant tensor of $\mathfrak{g}$ built from chains of structure constants of the form

$$g^a{}_{b_1 \cdots b_m} = f^a{}_{b_1 a_1}\, f^{a_1}{}_{b_2 a_2} \cdots\,,$$

together with all possible permutations of the free indices. Such a tensor satisfies

$$g^a{}_{b_1 \cdots b_m}\, \kappa^{b_i b_j} = 0\,, \qquad g^a{}_{b_1 \cdots b_m}\, f^{b_i b_j}{}_c = 0\,, \qquad (B.5)$$

thanks to the vanishing of all Casimirs of the adjoint representation, see [29]. This implies that normal ordering in (B.2) is not relevant: the oscillators can be freely reordered, since the commutator produces structure constants, which vanish because of the second relation in (B.5). We now compute $L_0$ on the state (B.2). Two types of terms appear, corresponding to the two types of terms in the commutation relations (3.9). The first type of terms is linear in the modes and obviously preserves the number of modes. The second type of terms yields the following expression:

$$g^a{}_{b_1 \cdots b_m}\, f^{b_i}{}_{cd}\, J^{b_1}_{-n_1} \cdots J^{b_{i-1}}_{-n_{i-1}}\, (Q^c P^d)_{-n_i}\, J^{b_{i+1}}_{-n_{i+1}} \cdots J^{b_m}_{-n_m} |\Phi\rangle\,. \qquad (B.6)$$

The invariant tensor $g^a{}_{b_1 \cdots b_m} f^{b_i}{}_{cd}$ still has the same property as (B.5), so we may still freely reorder the oscillators.
In the normal-ordered product $(Q^c P^d)_{-n_i}$ in (B.6), either both oscillators have negative modes, or one is a zero-mode (a term with a positive mode vanishes, since we can commute it through to the right, where it then annihilates $|\Phi\rangle$). In the former case we obtain a term with $m+1$ oscillators, whereas in the latter case the zero mode acting on $|\Phi\rangle$ gives a generator $t^c$ or $t^d$, and hence the number of oscillators remains the same. Also, we note that the action of $L_0$ closes on the set of states (B.2); we do not have to consider other invariant tensors. This proves the above assertion that the number of oscillators can never be decreased by the action of $L_0$. When computing the matrix representation of $L_0$ on all level-$n$ states which can be mixed by the action of $L_0$, we hence get the following block structure:

$$\begin{array}{c} 1 \text{ oscillator} \\ 2 \text{ oscillators} \\ 3 \text{ oscillators} \\ \vdots \\ n-1 \text{ oscillators} \\ n \text{ oscillators} \end{array} \begin{pmatrix} \star & 0 & 0 & \cdots & 0 & 0 & 0 \\ \star & \star & 0 & \cdots & 0 & 0 & 0 \\ 0 & \star & \star & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \star & \star & 0 \\ 0 & 0 & 0 & \cdots & 0 & \star & \star \end{pmatrix} \qquad (B.7)$$

Thus, for the purpose of computing the spectrum of $L_0$ on single-oscillator excitations, we can simply ignore multi-oscillator excitations, since they do not contribute to the eigenvalue. They do, however, contribute to the precise eigenvector. With this at hand, the computation is completely analogous to the one in Section 4.1: $L_0$ acts on $Q^a_{-n}|\Phi\rangle$ and $P^a_{-n}|\Phi\rangle$ as

$$L_0 = h(|\Phi\rangle)\, \mathbf{1} + \begin{pmatrix} \tfrac{1}{2}(1 + kf^2)\, n & kf^2\, n \\[4pt] \dfrac{1 - k^2 f^4}{4kf^2}\, n & \tfrac{1}{2}(1 - kf^2)\, n + \tfrac{1}{2} f^2 \Delta C \end{pmatrix}, \qquad (B.8)$$

where we ignored all multi-oscillator terms. The correction to the eigenvalues with respect to the ground state is given by

$$\delta h_\pm\big( Q^a_{-n}|\Phi\rangle, P^a_{-n}|\Phi\rangle \big) = \tfrac{1}{4} \Big( f^2 \Delta C + 2n \pm \sqrt{4n^2 - 4kf^4 n\, \Delta C + f^4 (\Delta C)^2} \Big)\,. \qquad (B.9)$$

We again expect only the positive-sign eigenvalue to be part of the physical spectrum. This reduces again to the BMN-like limit of [16] for large values of the charges. Furthermore, at the pure NS-NS point $kf^2 = 1$ we retrieve the WZW result.
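As with the level-one case, the closed form (B.9) can be cross-checked against direct diagonalisation of the 2×2 block of (B.8), and in particular $\delta h_+ = n$ at the WZW-point; a numerical check of ours:

```python
import math

def dh_closed_form(n, k, f, dC):
    # eigenvalue shifts (B.9): (f^2 dC + 2n +- sqrt(4n^2 - 4k f^4 n dC + f^4 dC^2))/4
    r = math.sqrt(4 * n**2 - 4 * k * f**4 * n * dC + f**4 * dC**2)
    return ((f**2 * dC + 2 * n + r) / 4, (f**2 * dC + 2 * n - r) / 4)

def dh_from_matrix(n, k, f, dC):
    # eigenvalue shifts of the 2x2 block of (B.8) on (Q^a_{-n}|Phi>, P^a_{-n}|Phi>)
    a11 = (1 + k * f**2) * n / 2
    a12 = k * f**2 * n
    a21 = (1 - k**2 * f**4) * n / (4 * k * f**2)
    a22 = (1 - k * f**2) * n / 2 + f**2 * dC / 2
    mean = (a11 + a22) / 2
    r = math.sqrt(((a11 - a22) / 2)**2 + a12 * a21)
    return (mean + r, mean - r)

for n in (1, 2, 5):
    cp, cm = dh_closed_form(n, 3, 0.5, 4)
    mp, mm = dh_from_matrix(n, 3, 0.5, 4)
    assert abs(cp - mp) < 1e-12 and abs(cm - mm) < 1e-12
    # WZW-point k f^2 = 1: the physical eigenvalue shift is exactly n
    wp, _ = dh_closed_form(n, 4, 0.5, 2)
    assert abs(wp - n) < 1e-12
```

The sample values $(k, f, \Delta C)$ are arbitrary; only the agreement of the two computations and the WZW limit are being tested.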
This result makes it seem as if the structure is always this simple. However, once one tries to compute the conformal weight of multi-oscillator excitations, the computations quickly become very complicated.

C Proof of the R-sector no-ghost theorem

We consider the following CFT as a worldsheet theory:

$$sl(2,\mathbb{R})^{(1)}_k \oplus \text{CFT}_{\text{int}}\,, \qquad (C.1)$$

where, as usual, the level of the bosonic $sl(2,\mathbb{R})$ is $k+2$. $\text{CFT}_{\text{int}}$ is some internal $N = 1$ SCFT, which has the correct central charge to give the total central charge $c = 15$. In [17,18] a no-ghost theorem for these theories was proven in the NS-sector. Here we want to fill the gap and prove the no-ghost theorem in the R-sector. This will actually yield a different bound and explains the somewhat mysterious appearance of the Maldacena-Ooguri bound [7] in the literature. Since the proof is almost identical to the NS-sector version, we will only explain the strategy and point out the small but important difference at the end. We denote in the following the worldsheet $N = 1$ superconformal algebra by $L_n$ and $G_r$; since we focus on the R-sector, $n, r \in \mathbb{Z}$. The complete Hilbert space of the worldsheet theory will be denoted by $\mathcal{H}$. It enjoys as usual a natural grading by the eigenvalue of $L_0$; $\mathcal{H}^{(N)}$ denotes the subspace of $\mathcal{H}$ of grade less than or equal to $N$. We define a state $\phi \in \mathcal{H}$ to be physical if it satisfies the physical state conditions

$$L_n \phi = G_r \phi = 0\,, \qquad n, r > 0\,. \qquad (C.2)$$

In string theory, a physical state has to satisfy in addition

$$L_0 \phi = G_0 \phi = 0 \qquad (C.3)$$

and the GSO-projection. We define furthermore the subspace $\mathcal{F} \subset \mathcal{H}$ by the requirements

$$L_n \phi = G_r \phi = J^3_n \phi = \psi^3_r \phi = 0\,, \qquad n, r > 0\,. \qquad (C.4)$$

Here $J^3_n$ is the Cartan generator of the bosonic $sl(2,\mathbb{R})$-algebra and $\psi^3_n$ the corresponding fermion; see [37] for our conventions. Then, analogously to [18], one finds the following basis for $\mathcal{H}^{(N)}$:

Lemma.
For c = 15 and 0 < j < (k+2)/2, the states

|{ε, λ, δ, µ}, f⟩ := (G_{−1})^{ε_1} · · · (G_{−a})^{ε_a} (L_{−1})^{λ_1} · · · (L_{−m})^{λ_m} (ψ^3_{−1})^{δ_1} · · · (ψ^3_{−a})^{δ_a} (J^3_{−1})^{µ_1} · · · (J^3_{−m})^{µ_m} |f⟩ , (C.5)

where f ∈ F is at grade L and ε_b, δ_b ∈ {0, 1}, form a basis of H^(N).

We call a state spurious if it is a linear combination of states of the form (C.5) with λ ≠ 0 or ε ≠ 0. By the Lemma, every physical state φ can be written as a spurious state φ_s plus a linear combination χ of states of the form (C.5) with λ = 0 and ε = 0, i.e.

φ = φ_s + χ . (C.7)

For c = 15, φ_s and χ are separately physical states, and φ_s is therefore null [38]. In parallel to [18], we then have:

Lemma. For 0 < j < (k+2)/2, if χ is a physical state of the form (C.5) with λ = 0 and ε = 0, then χ ∈ F.

So far, everything is exactly the same as in the NS-sector proof. The only small difference appears in the final step, where we use that the coset sl(2, R)/u(1) is unitary in the R-sector. This follows from the N = 2 spectral flow, but under the spectral flow the sl(2, R)-spin gets shifted by 1/2 unit. Indeed, when spectrally flowing the formulas given in [39], one sees that the bounds on j get shifted by 1/2. Thus, we have:

Theorem. For c = 15 and 1/2 < j < (k+1)/2, every physical state φ differs by a spurious physical state from a state in F. Consequently, the norm of every physical state is non-negative.

Proof. By the previous two lemmas, the proof boils down to showing that the R-moded coset sl(2, R)/u(1) is unitary. The NS-moded version of this coset was analysed in detail in [39], where it was found to be unitary provided that 0 < j < (k+2)/2. As explained above, this bound gets shifted by half a unit upon spectrally flowing to the R-sector. This concludes the proof of the no-ghost theorem.

Strictly speaking, we have only demonstrated the sufficiency of this bound. Let us also demonstrate that it is necessary by constructing the relevant state.
This state is exactly identified with the one we used in Section 4.4 in the supergroup language. For this, we look at the sl(2, R)-representation of spin j. The fermionic zero-modes in the R-sector construct a representation on top of this ground state, where some states have J^3_0-eigenvalue j + 1/2 and some have J^3_0-eigenvalue j − 1/2. We pick a state with J^3_0-eigenvalue j − 1/2 and apply the oscillator J^−_{−1}. The resulting state is denoted |Φ_j⟩ ≡ J^−_{−1} |j, m = j − 1/2⟩. This state is clearly annihilated by positive N = 1 Virasoro modes, since there is no state at level zero with the same J^3_0-eigenvalue. Hence it is physical, and its norm is

⟨Φ_j|Φ_j⟩ = −2(j − 1/2) + k . (C.8)

Demanding positivity indeed yields the Maldacena-Ooguri bound j ≤ (k+1)/2.

Figure 1. The structure of the moduli space on the singular locus and when slightly perturbed away from it. On the singular locus (left-hand picture), chiral primaries can escape to the Coulomb branch and are emitted as D1-branes from the system. The Higgs branch and the Coulomb branch are connected by an infinitely long tube, with the string coupling constant blowing up in the middle. Associated with the tube are long strings, which give rise to a continuum in the spectrum. When slightly perturbing the system away from the singular locus (right-hand picture), the moduli space approximation becomes good and the non-renormalization theorem makes the Higgs branch flat. The tube disappears and all chiral primaries are confined to the Higgs branch.

Figure 2. The two branches of the norm of |Ψ⟩. At the WZW-point, the two branches intersect at ℓ = (k−2)/2. For a slight perturbation away from the WZW-point, we have an 'avoided crossing', and the first branch always has positive norm.

…all possible permutations of the free indices. In the expression (B.2), each J^{b_i}_{−n_i} can stand either for Q^{b_i}_{−n_i} or for P^{b_i}_{−n_i}. Moreover, we require that the state is at level n.
Footnotes:

1. Provided that the charge vector is primitive; see the discussion in Section 2.1.
2. In [4] these long strings were associated with divergences of the free energy of the string.
3. There can of course also be mixed branches.
4. The same result can be obtained semi-classically by using the Brown-Henneaux central charge [24], which yields c = 6 Q_1 Q_5. The correction in the K3 case is a supergravity one-loop effect [25].
5. This tube can be described by a Liouville field in the gauge theory, and the energy gaps can be seen to match [3, 20].

Furthermore, since the chiral primaries always come in Hodge diamonds of M_4, the whole diamond will be missing.

The k = 1 theory behaves quite differently. Since su(2)_1 ⊂ psu(1, 1|2)_1 has no affine representation based on the adjoint representation of su(2), the theory cannot have a field in the adjoint representation. In particular, the biadjoint field A^{aā} does not transform in a valid representation of psu(1, 1|2)_1 × psu(1, 1|2)_1 at the WZW-point. Hence it is not clear whether we can deform the model away from the WZW-point. The k = 1 theory at the WZW-point is discussed in [27, 28, 31].

In fact, these representations saturate the psu(1, 1|2) BPS bound and are therefore atypical representations. Thus the representation splits up into four atypical representations, each of which yields one BPS state.

Notice that the state with conformal weight h_− has negative norm and is therefore unphysical, as argued before.

We changed our conventions slightly with respect to [16] to accommodate the fact that one direction is timelike and the others are spacelike. For example, the zero modes generate the representation 2 · (2, 2) of sl(2, R) ⊕ su(2) in the case of AdS_3 × S^3 × T^4.

Acknowledgements

We thank Minjae Cho, Scott Collier, Andrea Dei, Matthias Gaberdiel, Juan Maldacena, Alessandro Sfondrini, Xi Yin and Ida Zadeh for useful conversations, and Matthias Gaberdiel for reading the manuscript prior to publication.
LE thanks Imperial College London for hospitality, where part of this work was done. Our research is supported by the NCCR SwissMAP, funded by the Swiss National Science Foundation.

A The (affine) Lie superalgebra psu(1, 1|2)

The algebra psu(1, 1|2) plays a major rôle when applying the algebraic formalism to string theory, so we recall here the relevant commutation relations; we use them explicitly in Section 4.4. We display the commutation relations for the affine algebra psu(1, 1|2)_k; the commutation relations for the global algebra follow by restricting to the zero-modes. We use a spinor notation for the algebra: the indices α, β, γ denote spinor indices and take values in {±}. The bosonic subalgebra of psu(1, 1|2)_k consists of sl(2, R)_k ⊕ su(2)_k, whose modes we denote by J^a_m and K^a_m, respectively. The fermionic generators are denoted S^{αβγ}_n. They satisfy the commutation relations given in [30, 36]. Here a ∈ {±, 3} denotes an adjoint index of su(2) or sl(2, R). We have chosen the signature such that J^3 is timelike, but J^+ and J^− are spacelike; this is important in the main text, where we compute the norms of states. The constant c_a equals −1 for a = − and 1 otherwise. The sigma-matrices are specified by their non-vanishing components; all other components vanish. The two Cartan generators are chosen to be J^3_0 and K^3_0, and we denote their eigenvalues throughout the text by j and ℓ, respectively. Furthermore, there is a unique (up to rescaling) invariant form on psu(1, 1|2), which can be read off from the central terms.

We consider two kinds of representations for string theory applications. The discrete representations are lowest weight for the sl(2, R)-oscillators, and half-infinite; for su(2), they are finite-dimensional. Hence they are characterised by

J^3_0 |j, ℓ⟩ = j |j, ℓ⟩ , K^3_0 |j, ℓ⟩ = ℓ |j, ℓ⟩ , J^−_0 |j, ℓ⟩ = 0 , K^+_0 |j, ℓ⟩ = 0 ,
J^a_m |j, ℓ⟩ = 0 , m > 0 , K^a_m |j, ℓ⟩ = 0 , m > 0 .
Requiring that the zero-mode representation has no negative-norm states imposes ℓ ∈ ½ Z_{≥0}; j is not quantised. The Casimir of such a representation reads

C(j, ℓ) = −2j(j − 1) + 2ℓ(ℓ + 1) . (A.7)

References

[1] J. M. Maldacena, The Large N limit of superconformal field theories and supergravity, Int. J. Theor. Phys. 38 (1999) 1113 [hep-th/9711200].
[2] O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri and Y. Oz, Large N field theories, string theory and gravity, Phys. Rept. 323 (2000) 183 [hep-th/9905111].
[3] N. Seiberg and E. Witten, The D1/D5 system and singular CFT, JHEP 04 (1999) 017 [hep-th/9903224].
[4] J. M. Maldacena, H. Ooguri and J. Son, Strings in AdS3 and SL(2, R) WZW model. Part 2. Euclidean black hole, J. Math. Phys. 42 (2001) 2961 [hep-th/0005183].
[5] A. Giveon, D. Kutasov and N. Seiberg, Comments on string theory on AdS3, Adv. Theor. Math. Phys. 2 (1998) 733 [hep-th/9806194].
[6] D. Kutasov and N. Seiberg, More comments on string theory on AdS3, JHEP 04 (1999) 008 [hep-th/9903219].
[7] J. M. Maldacena and H. Ooguri, Strings in AdS3 and SL(2, R) WZW model 1.: The Spectrum, J. Math. Phys. 42 (2001) 2929 [hep-th/0001053].
[8] J. M. Maldacena and H. Ooguri, Strings in AdS3 and SL(2, R) WZW model. Part 3. Correlation functions, Phys. Rev. D65 (2002) 106006 [hep-th/0111180].
[9] D. Israel, C. Kounnas and M. P. Petropoulos, Superstrings on NS5 backgrounds, deformed AdS3 and holography, JHEP 10 (2003) 028 [hep-th/0306053].
[10] S. Raju, Counting giant gravitons in AdS3, Phys. Rev. D77 (2008) 046012 [0709.1171].
[11] A. Sfondrini, Towards integrability for AdS3/CFT2, J. Phys. A48 (2015) 023001 [1406.2971].
[12] O. Ohlsson Sax and B. Stefański, Closed strings and moduli in AdS3/CFT2, JHEP 05 (2018) 101 [1804.02023].
[13] N. Berkovits, C. Vafa and E. Witten, Conformal field theory of AdS background with Ramond-Ramond flux, JHEP 03 (1999) 018 [hep-th/9902098].
[14] S. K. Ashok, R. Benichou and J. Troost, Conformal Current Algebra in Two Dimensions, JHEP 06 (2009) 017 [0903.4277].
[15] R. Benichou and J. Troost, The Conformal Current Algebra on Supergroups with Applications to the Spectrum and Integrability, JHEP 04 (2010) 121 [1002.3712].
[16] L. Eberhardt and K. Ferreira, The plane-wave spectrum from the worldsheet, 1805.12155.
[17] S. Hwang, No ghost theorem for SU(1,1) string theories, Nucl. Phys. B354 (1991) 100.
[18] J. M. Evans, M. R. Gaberdiel and M. J. Perry, The no ghost theorem for AdS3 and the stringy exclusion principle, Nucl. Phys. B535 (1998) 152 [hep-th/9806024].
[19] E. Witten, On the conformal field theory of the Higgs branch, JHEP 07 (1997) 003 [hep-th/9707093].
[20] O. Aharony and M. Berkooz, IR dynamics of D = 2, N = (4, 4) gauge theories and DLCQ of 'little string theories', JHEP 10 (1999) 030 [hep-th/9909101].
[21] R. Dijkgraaf, Instanton strings and hyperKähler geometry, Nucl. Phys. B543 (1999) 545 [hep-th/9810210].
[22] F. Larsen and E. J. Martinec, U(1) charges and moduli in the D1-D5 system, JHEP 06 (1999) 019 [hep-th/9905064].
[23] J. M. Maldacena, J. Michelson and A. Strominger, Anti-de Sitter fragmentation, JHEP 02 (1999) 011 [hep-th/9812073].
[24] J. D. Brown and M. Henneaux, Central Charges in the Canonical Realization of Asymptotic Symmetries: An Example from Three-Dimensional Gravity, Commun. Math. Phys. 104 (1986) 207.
[25] M. Beccaria, G. Macorini and A. A. Tseytlin, Supergravity one-loop corrections on AdS7 and AdS3, higher spins and AdS/CFT, Nucl. Phys. B892 (2015) 211 [1412.0489].
[26] A. Dhar, G. Mandal, S. R. Wadia and K. P. Yogendran, D1/D5 system with B-field, noncommutative geometry and the CFT of the Higgs branch, Nucl. Phys. B575 (2000) 177 [hep-th/9910194].
[27] M. R. Gaberdiel and R. Gopakumar, Tensionless String Spectra on AdS3, 1803.04423.
[28] L. Eberhardt, M. R. Gaberdiel and R. Gopakumar, to appear.
[29] M. Bershadsky, S. Zhukov and A. Vaintrob, PSL(n|n) sigma model as a conformal field theory, Nucl. Phys. B559 (1999) 205 [hep-th/9902180].
[30] G. Gotz, T. Quella and V. Schomerus, The WZNW model on PSU(1, 1|2), JHEP 03 (2007) 003 [hep-th/0610070].
[31] G. Giribet, C. Hull, M. Kleban, M. Porrati and E. Rabinovici, Superstrings on AdS3 at k = 1, 1803.04420.
[32] J. R. David, G. Mandal and S. R. Wadia, Microscopic formulation of black holes in string theory, Phys. Rept. 369 (2002) 549 [hep-th/0203048].
[33] G. Giribet, A. Pakman and L. Rastelli, Spectral Flow in AdS3/CFT2, JHEP 06 (2008) 013 [0712.3046].
[34] L. Eberhardt, M. R. Gaberdiel and W. Li, A holographic dual for string theory on AdS3 × S3 × S3 × S1, JHEP 08 (2017) 111 [1707.02705].
[35] J. M. Maldacena, G. W. Moore and A. Strominger, Counting BPS black holes in toroidal Type II string theory, hep-th/9903163.
[36] M. R. Gaberdiel and S. Gerigk, The massless string spectrum on AdS3 × S3 from the supergroup, JHEP 10 (2011) 045 [1107.2660].
[37] K. Ferreira, M. R. Gaberdiel and J. I. Jottar, Higher spins on AdS3 from the worldsheet, JHEP 07 (2017) 131 [1704.08667].
[38] P. Goddard and C. B. Thorn, Compatibility of the Dual Pomeron with Unitarity and the Absence of Ghosts in the Dual Resonance Model, Phys. Lett. 40B (1972) 235.
[39] L. J. Dixon, M. E. Peskin and J. D. Lykken, N = 2 Superconformal Symmetry and SO(2, 1) Current Algebra, Nucl. Phys. B325 (1989) 329.
Christian Konrad ([email protected])
Department of Computer Science, Centre for Discrete Mathematics and its Applications (DIMAP), University of Warwick, Coventry, UK
Abstract. We give a maximal independent set (MIS) algorithm that runs in O(log log ∆) rounds in the congested clique model, where ∆ is the maximum degree of the input graph. This improves upon the O((log ∆ · log log ∆)/√(log n) + log log ∆) rounds algorithm of [Ghaffari, PODC '17], where n is the number of vertices of the input graph. In the first stage of our algorithm, we simulate the first O(n/poly log n) iterations of the sequential random order Greedy algorithm for MIS in the congested clique model in O(log log ∆) rounds. This thins out the input graph relatively quickly: after this stage, the maximum degree of the residual graph is poly-logarithmic. In the second stage, we run the MIS algorithm of [Ghaffari, PODC '17] on the residual graph, which completes in O(log log ∆) rounds on graphs of poly-logarithmic degree.
arXiv:1802.07647 (PDF: https://arxiv.org/pdf/1802.07647v1.pdf)
MIS in the Congested Clique Model in O(log log ∆) Rounds

21 Feb 2018

1 Introduction

The LOCAL and CONGEST Models. The LOCAL [19, 23] and CONGEST [23] models are the most studied computational models for distributed graph algorithms. In these models, a communication network is represented by an n-vertex graph G = (V, E), which also constitutes the input to a computational graph problem. Each vertex (or network node) v ∈ V hosts a computational unit and is identified by a unique ID of Θ(log n) bits. Initially, besides its ID, every vertex knows its neighbors (and their IDs). All network nodes simultaneously commence the execution of a distributed algorithm. Such an algorithm proceeds in synchronous rounds, where each round consists of two phases. In the computation phase, every vertex may execute unlimited computations. This is followed by the communication phase, in which vertices may exchange individual messages with their neighbors.
While message lengths are unbounded in the LOCAL model, in the CONGEST model every message is of length O(log n). The goal is to design algorithms that employ as few communication rounds as possible. The output is typically distributed. For independent set problems, which are the focus of this paper, upon termination of the algorithm every vertex knows whether it participates in the independent set. The LOCAL model provides an abstraction that allows for the study of the locality of a distributed problem, i.e., how far network nodes need to be able to look into the network in order to complete a certain task. In addition to the locality constraint, the CONGEST model also addresses the issue of congestion. For example, while in the LOCAL model network nodes can learn their distance-r neighborhoods in r rounds, this is generally not possible in the CONGEST model due to the limitation of message sizes.

The CONGESTED-CLIQUE Model. In recent years, the CONGESTED-CLIQUE model [20], a variant of the CONGEST model, has received significant attention (e.g. [13,8,14,9,17,4,12,5,11,18]). It differs from the CONGEST model in that every pair of vertices (as opposed to only every pair of adjacent vertices) can exchange messages of size O(log n) in the communication phase. The focus of this model thus lies solely on the issue of congestion, since non-local message exchanges are now possible. This model is at least as powerful as the CONGEST model, and many problems, such as computing a minimum spanning tree [13,9] or computing the size of a maximum matching [17], can in fact be solved much faster than in the CONGEST model. In [8], Ghaffari asks whether any of the classic local problems (maximal independent set (MIS), maximal matching, (∆ + 1)-vertex-coloring, and (2∆ − 1)-edge-coloring) can be solved much faster in the CONGESTED-CLIQUE model than in the CONGEST model, where ∆ is the maximum degree of the input graph.
Ghaffari made progress on this question and gave an O((log ∆ · log log ∆)/√(log n) + log log ∆) rounds MIS algorithm in the CONGESTED-CLIQUE model, while the best known CONGEST model algorithm runs in O(log ∆) + 2^{O(√(log log n))} rounds [7]. This algorithm separates the two models with regards to the MIS problem, since it is known that Ω(min{log ∆ / log log ∆, log n / log log n}) rounds are required for MIS in the CONGEST model [16, 15].

Result. While Ghaffari gave a roughly quadratic improvement over the best CONGEST model MIS algorithm, in this paper we show that an exponential improvement is possible. Our main result is as follows:

Theorem 1 (Main Result). Let G = (V, E) be a graph with maximum degree ∆. There is a randomized algorithm in the CONGESTED-CLIQUE model that operates in (deterministic) O(log log ∆) rounds and outputs a maximal independent set in G with high probability.

Techniques. Ghaffari gave a variant of his MIS algorithm that runs in O(log log ∆) rounds on graphs G with poly-logarithmic maximum degree, i.e., ∆(G) = O(polylog n) (Lemma 2.15 in [8]). To achieve a runtime of O(log log ∆) rounds even on graphs with arbitrarily large maximum degree, we give an O(log log ∆) rounds algorithm that computes an independent set I such that the residual graph G \ Γ_G[I] (Γ_G[I] denotes the inclusive neighborhood of I in G) has poly-logarithmic maximum degree. We then run Ghaffari's algorithm on the residual graph, which completes the independent set computation.

Our algorithm is an implementation of the sequential Greedy algorithm for MIS in the CONGESTED-CLIQUE model. Greedy processes the vertices of the input graph in arbitrary order and adds the current vertex to an initially empty independent set if none of its neighbors have previously been added. The key idea is to simulate multiple iterations of Greedy in O(1) rounds in the CONGESTED-CLIQUE model. A simulation of √n iterations in O(1) rounds can be done as follows: Let v_1, v_2, …, v_n be an arbitrary ordering of the vertices (e.g. by their IDs). Observe that the subgraph G[{v_1, …, v_{√n}}] induced by the first √n vertices has at most n edges. Lenzen gave a routing protocol that can be used to collect these n edges at one distinguished vertex u in O(1) rounds. Vertex u then simulates the first √n iterations of Greedy locally (observe that the knowledge of G[{v_1, …, v_{√n}}] is sufficient to do this) and then notifies the nodes chosen into the independent set about their selection.

The presented simulation can be used to obtain an O(√n) rounds MIS algorithm in the CONGESTED-CLIQUE model. To reduce the number of rounds to O(log log n), we identify a residual sparsity property of the Greedy algorithm: If Greedy processes the vertices in uniform random order, then the maximum degree of the residual graph after having processed the kth vertex is O((n/k) log n) with high probability (Lemma 1). To make use of this property, we will thus first compute a uniform random ordering of the vertices. Then, after having processed the first √n vertices as above, the maximum degree in the residual graph is Õ(√n). This allows us to increase the block size and simulate the next Õ(n^{3/4}) iterations in O(1) rounds: Using the fact that the maximum degree in the residual graph is Õ(√n), it is not hard to see that the subgraph induced by the next Θ(n^{3/4}) random vertices has a maximum degree of Õ(n^{1/4}) with high probability (and thus contains O(n) edges). Pursuing this approach further, we can process Θ(n^{1 − 1/2^i}) vertices in the ith block, since, by the residual sparsity lemma, the maximum degree in the ith residual graph is Õ(n^{1/2^i}). Hence, after having processed O(log log n) blocks, the maximum degree becomes poly-logarithmic. In Section 4, we give slightly more involved arguments that show that O(log log ∆) iterations (as opposed to O(log log n) iterations) are in fact enough.

The Residual Sparsity Property of Greedy.
The author is not aware of any work that exploits or mentions the residual sparsity property of the random order Greedy algorithm for MIS. In the context of correlation clustering in the data streaming model, a similar property of a Greedy clustering algorithm was used in [1] (Lemma 19). Their lemma is in fact strong enough to give the version required in this paper. Since [1] does not provide a proof, and the residual sparsity property is central to the functioning of our algorithm, we give a proof that follows the main idea of [1], adapted to our needs.

Further Related Work. The maximal independent set problem is one of the classic symmetry breaking problems in distributed computing. Without all-to-all communication, Luby [22] and, independently, Alon et al. [2] gave O(log n) rounds distributed algorithms more than 30 years ago. Barenboim et al. [3] improved on this for certain ranges of ∆ and gave an O(log² ∆) + 2^{O(√(log log n))} rounds algorithm. The currently fastest algorithm is by Ghaffari [7] and runs in O(log ∆) + 2^{O(√(log log n))} rounds. The only MIS algorithm designed in the CONGESTED-CLIQUE model is the previously mentioned algorithm by Ghaffari [8]. Ghaffari shows how multiple rounds of a CONGEST model algorithm can be simulated in much fewer rounds in the CONGESTED-CLIQUE model. This is similar to the approach taken in this paper; however, while in our algorithm the simulation of multiple iterations of the sequential Greedy algorithm is performed at one distinguished node, in Ghaffari's algorithm every node participates in the simulation of the CONGEST model algorithm.

Outline. We proceed as follows. First, we give necessary definitions and notation, and we state known results that we employ in this paper (Section 2). We then give a proof of the residual sparsity property of the sequential Greedy algorithm (Section 3). Our O(log log ∆) rounds MIS algorithm is subsequently presented (Section 4), followed by a brief conclusion (Section 5).
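As a concrete illustration of the random order Greedy algorithm and the residual graph it leaves behind, here is a small self-contained Python sketch (our own code, not from the paper; the Erdős-Rényi test instance, the constants, and all function names are illustrative choices):

```python
import math
import random

# Random order Greedy for MIS, together with the residual graph after t
# iterations: the subgraph induced by the vertices that are still
# uncovered, i.e. not in the inclusive neighborhood of I.

def greedy_mis(adj, order, t=None):
    """Run Greedy on the first t vertices of `order`; return (I, covered)."""
    I, covered = set(), set()
    for v in order[:t]:
        if v not in covered:
            I.add(v)
            covered.add(v)          # v itself is now covered ...
            covered.update(adj[v])  # ... and so are all of its neighbors
    return I, covered

def residual_max_degree(adj, covered):
    """Maximum degree of the subgraph induced by the uncovered vertices."""
    uncovered = set(adj) - covered
    return max((len(adj[v] & uncovered) for v in uncovered), default=0)

# A random test graph (adjacency sets).
random.seed(1)
n, p = 500, 0.1
adj = {v: set() for v in range(n)}
for u in range(n):
    for w in range(u + 1, n):
        if random.random() < p:
            adj[u].add(w)
            adj[w].add(u)

order = random.sample(range(n), n)  # uniform random order
I, covered = greedy_mis(adj, order)

assert all(adj[u].isdisjoint(I) for u in I)  # I is independent
assert covered == set(range(n))              # and maximal

# Residual sparsity (Lemma 1): after t iterations the residual maximum
# degree is at most 10 ln(n) * n / t with high probability.
t = 50
_, cov_t = greedy_mis(adj, order, t)
assert residual_max_degree(adj, cov_t) <= 10 * math.log(n) * n / t
```

The final assertion is exactly the bound of Lemma 1 evaluated on one random instance; it is a sanity check, not a proof.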
2 Preliminaries

We assume that G = (V, E) is a simple unweighted n-vertex graph. For a node v ∈ V, we write Γ_G(v) to denote v's (exclusive) neighborhood, and we write deg_G(v) := |Γ_G(v)|. The inclusive neighborhood is defined as Γ_G[v] := Γ_G(v) ∪ {v}. Inclusive neighborhoods are extended to subsets U ⊆ V via Γ_G[U] := ∪_{u∈U} Γ_G[u]. Given a subset of vertices U ⊆ V, the subgraph induced by U is denoted by G[U].

Independent Sets. An independent set I ⊆ V is a subset of pairwise non-adjacent vertices. An independent set I is maximal if for every v ∈ V \ I, the set I ∪ {v} is not an independent set. Given an independent set I, we call the graph G′ = G[V \ Γ_G[I]] the residual graph with respect to I. If clear from the context, we may simply call G′ the residual graph. We say that a vertex u ∈ V is uncovered with respect to I if u is not adjacent to a vertex in I, i.e., u ∈ V \ Γ_G[I]. Again, if clear from the context, we simply say that u is uncovered without specifying I explicitly. Ghaffari gave the following result that we will reuse in this paper:

Theorem 2 (Ghaffari [8]). Let G be an n-vertex graph with ∆(G) = poly log(n). Then there is a distributed algorithm that runs in the CONGESTED-CLIQUE model and computes a MIS on G in O(log log ∆) rounds.

Routing. As a subroutine, our algorithm needs to solve the following simple routing task: Let u ∈ V be an arbitrary vertex. Suppose that every other vertex v ∈ V \ {u} holds 0 ≤ n_v ≤ n messages, each of size O(log n), that it wants to deliver to u. We are guaranteed that Σ_{v∈V} n_v ≤ n. Lenzen proved that in the CONGESTED-CLIQUE model there is a deterministic routing scheme that achieves this task in O(1) rounds [18]. In the following, we will refer to this scheme as Lenzen's routing scheme.

Concentration Bound for Dependent Variables. In the analysis of our algorithm, we require a Chernoff bound for dependent variables (see for example [6]):

Theorem 3 (Chernoff Bound for Dependent Variables, e.g. [6]). Let X_1, X_2, …
, X_n be 0/1 random variables for which there is a p ∈ [0, 1] such that for all k ∈ [n] and all a_1, …, a_{k−1} ∈ {0, 1} the inequality

P[X_k = 1 | X_1 = a_1, X_2 = a_2, …, X_{k−1} = a_{k−1}] ≤ p

holds. Let further µ ≥ p · n. Then, for every δ > 0:

P[ Σ_{i=1}^{n} X_i ≥ (1 + δ)µ ] ≤ ( e^δ / (1 + δ)^{1+δ} )^µ .

Last, we say that an event occurs with high probability if the probability that the event does not occur is at most 1/n.

3 Sequential Random Order Greedy Algorithm for MIS

The Greedy algorithm for maximal independent set processes the vertices of the input graph in arbitrary order. It adds the current vertex under consideration to an initially empty independent set I if none of its neighbors has previously been added. This algorithm progressively thins out the input graph, and the rate at which the graph loses edges depends heavily on the order in which the vertices are considered. If the vertices are processed in uniform random order (Algorithm 1), then the number of edges in the residual graph decreases relatively quickly. A variant of the next lemma was proved in [1] in the context of correlation clustering in the streaming model:

Lemma 1. Let t be an integer with 1 ≤ t < n. Let U_i be the set U at the beginning of iteration i of Algorithm 1. Then with probability at least 1 − n^{−9} the following holds:

∆(G[U_t]) ≤ 10 ln(n) · (n/t) .

Proof. Fix an arbitrary index j ≥ t. We will prove that either vertex v_j is not in U_t, or it has at most 10 ln(n) · (n/t) neighbors in G[U_t], with probability at least 1 − n^{−10}. The result then follows by a union bound over the error probabilities of all n vertices.

We consider the following process in which the random order of the vertices is determined: First, reveal v_j. Then, reveal the vertices v_i just before iteration i of the algorithm. Let N_i = Γ_G(v_j) ∩ U_i be the set of neighbors of v_j that are uncovered at the beginning of iteration i, and let d_i = |N_i|. For every 1 ≤ i ≤ t − 1, the following holds:

P[v_i ∈ N_i | v_j, v_1, …
, v i−1 ] = d i n − 1 − (i − 1) ≥ d i n , since v i can be one of the not yet revealed n − 1 − (i − 1) vertices. We now distinguish two cases. First, suppose that d t−1 ≤ 10 ln(n) n t . Then the result follows immediately since, by construction, d t ≤ d t−1 (the sequence (d i ) i is decreasing). Suppose next that d t−1 > 10 ln(n) n t . Then, we will prove that with high probability there is one iteration i ′ ≤ t − 1 in which a neighbor of v j is considered by the algorithm, i.e., v i ′ ∈ N i ′ . This in turn implies that v j is not in U t . We have: P [∀i < t : v i / ∈ N i | v j ] ≤ i<t P [v i / ∈ N i | v j , v 1 , . . . , v i−1 ] ≤ i<t (1 − d i n ) ≤ (1 − d t−1 n ) t−1 ≤ e d t−1 (t−1) n ≤ n −10 . ⊓ ⊔ MIS Algorithm in the Congest Clique Model Algorithm Our MIS algorithm, depicted in Algorithm 2, consists of three parts: First, all vertices agree on a uniform random order as follows. The vertex with the smallest ID choses a uniform random order locally and informs all other vertices about their positions within the order. Then, all vertices broadcast their positions to all other vertices. As a result, all vertices know the entire order. Let v 1 , v 2 , . . . , v n be this order. Next, we simulate Greedy until the maximum degree of the residual graph is at most log 4 n (this bound is chosen only for convenience; any poly-logarithmic number in n is equally suitable). To this end, in each iteration of the while-loop, we first determine a number k as a function of the maximum degree ∆(G ′ ) of the current residual graph G ′ so that the subgraph of G ′ induced by the yet uncovered vertices of {v 1 , . . . , v k } has at most n edges w.h.p. (see Lemma 3). Using Lenzen's routing protocol, these edges are collected at vertex v 1 , which continues the simulation of Greedy up to iteration k. It then informs the chosen vertices about their selection, who in turn inform their neighbors about their selection. 
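As a centralized sanity check of this batch-wise simulation strategy, the following sketch runs random-order Greedy in prefix batches whose lengths mirror the choice k = n/(√∆ · C) analyzed below. This is plain Python, not CONGESTED-CLIQUE code: all function and variable names are my own, and the random-graph instance is purely illustrative.

```python
import math
import random

def greedy_prefix(order, adj, uncovered, lo, hi):
    """Run sequential Greedy on order[lo:hi]: select a vertex whenever it is
    still uncovered; 'uncovered' is updated in place."""
    selected = set()
    for v in order[lo:hi]:
        if v in uncovered:
            selected.add(v)
            uncovered -= adj[v] | {v}  # cover v and its neighbors
    return selected

def batched_random_order_greedy(adj, C=5, seed=0):
    """Centralized mock-up of the batch-wise Greedy simulation: while the
    residual maximum degree is large, advance the simulation up to position
    k = n / (C * sqrt(max_deg)); afterwards finish Greedy on the full order."""
    n = len(adj)
    order = list(adj)
    random.Random(seed).shuffle(order)
    uncovered = set(adj)
    I = set()
    k = 0
    max_deg = max((len(adj[v]) for v in adj), default=0)
    while k < n and max_deg > math.log(n) ** 4:
        k_new = min(n, max(k + 1, int(n / (C * math.sqrt(max_deg)))))
        I |= greedy_prefix(order, adj, uncovered, k, k_new)
        k = k_new
        # Residual degrees are taken within the uncovered vertices only.
        max_deg = max((len(adj[v] & uncovered) for v in uncovered), default=0)
    I |= greedy_prefix(order, adj, uncovered, k, n)  # residual degree is small now
    return I

# Illustrative random graph instance.
n, p = 300, 0.2
rng = random.Random(1)
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)

I = batched_random_order_greedy(adj)
assert all(adj[u].isdisjoint(I) for u in I)   # I is independent
assert all(I & (adj[v] | {v}) for v in adj)   # I is maximal: every vertex is covered
```

Regardless of whether the while-loop is active (it only triggers once the threshold exceeds the residual degree), the returned set is a maximal independent set; the batching merely controls how much of the random order is processed per round, which is the invariant the distributed algorithm maintains.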
Vertices then compute the new residual graph and its maximum degree, and proceed with the next iteration of the while-loop. We prove in Lemma 2 that only O(log log ∆) iterations of the while-loop are necessary until ∆(G′) drops below log⁴ n. Last, we run Ghaffari's algorithm on G′, which completes the maximal independent set computation.

Input: G = (V, E) is an n-vertex graph
1. Let v_1, v_2, …, v_n be a uniform random ordering of V
2. I ← {}, U ← V (U is the set of uncovered elements)
3. for i ← 1, 2, …, n do: if v_i ∈ U then I ← I ∪ {v_i}, U ← U \ Γ_G[v_i]
4. return I
Algorithm 1. Random order Greedy algorithm for MIS.

Input: G = (V, E) is an n-vertex graph with maximum degree ∆ := ∆(G)
Set parameter C = 5
1. Nodes agree on random order. All vertices exchange their IDs in one round. Let u ∈ V be the vertex with the smallest ID. Vertex u chooses a uniform random order of V and informs every vertex v ∈ V \ {u} about its position r_v within the order. Then, every vertex v ∈ V broadcasts r_v to all other vertices. As a result, all vertices know the order. Let v_1, v_2, …, v_n be the resulting order.
2. Simulate sequential Greedy. Every vertex v_i sets u_i ← true, indicating that v_i is uncovered. Let G′ := G. Every vertex v_i broadcasts deg_{G′}(v_i) to all other vertices so that every vertex knows ∆(G′).
while ∆(G′) > log⁴ n do
  (a) Set k := n / (√(∆(G′)) · C).
  (b) Every vertex v_i with u_i = true and i ≤ k sends all its incident edges v_i v_j with u_j = true and j < i to v_1 using Lenzen's routing protocol in O(1) rounds.
  (c) Vertex v_1 knows the subgraph H of uncovered vertices v_j with j ≤ k, i.e., H := G′[{v_j : j ≤ k and u_j = true}]. It continues the simulation of Greedy up to iteration k using H. Let I′ be the vertices selected into the independent set.
  (d) Vertex v_1 informs nodes I′ about their selection in one round. Nodes I′ inform their neighbors about their selection in one round.
  (e) Every node v_i ∈ Γ_G[I′] sets u_i ← false.
  (f) Let G′ := G[{v_i ∈ V : u_i = true}]. Every vertex v_i broadcasts u_i to all other vertices. Then every vertex v_i computes deg_{G′}(v_i) locally and broadcasts deg_{G′}(v_i) to all other vertices. As a result, every vertex knows ∆(G′).
end while
3. Run Ghaffari's algorithm. Run Ghaffari's MIS algorithm on G′ in O(log log ∆) rounds.
Algorithm 2. O(log log ∆) rounds MIS algorithm in the CONGESTED-CLIQUE model.

Analysis

Let G′_i denote the graph G′ at the beginning of iteration i of the while-loop. Notice that G′_1 = G. Let ∆_i := ∆(G′_i) and let k_i = n/(√∆_i · C) be the value of k in iteration i. Observe that the while-loop is only executed if ∆_i > log⁴ n, and hence

k_i ≤ n / (log² n · C)   (1)

holds for every iteration i of the while-loop. Further, let H_i be the graph H in iteration i of the while-loop.

To establish the runtime of our algorithm, we need to bound the number of iterations of the while-loop. To this end, in the next lemma we bound ∆_i for every 1 ≤ i ≤ n and conclude that ∆_j ≤ log⁴ n for some j ∈ O(log log ∆).

Lemma 2. With probability at least 1 − n⁻⁸, for every i ≤ n, the maximum degree in G′_i is bounded as follows:

∆_i ≤ ∆^{1/2^{i−1}} · 100C² ln² n .

Proof. We prove the statement by induction. Observe that ∆_1 = ∆, and the statement is thus trivially true for i = 1. Suppose that the statement holds up to some index i − 1. Recall that G′_i is the residual graph obtained by running Greedy on vertices v_1, …, v_{k_{i−1}}. Hence, by applying Lemma 1, the following holds with probability 1 − n⁻⁹:

∆_i ≤ 10 ln(n) · n/k_{i−1} = 10 ln(n) · n / (n/(√∆_{i−1} · C)) = √∆_{i−1} · 10C ln n .

Resolving the recursion, we obtain

∆_i ≤ ∆^{1/2^{i−1}} · ∏_{j=0}^{i−2} (10C ln n)^{1/2^j} = ∆^{1/2^{i−1}} · (10C ln n)^{Σ_{j=0}^{i−2} 1/2^j} ≤ ∆^{1/2^{i−1}} · 100C² ln² n .

Observe that we invoked Lemma 1 n times. Thus, by the union bound, the result holds with probability 1 − n⁻⁸. ⊓⊔

Corollary 1. ∆_i = O(log² n) for some i ∈ O(log log ∆).

To establish correctness of the algorithm, we need to ensure that we can apply Lenzen's routing protocol to collect the edges of H_i at vertex v_1. For this to be feasible, we need to prove that, for every i, H_i contains at most n edges with high probability.

Lemma 3. With probability at least 1 − n⁻⁹, graph H_i has at most n edges.

Proof. Let U_i be the vertex set of G′_i, i.e., the set of uncovered vertices at the beginning of iteration i. We will prove that, with probability at least 1 − n⁻¹⁰, for every v_j ∈ U_i the following holds:

d(v_j) := |Γ_{G′_i}(v_j) ∩ {v_{k_{i−1}+1}, …, v_{k_i}}| ≤ n/k_i .   (2)

Since the vertex set of H_i is a subset of at most k_i − k_{i−1} ≤ k_i vertices of U_i, the result follows by applying the union bound on the error probabilities for every vertex of G′_i.

To prove Inequality 2, observe that graph G′_i is solely determined by vertices v_1, v_2, …, v_{k_{i−1}}, and the execution of the algorithm so far was not affected by the outcome of the random variables v_{k_{i−1}+1}, …, v_n. Thus, by the principle of deferred decision, for every k_{i−1}+1 ≤ l ≤ k_i, vertex v_l can be seen as a uniform random vertex chosen from V \ {v_1, …, v_{l−1}}. For 1 ≤ l ≤ k_i − k_{i−1}, let X_l be the indicator variable of the event "v_{k_{i−1}+l} ∈ Γ_{G′_i}(v_j)". Observe that d(v_j) = Σ_l X_l and

E[d(v_j)] = deg_{G′_i}(v_j) · (k_i − k_{i−1}) / (n − k_{i−1}) ≤ deg_{G′_i}(v_j) · k_i / n .   (3)

Furthermore, observe that for every 1 ≤ l ≤ k_i − k_{i−1} and all a_1, …, a_{l−1} ∈ {0, 1}, the inequality

P[X_l = 1 | X_1 = a_1, X_2 = a_2, …, X_{l−1} = a_{l−1}] ≤ deg_{G′_i}(v_j) / (n − k_i) ≤ 2 · deg_{G′_i}(v_j) / n

holds (using the bound k_i ≤ n/2, which follows from Inequality 1), since in the worst case we have a_1 = a_2 = ⋯ = a_{l−1} = 0, which implies that there are deg_{G′_i}(v_j) choices left out of at least n − k_i possibilities such that X_l = 1. We can thus use the Chernoff bound for dependent variables as stated in Theorem 3 in order to bound the probability that d(v_j) deviates from its expectation. We distinguish two cases.

First, suppose that E[d(v_j)] ≥ 4 log n. Then, by Theorem 3 (setting µ = 2E[d(v_j)] and δ = 8),

P[d(v_j) ≥ 18 · E[d(v_j)]] ≤ ( e⁸ / (1 + 8)^{1+8} )^{8 log n} ≤ n⁻¹⁰ .

Thus, using Inequality 3, with high probability,

d(v_j) ≤ 18 · E[d(v_j)] ≤ 18 · deg_{G′_i}(v_j) · k_i/n ≤ 18 · ∆_i / (√∆_i · C) ≤ 18 · n / (k_i · C²) ≤ n/k_i ,

since C ≥ 5.

Suppose now that E[d(v_j)] < 4 log n. Then, by Theorem 3 (setting µ = 8 log n and δ = 8), P[d(v_j) ≥ 72 log n] ≤ n⁻¹⁰, by the same calculation as above. Since k_i ≤ n/(log² n · C) (Inequality 1), we have d(v_j) ≤ n/k_i, which completes the proof. ⊓⊔

Theorem 1 (restated). Algorithm 2 operates in O(log log ∆) rounds in the CONGESTED-CLIQUE model and outputs a maximal independent set with high probability.

Proof. Concerning the correctness of the algorithm, the only non-trivial step is the collection of graph H_i at vertex v_1. This is achieved using Lenzen's routing protocol, which can be applied since we proved in Lemma 3 that graph H_i has at most n edges with high probability.

Concerning the runtime, Step 1 of the algorithm requires O(1) communication rounds. Observe that every iteration of the while-loop requires O(1) rounds. The while-loop terminates in O(log log ∆) rounds with high probability, by Corollary 1. Since Ghaffari's algorithm requires O(log log ∆′) = O(log log ∆) rounds, where ∆′ is the maximum degree in the residual graph as computed in the last iteration of the while-loop (or, in case ∆ < log⁴ n, then ∆′ = ∆), the overall runtime is bounded by O(log log ∆). ⊓⊔

Conclusion

In this paper, we gave an O(log log ∆) rounds MIS algorithm that runs in the CONGESTED-CLIQUE model. We simulated the sequential random order Greedy algorithm, exploiting the residual sparsity property of Greedy. It is conceivable that the round complexity can be reduced further; there are no lower bounds known for MIS in the CONGESTED-CLIQUE model. Results on other problems, such as the minimum weight spanning tree problem, where the O(log log n) rounds algorithm of Lotker et al. [21] has subsequently been improved to O(log log log n) rounds [10], O(log* n) rounds [9], and finally to O(1) rounds [13], give hope that similar improvements may be possible for MIS as well. Can we simulate other centralized Greedy algorithms in few rounds in the CONGESTED-CLIQUE model?

Footnotes
1. This lower bound even holds in the LOCAL model.
2. This variant works in fact on graphs with maximum degree bounded by 2^(c·√(log n)), for a sufficiently small constant c, but a poly-logarithmic degree bound is sufficient for our purposes.
3. We use the notation Õ(·), which equals the usual O(·) notation where all poly-logarithmic factors are ignored.
4. The authors of [1] kindly shared an extended version of their paper with me.

Acknowledgements. The author thanks Amit Chakrabarti, Anthony Wirth, and Graham Cormode for discussions about the residual sparsity property of the clustering algorithm given in [1].

References
1. Ahn, K.J., Cormode, G., Guha, S., McGregor, A., Wirth, A.: Correlation clustering in data streams. In: Proceedings of the 32nd International Conference on Machine Learning (ICML'15), vol. 37, pp. 2237-2246. JMLR.org (2015), http://dl.acm.org/citation.cfm?id=3045118.3045356
2. Alon, N., Babai, L., Itai, A.: A fast and simple randomized parallel algorithm for the maximal independent set problem. J.
Algorithms 7(4), 567-583 (1986), http://dx.doi.org/10.1016/0196-6774(86)90019-2
3. Barenboim, L., Elkin, M., Pettie, S., Schneider, J.: The locality of distributed symmetry breaking. J. ACM 63(3), 20:1-20:45 (2016), http://doi.acm.org/10.1145/2903137
4. Censor-Hillel, K., Kaski, P., Korhonen, J.H., Lenzen, C., Paz, A., Suomela, J.: Algebraic methods in the congested clique. In: PODC '15, pp. 143-152. ACM (2015), http://doi.acm.org/10.1145/2767386.2767414
5. Drucker, A., Kuhn, F., Oshman, R.: On the power of the congested clique model. In: PODC '14, pp. 367-376. ACM (2014), http://doi.acm.org/10.1145/2611462.2611493
6. Fanghänel, A., Kesselheim, T., Vöcking, B.: Improved algorithms for latency minimization in wireless networks. Theor. Comput. Sci. 412(24), 2657-2667 (2011), http://dx.doi.org/10.1016/j.tcs.2010.05.004
7. Ghaffari, M.: An improved distributed algorithm for maximal independent set. In: SODA '16, pp. 270-277. SIAM (2016), http://dl.acm.org/citation.cfm?id=2884435.2884455
8. Ghaffari, M.: Distributed MIS via all-to-all communication. In: PODC '17, pp. 141-149. ACM (2017), http://doi.acm.org/10.1145/3087801.3087830
9. Ghaffari, M., Parter, M.: MST in log-star rounds of congested clique. In: PODC '16, pp. 19-28. ACM (2016), http://doi.acm.org/10.1145/2933057.2933103
10. Hegeman, J.W., Pandurangan, G., Pemmaraju, S.V., Sardeshmukh, V.B., Scquizzato, M.: Toward optimal bounds in the congested clique: Graph connectivity and MST. In: PODC '15, pp. 91-100. ACM (2015), http://doi.acm.org/10.1145/2767386.2767434
11. Hegeman, J.W., Pemmaraju, S.V.: Lessons from the congested clique applied to MapReduce. In: Halldórsson, M.M. (ed.) Structural Information and Communication Complexity, pp. 149-164. Springer, Cham (2014)
12. Hegeman, J.W., Pemmaraju, S.V., Sardeshmukh, V.B.: Near-constant-time distributed algorithms on a congested clique. In: Kuhn, F. (ed.) Distributed Computing, pp. 514-530. Springer, Berlin, Heidelberg (2014)
13. Jurdzinski, T., Nowicki, K.: MST in O(1) rounds of congested clique. In: SODA 2018, pp. 2620-2632 (2018), https://doi.org/10.1137/1.9781611975031.167
14. Korhonen, J.H., Suomela, J.: Brief announcement: Towards a complexity theory for the congested clique. In: 31st International Symposium on Distributed Computing (DISC 2017), pp. 55:1-55:3 (2017), https://doi.org/10.4230/LIPIcs.DISC.2017.55
15. Kuhn, F., Moscibroda, T., Wattenhofer, R.: Local computation: Lower and upper bounds. J. ACM 63(2), 17:1-17:44 (2016), http://doi.acm.org/10.1145/2742012
16. Kuhn, F., Moscibroda, T., Wattenhofer, R.: What cannot be computed locally! In: PODC '04, pp. 300-309. ACM (2004), http://doi.acm.org/10.1145/1011767.1011811
17. Le Gall, F.: Further algebraic algorithms in the congested clique model and applications to graph-theoretic problems. In: Gavoille, C., Ilcinkas, D. (eds.) Distributed Computing, pp. 57-70. Springer, Berlin, Heidelberg (2016)
18. Lenzen, C.: Optimal deterministic routing and sorting on the congested clique. In: PODC '13, pp. 42-50. ACM (2013), http://doi.acm.org/10.1145/2484239.2501983
19. Linial, N.: Distributive graph algorithms-global solutions from local data. In: 28th Annual Symposium on Foundations of Computer Science (FOCS 1987), pp. 331-335 (1987), https://doi.org/10.1109/SFCS.1987.20
20. Lotker, Z., Patt-Shamir, B., Pavlov, E., Peleg, D.: Minimum-weight spanning tree construction in O(log log n) communication rounds. SIAM J. Comput. 35(1), 120-131 (2005), https://doi.org/10.1137/S0097539704441848
21. Lotker, Z., Pavlov, E., Patt-Shamir, B., Peleg, D.: MST construction in O(log log n) communication rounds. In: SPAA '03, pp. 94-100. ACM (2003), http://doi.acm.org/10.1145/777412.777428
22. Luby, M.: A simple parallel algorithm for the maximal independent set problem. In: STOC '85, pp. 1-10. ACM (1985), http://doi.acm.org/10.1145/22145.22146
23. Peleg, D.: Distributed Computing: A Locality-Sensitive Approach. SIAM, Philadelphia, PA, USA (2000)